## ⚠️ Notice: Limited Maintenance

This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.

# Workflow Inference API

The Workflow Inference API listens on port 8080 and is accessible only from localhost by default. To change this default, see TorchServe Configuration.
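For example, the bind address can be changed in TorchServe's `config.properties` via the `inference_address` property, which covers the inference (and workflow inference) port. The value below is illustrative, not a recommendation:

```properties
# Bind the inference API to all interfaces instead of localhost only.
# Illustrative value; adjust the host and port for your deployment.
inference_address=http://0.0.0.0:8080
```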

The TorchServe server supports the following APIs:

## Predictions API

To get predictions from a workflow, make a REST call to `/wfpredict/{workflow_name}`:

```
POST /wfpredict/{workflow_name}
```

### curl Example

```bash
curl -O https://raw.githubusercontent.com/pytorch/serve/master/docs/images/kitten_small.jpg

curl http://localhost:8080/wfpredict/myworkflow -T kitten_small.jpg
```

The result is a JSON object containing the response bytes from the leaf node of the workflow DAG.
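The same call can be sketched in Python using only the standard library. The helper names (`wf_predict_url`, `wf_predict`) and the workflow name `myworkflow` are assumptions for illustration, not part of TorchServe:

```python
import urllib.request


def wf_predict_url(workflow_name: str,
                   host: str = "localhost", port: int = 8080) -> str:
    """Build the /wfpredict endpoint URL for a given workflow."""
    return f"http://{host}:{port}/wfpredict/{workflow_name}"


def wf_predict(workflow_name: str, payload: bytes) -> bytes:
    """POST raw request bytes (e.g. an image file) and return the response body."""
    req = urllib.request.Request(
        wf_predict_url(workflow_name), data=payload, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


if __name__ == "__main__":
    # Assumes a running TorchServe instance with a registered
    # workflow named "myworkflow" and the sample image downloaded above.
    with open("kitten_small.jpg", "rb") as f:
        print(wf_predict("myworkflow", f.read()))
```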