**cpp/README.md**
* cmake version: 3.18+
## Installation and Running TorchServe CPP
These installation instructions assume that TorchServe is already installed through pip/conda/source. If that is not the case, install it after the `Install dependencies` step using your preferred method.
### Install dependencies
```
cd serve
python ts_scripts/install_dependencies.py --cpp --environment dev [--cuda=cu121|cu118]
```
### Building the backend
Don't forget to install or update TorchServe at this point if it wasn't previously installed.
To clean the build directory in order to rebuild from scratch, simply delete the `cpp/_build` directory with
```
rm -rf cpp/_build
```
## Backend
The TorchServe C++ backend can run as a process, similar to the [TorchServe Python backend](https://github.com/pytorch/serve/tree/master/ts). By default, the C++ backend supports TorchScript models. Other formats such as MXNet and ONNX can be supported through custom handlers following the TorchScript example [src/backends/handler/torch_scripted_handler.hh](https://github.com/pytorch/serve/blob/master/cpp/src/backends/handler/torch_scripted_handler.hh).
Q: When loading a handler which uses a model exported with torch._export.aot_compile the handler dies with "error: Error in dlopen: MODEL.SO : undefined symbol: SOME_SYMBOL".
A: Make sure that you are using matching libtorch and PyTorch versions for inference and export, respectively.
**docs/configuration.md**
* For security reasons, `use_env_allowed_urls=true` is required in config.properties to read `allowed_urls` from an environment variable.
* `workflow_store`: Path of the workflow store directory. Defaults to the model store directory.
* `disable_system_metrics`: Disable collection of system metrics when set to "true". Default value is "false".
* `system_metrics_cmd`: The customized system metrics Python script name with arguments. For example: `ts/metrics/metric_collector.py --gpu 0`. Default: empty, which means TorchServe collects system metrics via `ts/metrics/metric_collector.py --gpu $CUDA_VISIBLE_DEVICES`.
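These settings can be combined in a `config.properties` fragment like the following (the `workflow_store` path and the flag values are illustrative, not defaults):

```
# Illustrative config.properties fragment
workflow_store=/mnt/models/workflow_store
disable_system_metrics=false
system_metrics_cmd=ts/metrics/metric_collector.py --gpu 0
```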
**docs/token_authorization_api.md**
# TorchServe token authorization API
## Setup
1. Download the jar files from [Maven](https://mvnrepository.com/artifact/org.pytorch/torchserve-endpoint-plugin)
2. Enable token authorization by adding the `--plugins-path /path/to/the/jar/files` flag at startup with the path leading to the downloaded jar files.
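For example, a startup command with the plugin loaded might look like this (the model store directory name is illustrative, and the plugin path is the placeholder from step 2):

```
torchserve --start --model-store model_store --plugins-path /path/to/the/jar/files
```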
## Configuration
1. TorchServe will enable token authorization if the plugin is provided. Expected log statement: `[INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Loading plugin for endpoint token`
2. In the current working directory, a file `key_file.json` will be generated.