* Remove install_dependencies from build.sh call
* remove sudo from folly build
* Align cpp wf file with cpu wf file
* Make yaml statically linked dependency
* move cpp binaries to location based on ts.__file__
* Add FAQ to cpp readme
* Update readme to expect installed TS
* Fix libtorch install on mac
* Add cpp build faqs
* Remove llama.cpp as submodule and add through fetch_content
* Disable metal in llamacpp example
* Fix dangling pointer error in cpp worker
* Remove kineto from mac build
* Add llvm as mac cpp dependencies (req for clang-tidy)
* Enable fPIC for llama.cpp example
* Add sudo to clang-tidy link creation
* Add undefined symbol faq
* fix llv typo
* Add install from source to cpp_ci
* Correct install from source to cpp_ci
* bump up pyyaml version to avoid cython 3 issue yaml/pyyaml#702
* Move cpp ci to M1 mac
* Run workflow on self hosted runner
* Disable mac ci for cpp
* Fix workflow syntax
* Run on cpp-ci
* Remove sudo
* Add apt update for act docker
* print library_path in print_env_info.py
* print end in cpp ci workflow
* Run on github runner
* Add upterm session to workflow
* Move post mortem upterm session before build
* Remove first upterm session
* ci debugging
* Observe disk space
* move _build to /mnt on github runner
* fix permission denied
* use mount instead of ln
* Adjust llamacpp api
* Reactivate set -e
* Remove note on env variable in cpp readme
* Fix linting issue in print_env_info.py
* Cleanup ci-cpu-cpp.yml
* quieten wget
* Add build clean section in cpp/readme
* Readjust to llama.cpp api
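Several of the commits above replace the llama.cpp git submodule with CMake's `FetchContent` and enable `-fPIC` so it can be linked into shared libraries. A minimal sketch of that pattern (the repository URL is real; the pinned tag and target wiring are illustrative, not necessarily the exact values this PR uses):

```cmake
include(FetchContent)

# Fetch llama.cpp at configure time instead of tracking it as a git submodule
FetchContent_Declare(
  llama
  GIT_REPOSITORY https://github.com/ggerganov/llama.cpp.git
  GIT_TAG        b1234   # illustrative: pin a specific tag/commit in real builds
)
FetchContent_MakeAvailable(llama)

# Corresponds to "Enable fPIC for llama.cpp example": build the fetched
# target as position-independent code so handler .so files can link it
set_property(TARGET llama PROPERTY POSITION_INDEPENDENT_CODE ON)
```

Compared to a submodule, `FetchContent` keeps the dependency version in CMake itself, so a plain `git clone` plus `cmake` configure is enough to build.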
cpp/README.md (+12 -14)
* cmake version: 3.18+

## Installation and Running TorchServe CPP

This installation instruction assumes that TorchServe is already installed through pip/conda/source. If this is not the case, install it after the `Install dependencies` step through your preferred method.

### Install dependencies
```
cd serve
python ts_scripts/install_dependencies.py --cpp --environment dev [--cuda=cu121|cu118]
```

### Building the backend
Don't forget to install or update TorchServe at this point if it wasn't previously installed.

To clean the build directory in order to rebuild from scratch, simply delete the cpp/_build directory with
```
rm -rf cpp/_build
```

## Backend
The TorchServe cpp backend can run as a process, similar to the [TorchServe Python backend](https://github.com/pytorch/serve/tree/master/ts). By default, TorchServe supports TorchScript models in the cpp backend. Other platforms such as MXNet and ONNX can be supported through custom handlers following the TorchScript example [src/backends/handler/torch_scripted_handler.hh](https://github.com/pytorch/serve/blob/master/cpp/src/backends/handler/torch_scripted_handler.hh).

…

Q: When loading a handler which uses a model exported with torch._export.aot_compile, the handler dies with "error: Error in dlopen: MODEL.SO : undefined symbol: SOME_SYMBOL".

A: Make sure that you are using matching libtorch and PyTorch versions for inference and export, respectively.
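The version match that answer refers to can be sanity-checked with a small script. A hedged sketch (the version strings below are placeholders; in practice take them from `python -c "import torch; print(torch.__version__)"` in the export environment and from the libtorch distribution the backend was built against):

```shell
# Compare the PyTorch version used at export time with the libtorch
# version the cpp backend links against. Suffixes like "+cu121" only
# encode the CUDA build, so strip them before comparing.
export_version="2.2.0+cu121"   # placeholder: version that ran torch._export.aot_compile
libtorch_version="2.2.0"       # placeholder: version of the installed libtorch

if [ "${export_version%%+*}" = "${libtorch_version%%+*}" ]; then
  echo "versions match"
else
  echo "version mismatch: re-export the model or rebuild the backend"
fi
```

With the placeholder values above the script prints `versions match`; a mismatch in the `major.minor.patch` part is the usual cause of the `undefined symbol` dlopen failure.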