If you plan to develop with TorchServe and change some of its source code, you must install it from source.
1. Clone the repository, including third-party modules, with `git clone --recurse-submodules --remote-submodules git@github.com:pytorch/serve.git`
2. Ensure that `python3` is installed and that your user has access to the site-packages directory, or that `~/.local/bin` is added to the `PATH` environment variable.
3. Run the following script from the top of the source directory. NOTE: This script force-reinstalls `torchserve`, `torch-model-archiver`, and `torch-workflow-archiver` if existing installations are found.
#### For Debian-Based Systems/macOS
Use the `--cuda` flag with `install_dependencies.py` to install CUDA-version-specific dependencies. Possible values are `cu111`, `cu102`, `cu101`, `cu92`.
For example: `python ./ts_scripts/install_dependencies.py --environment=dev --rocm=rocm61`
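Putting the steps above together, a typical development checkout on a Debian-based system or macOS might look like the following sketch; the accelerator flags are illustrative, so pick the one that matches your hardware (or none for CPU-only):

```shell
# Clone TorchServe together with its third-party submodules
git clone --recurse-submodules --remote-submodules git@github.com:pytorch/serve.git
cd serve

# Install development dependencies; append --cuda=<ver> (e.g. --cuda=cu111)
# or --rocm=<ver> (e.g. --rocm=rocm61) for accelerator-specific packages
python ./ts_scripts/install_dependencies.py --environment=dev
```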
#### For Windows
Refer to the documentation [here](docs/torchserve_on_win_native.md).
For information about the model archiver, see [detailed documentation](model-archiver/README.md).
# ROCm Support

TorchServe can be run on any combination of operating system and device that is
[supported by ROCm](https://rocm.docs.amd.com/projects/radeon/en/latest/docs/compatibility.html).
## Supported Versions of ROCm
The current stable `major.patch` version of ROCm and the previous patch version will be supported. For example, versions `N.2` and `N.1`, where `N` is the current major version.
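The policy above can be sketched as follows; `supported_rocm_versions` is a hypothetical helper for illustration, not part of TorchServe:

```python
def supported_rocm_versions(current_stable: str) -> list[str]:
    """Return the ROCm versions supported under the policy above:
    the current stable major.patch version plus the previous patch
    version of the same major release, e.g. "6.2" -> ["6.2", "6.1"]."""
    major, patch = (int(part) for part in current_stable.split("."))
    supported = [f"{major}.{patch}"]
    if patch > 0:  # the previous patch release, when one exists
        supported.append(f"{major}.{patch - 1}")
    return supported
```

For example, `supported_rocm_versions("6.2")` returns `["6.2", "6.1"]`.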
## Installation
- Make sure you have **python >= 3.8** installed on your system.
# Apple Silicon Support
## What is supported
* TorchServe CI jobs now include M1 hardware in order to ensure support; see the GitHub [documentation](https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners#standard-github-hosted-runners-for-public-repositories) on M1 hosted runners.
* For [Docker](https://docs.docker.com/desktop/install/mac-install/), ensure Docker for Apple silicon is installed, then follow the [setup steps](https://github.com/pytorch/serve/tree/master/docker).
## Experimental Support
* For GPU jobs on Apple Silicon, [MPS](https://pytorch.org/docs/master/notes/mps.html) is now auto-detected and enabled. To prevent TorchServe from using MPS, users have to set `deviceType: "cpu"` in model-config.yaml.
* This is an experimental feature and NOT ALL models are guaranteed to work.
* The number of GPUs reported now includes GPUs on Apple Silicon
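To opt a model out of MPS as described above, a minimal `model-config.yaml` fragment might look like the following; only the `deviceType` key comes from the text above, and any other keys your model needs are unaffected:

```yaml
# model-config.yaml
# Force CPU execution so TorchServe does not auto-select MPS
deviceType: "cpu"
```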
### Testing
* [Pytests](https://github.com/pytorch/serve/tree/master/test/pytest/test_device_config.py) that check for MPS on macOS M1 devices
* Models that have been tested and work: Resnet-18, Densenet161, Alexnet
* Models that have been tested and DO NOT work: MNIST
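The device-config tests linked above can be run locally from a source checkout, for example (assuming development dependencies are already installed):

```shell
# Run the MPS device-config pytest suite (path taken from the link above)
python -m pytest test/pytest/test_device_config.py
```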
```
Config file: N/A
Inference address: http://127.0.0.1:8080
Management address: http://127.0.0.1:8081
Metrics address: http://127.0.0.1:8082
Model Store:
Initial Models: resnet-18=resnet-18.mar
Log dir:
Metrics dir:
Netty threads: 0
Netty client threads: 0
Default workers per model: 16
...
Custom python dependency for model allowed: false
Enable metrics API: true
Metrics mode: LOG
Disable system metrics: false
Workflow Store:
CPP log config: N/A
Model config: N/A
2024-04-08T14:18:02,380 [INFO ] main org.pytorch.serve.servingsdk.impl.PluginsManager - Loading snapshot serializer plugin...
```