
Commit 05a72b9 (1 parent: ad71e0f)

documentation updates

File tree

2 files changed: +64, -0 lines changed

docs/M1_support.md (+62 lines)

@@ -0,0 +1,62 @@
# M1 Support

TorchServe supports macOS on Apple M1 hardware.

1. TorchServe CI jobs now run on M1 hardware to ensure continued support; see the GitHub [documentation](https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners#standard-github-hosted-runners-for-public-repositories) on GitHub-hosted M1 runners.
    - [Regression Tests](https://github.com/pytorch/serve/blob/master/.github/workflows/regression_tests_cpu.yml)
    - [Regression binaries Test](https://github.com/pytorch/serve/blob/master/.github/workflows/regression_tests_cpu_binaries.yml)
2. For [Docker](https://docs.docker.com/desktop/install/mac-install/), ensure Docker Desktop for Apple silicon is installed, then follow the [setup steps](https://github.com/pytorch/serve/tree/master/docker).
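The Docker route above can be sketched roughly as follows. This is an illustrative sketch only: the build script and image tag are assumptions based on the linked Docker README, so verify them there before use.

```shell
# Option A (assumption: build_image.sh from the linked docker/ directory):
# build a CPU image for the host architecture (arm64 on M1).
git clone https://github.com/pytorch/serve.git
cd serve/docker
./build_image.sh

# Option B (assumption: a multi-arch CPU tag is published for your release):
# pull a prebuilt image and expose the inference/management ports.
docker pull pytorch/torchserve:latest-cpu
docker run --rm -it -p 8080:8080 -p 8081:8081 pytorch/torchserve:latest-cpu
```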
## Running TorchServe on M1

Follow the [getting started documentation](https://github.com/pytorch/serve?tab=readme-ov-file#-quick-start-with-torchserve-conda).

### Example

```
(myenv) serve % pip list | grep torch
torch                    2.2.1
torchaudio               2.2.1
torchdata                0.7.1
torchtext                0.17.1
torchvision              0.17.1

(myenv3) serve % conda install -c pytorch-nightly torchserve torch-model-archiver torch-workflow-archiver
(myenv3) serve % pip list | grep torch
torch                    2.2.1
torch-model-archiver     0.10.0b20240312
torch-workflow-archiver  0.2.12b20240312
torchaudio               2.2.1
torchdata                0.7.1
torchserve               0.10.0b20240312
torchtext                0.17.1
torchvision              0.17.1

(myenv3) serve % torchserve --start --ncs --models densenet161.mar --model-store ./model_store_gen/
Torchserve version: 0.10.0
Number of GPUs: 0
Number of CPUs: 10
Max heap size: 8192 M
Config file: N/A
Inference address: http://127.0.0.1:8080
Management address: http://127.0.0.1:8081
Metrics address: http://127.0.0.1:8082
Initial Models: densenet161.mar
Netty threads: 0
Netty client threads: 0
Default workers per model: 10
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Limit Maximum Image Pixels: true
Prefer direct buffer: false
Allowed Urls: [file://.*|http(s)?://.*]
Custom python dependency for model allowed: false
Enable metrics API: true
Metrics mode: LOG
Disable system metrics: false
CPP log config: N/A
Model config: N/A
System metrics command: default
...
Model server started.
```
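Once the log shows `Model server started.`, the server can be exercised from another terminal. This is a sketch assuming a locally running server on the default ports; the model name comes from the example above, and the image filename is a placeholder for any local JPEG.

```shell
# Health check against the default inference address;
# a healthy server responds with {"status": "Healthy"}.
curl http://127.0.0.1:8080/ping

# Inference against the registered densenet161 model
# (image.jpg is a placeholder for a local image file).
curl http://127.0.0.1:8080/predictions/densenet161 -T image.jpg
```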

docs/Security.md

+2
@@ -5,6 +5,7 @@

| Version | Supported          |
|---------|--------------------|
| 0.9.0   | :white_check_mark: |
| 0.10.0  | :white_check_mark: |

## How we do security
@@ -36,6 +37,7 @@ TorchServe as much as possible relies on automated tools to do security scanning

2. Using private-key/certificate files

You can find more details in the [configuration guide](https://pytorch.org/serve/configuration.html#enable-ssl).
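As a sketch, enabling SSL with certificate files comes down to a few lines in `config.properties`. The property names below follow the linked configuration guide, but the addresses and file paths are placeholder assumptions; verify against the guide before use.

```
inference_address=https://127.0.0.1:8443
management_address=https://127.0.0.1:8444
metrics_address=https://127.0.0.1:8445
private_key_file=mykey.key
certificate_file=mycert.pem
```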
6. TorchServe supports token authorization: check the [documentation](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md) for more information.
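With token authorization enabled, requests must carry a key that TorchServe generates at startup. The request shape below is an illustrative sketch: the key file name and header format are assumptions based on the linked token authorization docs, and the token value is a placeholder.

```shell
# Keys are written at startup (assumption: key_file.json in the working directory).
# Pass the inference key on each request; <inference-key> is a placeholder.
curl -H "Authorization: Bearer <inference-key>" http://127.0.0.1:8080/ping
```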
3941

4042

4143

0 commit comments

Comments
 (0)