
Commit b5871e2: Add limited maintenance notice (#3395)
1 parent: 2a0ce75

167 files changed, +751 -86 lines changed

CODE_OF_CONDUCT.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # Code of Conduct
 
 ## Our Pledge

CONTRIBUTING.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 ## Contributing to TorchServe
 ### Merging your code
 

README.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # ❗ANNOUNCEMENT: Security Changes❗
 TorchServe now enforces token authorization enabled and model API control disabled by default. These security features are intended to address the concern of unauthorized API calls and to prevent potential malicious code from being introduced to the model server. Refer to the following documentation for more information: [Token Authorization](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md), [Model API control](https://github.com/pytorch/serve/blob/master/docs/model_api_control.md)

SECURITY.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # Security Policy
 
 ## Supported Versions

benchmarks/README.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # Torchserve Model Server Benchmarking
 
 The benchmarks measure the performance of TorchServe on various models and benchmarks. It supports either a number of built-in models or a custom model passed in as a path or URL to the .mar file. It also runs various benchmarks using these models (see benchmarks section below). The benchmarks are executed on the user machine through a python3 script in the case of jmeter and a shell script in the case of apache benchmark. TorchServe is run on the same machine in a docker instance to avoid network latencies. The benchmark must be run from within `serve/benchmarks`.
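The latency figures such benchmark runs report are percentile aggregates over many requests. As a rough illustration of that aggregation only (this is not the actual benchmark code; the function names are hypothetical), a stdlib-only sketch:

```python
def percentile(samples, pct):
    """Nearest-rank percentile over a list of latency samples (ms)."""
    ranked = sorted(samples)
    # Nearest rank: ceil(pct/100 * n), via negated floor division, then 0-based.
    rank = max(1, -(-len(ranked) * pct // 100))
    return ranked[rank - 1]

def summarize(latencies_ms):
    """Collapse raw per-request latencies into the usual summary numbers."""
    return {
        "p50": percentile(latencies_ms, 50),
        "p90": percentile(latencies_ms, 90),
        "p99": percentile(latencies_ms, 99),
        "mean": sum(latencies_ms) / len(latencies_ms),
    }
```

The nearest-rank definition is one of several common percentile conventions; real tools (jmeter, apache benchmark) may interpolate differently.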

benchmarks/add_jmeter_test.md (+9 -5)
@@ -1,16 +1,20 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 ## Adding a new test plan for torchserve
 
 A new Jmeter test plan for the torchserve benchmark can be added as follows:
 
 * Assuming you know how to create a jmeter test plan. If not, then please use this jmeter [guide](https://jmeter.apache.org/usermanual/build-test-plan.html)
 * Here, we will show you how the 'MMS Benchmarking Image Input Model Test Plan' can be added.
-This test plan does the following:
-
+This test plan does the following:
+
 * Register a model - `default is resnet-18`
 * Scale up to add workers for inference
 * Send Inference request in a loop
 * Unregister a model
-
+
 (NOTE - This is an existing plan in `serve/benchmarks`)
 * Open jmeter GUI
 e.g. on macOS, type `jmeter` on commandline
@@ -63,7 +67,7 @@ You can create variables or use them directly in your test plan.
 * input_filepath - input image file for prediction
 * min_workers - minimum workers to be launched for serving inference request
 
-NOTE -
+NOTE -
 
 * In the above screenshot, some variables/input boxes are partially displayed. You can view details by opening an existing test case from serve/benchmarks/jmx.
-* Apart from the above arguments, you can define custom arguments specific to your test plan if needed. Refer to `benchmark.py` for details
+* Apart from the above arguments, you can define custom arguments specific to your test plan if needed. Refer to `benchmark.py` for details

benchmarks/jmeter.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # Benchmarking with JMeter
 
 ## Installation

benchmarks/sample_report.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 
 TorchServe Benchmark on gpu
 ===========================

binaries/README.md (+20 -16)
@@ -1,4 +1,8 @@
-# Building TorchServe and Torch-Model-Archiver release binaries
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
+# Building TorchServe and Torch-Model-Archiver release binaries
 1. Make sure all the dependencies are installed
 ##### Linux and macOS:
 ```bash
@@ -10,8 +14,8 @@
 python .\ts_scripts\install_dependencies.py --environment=dev
 ```
 > For GPU with Cuda 10.2, make sure to add the `--cuda cu102` arg to the above command
-
-
+
+
 2. To build a `torchserve` and `torch-model-archiver` wheel execute:
 ##### Linux and macOS:
 ```bash
@@ -22,23 +26,23 @@
 python .\binaries\build.py
 ```
 
-> If the scripts detect a conda environment, it also builds torchserve conda packages
+> If the scripts detect a conda environment, it also builds torchserve conda packages
 > For additional info on conda builds refer to [this readme](conda/README.md)
 
 3. Build outputs are located at
 ##### Linux and macOS:
 - Wheel files
-`dist/torchserve-*.whl`
+`dist/torchserve-*.whl`
 `model-archiver/dist/torch_model_archiver-*.whl`
 `workflow-archiver/dist/torch_workflow_archiver-*.whl`
 - Conda packages
-`binaries/conda/output/*`
-
+`binaries/conda/output/*`
+
 ##### Windows:
 - Wheel files
-`dist\torchserve-*.whl`
-`model-archiver\dist\torch_model_archiver-*.whl`
-`workflow-archiver\dist\torch_workflow_archiver-*.whl`
+`dist\torchserve-*.whl`
+`model-archiver\dist\torch_model_archiver-*.whl`
+`workflow-archiver\dist\torch_workflow_archiver-*.whl`
 - Conda packages
 `binaries\conda\output\*`
 
@@ -74,7 +78,7 @@
 ```bash
 conda install --channel ./binaries/conda/output -y torchserve torch-model-archiver torch-workflow-archiver
 ```
-
+
 ##### Windows:
 Conda install is currently not supported. Please use pip install command instead.
 
@@ -147,17 +151,17 @@
 exec bash
 python3 binaries/build.py
 cd binaries/
-python3 upload.py --upload-pypi-packages --upload-conda-packages
+python3 upload.py --upload-pypi-packages --upload-conda-packages
 ```
-4. To upload *.whl files to S3 bucket, run the following command:
+4. To upload *.whl files to S3 bucket, run the following command:
 Note: `--nightly` option puts the *.whl files in a subfolder named 'nightly' in the specified bucket
 ```
 python s3_binary_upload.py --s3-bucket <s3_bucket> --s3-backup-bucket <s3_backup_bucket> --nightly
 ```
 
 ## Uploading packages to production torchserve account
 
-As a first step binaries and docker containers need to be available in some staging environment. In that scenario the binaries can just be `wget`'d and then uploaded using the instructions below and the docker staging environment just needs a 1 line code change in https://github.com/pytorch/serve/blob/master/docker/promote-docker.sh#L8
+As a first step binaries and docker containers need to be available in some staging environment. In that scenario the binaries can just be `wget`'d and then uploaded using the instructions below and the docker staging environment just needs a 1 line code change in https://github.com/pytorch/serve/blob/2a0ce756b179677f905c3216b9c8427cd530a129/docker/promote-docker.sh#L8
 
 ### pypi
 Binaries should show up here: https://pypi.org/project/torchserve/
@@ -182,7 +186,7 @@ anaconda upload -u pytorch <path/to/.bz2>
 ## docker
 Binaries should show up here: https://hub.docker.com/r/pytorch/torchserve
 
-Change the staging org to your personal docker or test docker account https://github.com/pytorch/serve/blob/master/docker/promote-docker.sh#L8
+Change the staging org to your personal docker or test docker account https://github.com/pytorch/serve/blob/2a0ce756b179677f905c3216b9c8427cd530a129/docker/promote-docker.sh#L8
 
 
 ### Direct upload
@@ -197,7 +201,7 @@ For an official release our tags include `pytorch/torchserve/<version_number>-cp
 ## Direct upload Kserve
 To build the Kserve docker image follow instructions from [kubernetes/kserve](../kubernetes/kserve/README.md)
 
-When tagging images for an official release make sure to tag with the following format `pytorch/torchserve-kfs/<version_number>-cpu` and `pytorch/torchserve-kfs/<version_number>-gpu`.
+When tagging images for an official release make sure to tag with the following format `pytorch/torchserve-kfs/<version_number>-cpu` and `pytorch/torchserve-kfs/<version_number>-gpu`.
 
 ### Uploading from staging account
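The build-output paths listed in step 3 of this README feed the later upload steps. Purely as an illustration of gathering those wheels before an upload (the helper and its name are hypothetical; only the glob patterns come from the README), a short sketch:

```python
from pathlib import Path

def collect_wheels(repo_root):
    """Return the built wheel files under repo_root, using the paths
    listed in binaries/README.md (Linux/macOS layout)."""
    patterns = [
        "dist/torchserve-*.whl",
        "model-archiver/dist/torch_model_archiver-*.whl",
        "workflow-archiver/dist/torch_workflow_archiver-*.whl",
    ]
    root = Path(repo_root)
    # Flatten the matches for all three patterns into one sorted list.
    return sorted(p for pattern in patterns for p in root.glob(pattern))
```

A real release would then pass these paths to the upload tooling (e.g. `upload.py` or `s3_binary_upload.py` above).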

binaries/conda/README.md (+5 -2)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # Building conda packages
 
 1. To build conda packages you must first produce wheels for the project, see [this readme](../README.md) for more details on building `torchserve` and `torch-model-archiver` wheel files.
@@ -9,7 +13,7 @@
 ```
 # Build all packages
 python build_packages.py
-
+
 # Selectively build packages
 python build_packages.py --ts-wheel=/path/to/torchserve.whl --ma-wheel=/path/to/torch_model_archiver_wheel --wa-wheel=/path/to/torch_workflow_archiver_wheel
 ```
@@ -21,4 +25,3 @@ The built conda packages are available in the `output` directory
 Anaconda packages are both OS specific and python version specific so copying them one by one from a test/staging environment like https://anaconda.org/pytorch/torchserve/ to an official environment like https://anaconda.org/torchserve-staging can be fiddly
 
 Instead you can run `anaconda copy torchserve-staging/<package>/<version_number> --to-owner pytorch`
-

cpp/README.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # TorchServe CPP (Experimental Release)
 ## Requirements
 * C++17

docker/README.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 ## Security Changes
 TorchServe now enforces token authorization enabled and model API control disabled by default. Refer to the following documentation for more information: [Token Authorization](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md), [Model API control](https://github.com/pytorch/serve/blob/master/docs/model_api_control.md)

docs/FAQs.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # FAQ'S
 Contents of this document.
 * [General](#general)

docs/README.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # ❗ANNOUNCEMENT: Security Changes❗
 TorchServe now enforces token authorization enabled and model API control disabled by default. These security features are intended to address the concern of unauthorized API calls and to prevent potential malicious code from being introduced to the model server. Refer to the following documentation for more information: [Token Authorization](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md), [Model API control](https://github.com/pytorch/serve/blob/master/docs/model_api_control.md)

docs/Troubleshooting.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 ## Troubleshooting Guide
 Refer to this section for common issues faced while deploying your Pytorch models using Torchserve and their corresponding troubleshooting steps.

docs/batch_inference_with_ts.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # Batch Inference with TorchServe
 
 ## Contents of this Document

docs/code_coverage.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # Code Coverage
 
 ## To check branch stability run the sanity suite as follows

docs/configuration.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # Advanced configuration
 
 The default settings from TorchServe should be sufficient for most use cases. However, if you want to customize TorchServe, the configuration options described in this topic are available.

docs/custom_service.md (+9 -5)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # Custom Service
 
 ## Contents of this Document
@@ -257,12 +261,12 @@ Refer [waveglow_handler](https://github.com/pytorch/serve/blob/master/examples/t
 Torchserve returns the captum explanations for Image Classification, Text Classification and BERT models. It is achieved by placing the below request:
 `POST /explanations/{model_name}`
 
-The explanations are written as a part of the explain_handle method of the base handler. The base handler invokes this explain_handle method. The arguments passed to it are the pre-processed data and the raw data. It invokes the get_insights function of the custom handler, which returns the captum attributions. The user should write their own get_insights functionality to get the explanations
+The explanations are written as a part of the explain_handle method of the base handler. The base handler invokes this explain_handle method. The arguments passed to it are the pre-processed data and the raw data. It invokes the get_insights function of the custom handler, which returns the captum attributions. The user should write their own get_insights functionality to get the explanations
 
-For serving a custom handler the captum algorithm should be initialized in the initialize function of the handler
+For serving a custom handler the captum algorithm should be initialized in the initialize function of the handler
 
 The user can override the explain_handle function in the custom handler.
-The user should define their get_insights method for the custom handler to get Captum Attributions.
+The user should define their get_insights method for the custom handler to get Captum Attributions.
 
 The above ModelHandler class should have the following methods with captum functionality.
 
@@ -292,7 +296,7 @@ The above ModelHandler class should have the following methods with captum funct
         else :
             model_output = self.explain_handle(model_input, data)
         return model_output
-
+
     # Present in the base_handler, so override only when necessary
     def explain_handle(self, data_preprocess, raw_data):
         """Captum explanations handler
@@ -323,7 +327,7 @@ The above ModelHandler class should have the following methods with captum funct
     def get_insights(self,**kwargs):
         """
         Functionality to get the explanations.
-        Called from the explain_handle method
+        Called from the explain_handle method
         """
         pass
 ```
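The flow this hunk describes (the base handler's explain_handle delegating to a user-supplied get_insights) can be sketched standalone. The stub class below is illustrative only, not TorchServe's actual BaseHandler, and the dummy attributions stand in for real Captum output:

```python
class StubBaseHandler:
    """Minimal stand-in for a base handler's explanation plumbing."""

    def explain_handle(self, data_preprocess, raw_data):
        # The base handler delegates to whatever get_insights the
        # custom handler supplies.
        return self.get_insights(preprocessed=data_preprocess, raw=raw_data)

    def get_insights(self, **kwargs):
        raise NotImplementedError("custom handlers must define get_insights")


class MyExplainHandler(StubBaseHandler):
    def get_insights(self, **kwargs):
        # A real handler would run a Captum algorithm initialized in
        # initialize(); here we return zero attributions of matching shape.
        preprocessed = kwargs["preprocessed"]
        return [[0.0 for _ in row] for row in preprocessed]
```

Calling `MyExplainHandler().explain_handle(batch, raw)` then returns one attribution list per pre-processed input, mirroring the contract the documentation describes.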

docs/default_handlers.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # TorchServe default inference handlers
 
 TorchServe provides the following inference handlers out of the box. It's expected that the models consumed by each support batched inference.
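The batched-inference expectation mentioned in this hunk amounts to a simple contract: a handler is given a list of requests and must return exactly one response per request, in order. A hypothetical sketch of that contract (names illustrative, not a real default handler):

```python
def handle(batch):
    """Toy handler obeying the batch contract: len(output) == len(batch)."""
    # Gather the payload of every request in the batch...
    inputs = [req["body"] for req in batch]
    # ...run one batched "model" call (simulated here by a length function)...
    outputs = [len(x) for x in inputs]
    # ...and return exactly one response per request, in order.
    assert len(outputs) == len(batch)
    return outputs
```

Violating this contract (returning fewer or reordered responses) would misroute results to the wrong clients, which is why the default handlers enforce it.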

docs/genai_use_cases.md (+5 -1)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # TorchServe GenAI use cases and showcase
 
 This document shows interesting use cases with TorchServe for Gen AI deployments.
@@ -8,4 +12,4 @@ In this blog, we show how to deploy a RAG Endpoint using TorchServe, increase th
 
 ## [Multi-Image Generation Streamlit App: Chaining Llama & Stable Diffusion using TorchServe, torch.compile & OpenVINO](https://pytorch.org/serve/llm_diffusion_serving_app.html)
 
-This Multi-Image Generation Streamlit app is designed to generate multiple images based on a provided text prompt. Instead of using Stable Diffusion directly, this app chains Llama and Stable Diffusion to enhance the image generation process. This multi-image generation use case exemplifies the powerful synergy of cutting-edge AI technologies: TorchServe, OpenVINO, Torch.compile, Meta-Llama, and Stable Diffusion.
+This Multi-Image Generation Streamlit app is designed to generate multiple images based on a provided text prompt. Instead of using Stable Diffusion directly, this app chains Llama and Stable Diffusion to enhance the image generation process. This multi-image generation use case exemplifies the powerful synergy of cutting-edge AI technologies: TorchServe, OpenVINO, Torch.compile, Meta-Llama, and Stable Diffusion.

docs/getting_started.md (+4)
@@ -1,3 +1,7 @@
+# ⚠️ Notice: Limited Maintenance
+
+This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
+
 # Getting started
 
 ## Install TorchServe and torch-model-archiver
