
Commit 0f62def

Merge branch 'master' into feature/k8s_nightly_test

2 parents: 1cf88dc + 8e94416

File tree: 10 files changed (+97 -33 lines)

.github/workflows/regression_tests_cpu_binaries.yml (+18 -3)

````diff
@@ -3,7 +3,7 @@ name: Run Regression Tests for CPU nightly binaries
 on:
   # run every day at 6:15am
   schedule:
-    - cron: '15 6 * * *'
+    - cron: '15 6 * * *'
 
 concurrency:
   group: ci-cpu-${{ github.workflow }}-${{ github.ref == 'refs/heads/master' && github.run_number || github.ref }}
@@ -31,24 +31,39 @@ jobs:
         with:
           submodules: recursive
       - name: Setup conda with Python ${{ matrix.python-version }}
+        if: matrix.os == 'macos-14'
+        uses: conda-incubator/setup-miniconda@v3
+        with:
+          auto-update-conda: true
+          channels: anaconda, conda-forge
+          python-version: ${{ matrix.python-version }}
+      - name: Setup conda with Python ${{ matrix.python-version }}
+        if: matrix.os != 'macos-14'
         uses: s-weigand/setup-conda@v1
         with:
           update-conda: true
           python-version: ${{ matrix.python-version }}
           conda-channels: anaconda, conda-forge
-      - run: conda --version
-      - run: python --version
       - name: Setup Java 17
         uses: actions/setup-java@v3
         with:
           distribution: 'zulu'
           java-version: '17'
       - name: Checkout TorchServe
         uses: actions/checkout@v3
+      - name: Run install dependencies and regression test
+        if: matrix.os == 'macos-14'
+        shell: bash -el {0}
+        run: |
+          conda info
+          python ts_scripts/install_dependencies.py --environment=dev
+          python test/regression_tests.py --binaries --${{ matrix.binaries }} --nightly
       - name: Install dependencies
+        if: matrix.os != 'macos-14'
        run: |
          python ts_scripts/install_dependencies.py --environment=dev
       - name: Validate Torchserve CPU Regression
+        if: matrix.os != 'macos-14'
        run: |
          python test/regression_tests.py --binaries --${{ matrix.binaries }} --nightly
 
````
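
For reference, the new macos-14 path sets up conda via conda-incubator/setup-miniconda and then runs the install and regression scripts inside that environment (`shell: bash -el {0}` gives a login shell with conda activated). A rough local equivalent is sketched below; the `--conda` value stands in for the matrix's `binaries` entry, which is not shown in this hunk, so treat it as an assumption:

```bash
# Sketch of the commands the macos-14 steps execute, run from the repo root
# inside an activated conda environment; --conda is an assumed matrix value.
conda info
python ts_scripts/install_dependencies.py --environment=dev
python test/regression_tests.py --binaries --conda --nightly
```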

cpp/README.md (+3 -2)

````diff
@@ -2,10 +2,10 @@
 ## Requirements
 * C++17
 * GCC version: gcc-9
-* cmake version: 3.18+
+* cmake version: 3.26.4+
 * Linux
 
-For convenience, a docker container can be used as the development environment to build and install Torchserve CPP
+For convenience, a [docker container](../docker/README.md#create-torchserve-docker-image) can be used as the development environment to build and install Torchserve CPP
 ```
 cd serve/docker
 # For CPU support
@@ -21,6 +21,7 @@ docker run [-v /path/to/build/dir:/serve/cpp/_build] -it pytorch/torchserve:cpp-
 # For GPU support
 docker run --gpus all [-v /path/to/build/dir:/serve/cpp/_build] -it pytorch/torchserve:cpp-dev-gpu /bin/bash
 ```
+`Warning`: The dev docker container does not install all necessary dependencies or build Torchserve CPP. Please follow the steps below after starting the container.
 
 ## Installation and Running TorchServe CPP
 This installation instruction assumes that TorchServe is already installed through pip/conda/source. If this is not the case install it after the `Install dependencies` step through your preferred method.
````
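
Taken together with the `-cpp` build flag added to build_image.sh in this commit, a minimal CPU workflow for this dev container might look as follows (a sketch only; the image tag matches the one used in the README snippet above):

```bash
# Build the CPP dev image, then enter it; per the new warning, dependencies
# and the Torchserve CPP build still have to be run inside the container.
cd serve/docker
./build_image.sh -bt dev -cpp
docker run -it pytorch/torchserve:cpp-dev-cpu /bin/bash
```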

docker/Dockerfile.cpp (+9 -21)

````diff
@@ -16,29 +16,34 @@
 ARG BASE_IMAGE=ubuntu:20.04
 ARG PYTHON_VERSION=3.9
 ARG CMAKE_VERSION=3.26.4
+ARG GCC_VERSION=9
 ARG BRANCH_NAME="master"
 ARG USE_CUDA_VERSION=""
 
 FROM ${BASE_IMAGE} AS cpp-dev-image
 ARG BASE_IMAGE
 ARG PYTHON_VERSION
 ARG CMAKE_VERSION
+ARG GCC_VERSION
 ARG BRANCH_NAME
 ARG USE_CUDA_VERSION
+ARG DEBIAN_FRONTEND=noninteractive
 ENV PYTHONUNBUFFERED TRUE
+ENV TZ=Etc/UTC
 
 RUN --mount=type=cache,id=apt-dev,target=/var/cache/apt \
     apt-get update && \
     apt-get install software-properties-common -y && \
     add-apt-repository -y ppa:deadsnakes/ppa && \
-    DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y \
+    apt-get install --no-install-recommends -y \
     sudo \
     vim \
     git \
     curl \
     wget \
     rsync \
     gpg \
+    gcc-$GCC_VERSION \
     ca-certificates \
     lsb-release \
     openjdk-17-jdk \
@@ -51,32 +56,15 @@ RUN --mount=type=cache,id=apt-dev,target=/var/cache/apt \
 RUN python$PYTHON_VERSION -m venv /home/venv
 ENV PATH="/home/venv/bin:$PATH"
 
-# Enable installation of recent cmake release
+# Enable installation of recent cmake release and pin cmake & cmake-data version
 # Ref: https://apt.kitware.com/
 RUN (wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | tee /usr/share/keyrings/kitware-archive-keyring.gpg >/dev/null) \
     && (echo "deb [signed-by=/usr/share/keyrings/kitware-archive-keyring.gpg] https://apt.kitware.com/ubuntu/ $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/kitware.list >/dev/null) \
     && apt-get update \
     && (test -f /usr/share/doc/kitware-archive-keyring/copyright || sudo rm /usr/share/keyrings/kitware-archive-keyring.gpg) \
     && sudo apt-get install kitware-archive-keyring \
-    && rm -rf /var/lib/apt/lists/*
-
-# Pin cmake and cmake-data version
-# Ref: https://manpages.ubuntu.com/manpages/xenial/man5/apt_preferences.5.html
-RUN echo "Package: cmake\nPin: version $CMAKE_VERSION*\nPin-Priority: 1001" > /etc/apt/preferences.d/cmake
-RUN echo "Package: cmake-data\nPin: version $CMAKE_VERSION*\nPin-Priority: 1001" > /etc/apt/preferences.d/cmake-data
-
-# Install CUDA toolkit to enable "libtorch" build with GPU support
-RUN apt-get update && \
-    if echo "$BASE_IMAGE" | grep -q "cuda:"; then \
-        if [ "$USE_CUDA_VERSION" = "cu121" ]; then \
-            apt-get -y install cuda-toolkit-12-1; \
-        elif [ "$USE_CUDA_VERSION" = "cu118" ]; then \
-            apt-get -y install cuda-toolkit-11-8; \
-        else \
-            echo "Cuda version not supported by CPP backend: $USE_CUDA_VERSION"; \
-            exit 1; \
-        fi; \
-    fi \
+    && echo "Package: cmake\nPin: version $CMAKE_VERSION*\nPin-Priority: 1001" > /etc/apt/preferences.d/cmake \
+    && echo "Package: cmake-data\nPin: version $CMAKE_VERSION*\nPin-Priority: 1001" > /etc/apt/preferences.d/cmake-data \
     && rm -rf /var/lib/apt/lists/*
 
 RUN git clone --recursive https://github.com/pytorch/serve.git \
````
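
The cmake pin now lives in the same layer as the Kitware repository setup; it is ordinary apt pinning, so inside a built image it can be sanity-checked with standard apt tooling, for example:

```bash
# Inspect the pin file written by the Dockerfile; the image's /bin/sh echo
# expands the \n sequences, so the file should contain three lines
# (values assume the default CMAKE_VERSION=3.26.4 build arg).
cat /etc/apt/preferences.d/cmake
#   Package: cmake
#   Pin: version 3.26.4*
#   Pin-Priority: 1001

# Confirm apt resolves cmake to the pinned candidate version
apt-cache policy cmake
```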

docker/README.md (+11)

````diff
@@ -41,6 +41,7 @@ Use `build_image.sh` script to build the docker images. The script builds the `p
 |-t, --tag|Tag name for image. If not specified, script uses torchserve default tag names.|
 |-cv, --cudaversion| Specify to cuda version to use. Supported values `cu92`, `cu101`, `cu102`, `cu111`, `cu113`, `cu116`, `cu117`, `cu118`. `cu121`, Default `cu121`|
 |-ipex, --build-with-ipex| Specify to build with intel_extension_for_pytorch. If not specified, script builds without intel_extension_for_pytorch.|
+|-cpp, --build-cpp specify to build TorchServe CPP|
 |-n, --nightly| Specify to build with TorchServe nightly.|
 |-py, --pythonversion| Specify the python version to use. Supported values `3.8`, `3.9`, `3.10`, `3.11`. Default `3.9`|
 
@@ -147,6 +148,16 @@ Creates a docker image with `torchserve` and `torch-model-archiver` installed fr
 ./build_image.sh -bt dev -ipex -t torchserve-ipex:1.0
 ```
 
+- For creating image to build Torchserve CPP with CPU support:
+```bash
+./build_image.sh -bt dev -cpp
+```
+
+- For creating image to build Torchserve CPP with GPU support:
+```bash
+./build_image.sh -bt dev -g [-cv cu121|cu118] -cpp
+```
+
 
 ## Start a container with a TorchServe image
 
````

docker/build_image.sh (+7 -2)

````diff
@@ -174,9 +174,14 @@ then
 
     if [[ "${MACHINE}" == "gpu" || "${CUDA_VERSION}" != "" ]];
     then
-        if [[ "${CUDA_VERSION}" != "cu121" && "${CUDA_VERSION}" != "cu118" ]];
+        if [ "${CUDA_VERSION}" == "cu121" ];
         then
-            echo "Only cuda versions 12.1 and 11.8 are supported for CPP"
+            BASE_IMAGE="nvidia/cuda:12.1.1-devel-ubuntu20.04"
+        elif [ "${CUDA_VERSION}" == "cu118" ];
+        then
+            BASE_IMAGE="nvidia/cuda:11.8.0-devel-ubuntu20.04"
+        else
+            echo "Cuda version $CUDA_VERSION is not supported for CPP"
             exit 1
         fi
     fi
````
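
With this change, `-cv` selects the CUDA devel base image directly instead of only validating the value; illustrative invocations:

```bash
# cu121 builds on nvidia/cuda:12.1.1-devel-ubuntu20.04
./build_image.sh -bt dev -g -cv cu121 -cpp
# cu118 builds on nvidia/cuda:11.8.0-devel-ubuntu20.04
./build_image.sh -bt dev -g -cv cu118 -cpp
# any other -cv value now exits with "Cuda version ... is not supported for CPP"
```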

docker/build_upload_release.py (+20)

````diff
@@ -36,6 +36,14 @@
         f"./build_image.sh -g -cv cu121 -t {organization}/torchserve:latest-gpu",
         dry_run,
     )
+    try_and_handle(
+        f"./build_image.sh -bt dev -cpp -t {organization}/torchserve:latest-cpp-dev-cpu",
+        dry_run,
+    )
+    try_and_handle(
+        f"./build_image.sh -bt dev -g -cv cu121 -cpp -t {organization}/torchserve:latest-cpp-dev-gpu",
+        dry_run,
+    )
     try_and_handle(
         f"docker tag {organization}/torchserve:latest {organization}/torchserve:latest-cpu",
         dry_run,
@@ -48,13 +56,25 @@
         f"docker tag {organization}/torchserve:latest-gpu {organization}/torchserve:{check_ts_version()}-gpu",
         dry_run,
     )
+    try_and_handle(
+        f"docker tag {organization}/torchserve:latest-cpp-dev-cpu {organization}/torchserve:{check_ts_version()}-cpp-dev-cpu",
+        dry_run,
+    )
+    try_and_handle(
+        f"docker tag {organization}/torchserve:latest-cpp-dev-gpu {organization}/torchserve:{check_ts_version()}-cpp-dev-gpu",
+        dry_run,
+    )
 
     for image in [
         f"{organization}/torchserve:latest",
         f"{organization}/torchserve:latest-cpu",
         f"{organization}/torchserve:latest-gpu",
+        f"{organization}/torchserve:latest-cpp-dev-cpu",
+        f"{organization}/torchserve:latest-cpp-dev-gpu",
         f"{organization}/torchserve:{check_ts_version()}-cpu",
         f"{organization}/torchserve:{check_ts_version()}-gpu",
+        f"{organization}/torchserve:{check_ts_version()}-cpp-dev-cpu",
+        f"{organization}/torchserve:{check_ts_version()}-cpp-dev-gpu",
     ]:
         try_and_handle(f"docker push {image}", dry_run)
 
````
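
Expanded, the new release steps amount to roughly the following shell sequence, assuming `pytorch` is passed as the organization and `check_ts_version()` resolves to the release version (the `0.10.0` value here is illustrative):

```bash
# Build, tag and push the CPP dev images alongside the existing CPU/GPU ones
./build_image.sh -bt dev -cpp -t pytorch/torchserve:latest-cpp-dev-cpu
./build_image.sh -bt dev -g -cv cu121 -cpp -t pytorch/torchserve:latest-cpp-dev-gpu
docker tag pytorch/torchserve:latest-cpp-dev-cpu pytorch/torchserve:0.10.0-cpp-dev-cpu
docker tag pytorch/torchserve:latest-cpp-dev-gpu pytorch/torchserve:0.10.0-cpp-dev-gpu
docker push pytorch/torchserve:latest-cpp-dev-cpu
docker push pytorch/torchserve:latest-cpp-dev-gpu
docker push pytorch/torchserve:0.10.0-cpp-dev-cpu
docker push pytorch/torchserve:0.10.0-cpp-dev-gpu
```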

docker/docker_nightly.py (+22)

````diff
@@ -35,17 +35,29 @@
     project = "torchserve-nightly"
     cpu_version = f"{project}:cpu-{get_nightly_version()}"
     gpu_version = f"{project}:gpu-{get_nightly_version()}"
+    cpp_dev_cpu_version = f"{project}:cpp-dev-cpu-{get_nightly_version()}"
+    cpp_dev_gpu_version = f"{project}:cpp-dev-gpu-{get_nightly_version()}"
 
     # Build Nightly images and append the date in the name
     try_and_handle(f"./build_image.sh -n -t {organization}/{cpu_version}", dry_run)
     try_and_handle(
         f"./build_image.sh -g -cv cu121 -n -t {organization}/{gpu_version}",
         dry_run,
     )
+    try_and_handle(
+        f"./build_image.sh -bt dev -cpp -t {organization}/{cpp_dev_cpu_version}",
+        dry_run,
+    )
+    try_and_handle(
+        f"./build_image.sh -bt dev -g -cv cu121 -cpp -t {organization}/{cpp_dev_gpu_version}",
+        dry_run,
+    )
 
     # Push Nightly images to official PyTorch Dockerhub account
     try_and_handle(f"docker push {organization}/{cpu_version}", dry_run)
     try_and_handle(f"docker push {organization}/{gpu_version}", dry_run)
+    try_and_handle(f"docker push {organization}/{cpp_dev_cpu_version}", dry_run)
+    try_and_handle(f"docker push {organization}/{cpp_dev_gpu_version}", dry_run)
 
     # Tag nightly images with latest
     try_and_handle(
@@ -56,10 +68,20 @@
         f"docker tag {organization}/{gpu_version} {organization}/{project}:latest-gpu",
         dry_run,
     )
+    try_and_handle(
+        f"docker tag {organization}/{cpp_dev_cpu_version} {organization}/{project}:latest-cpp-dev-cpu",
+        dry_run,
+    )
+    try_and_handle(
+        f"docker tag {organization}/{cpp_dev_gpu_version} {organization}/{project}:latest-cpp-dev-gpu",
+        dry_run,
+    )
 
     # Push images with latest tag
     try_and_handle(f"docker push {organization}/{project}:latest-cpu", dry_run)
     try_and_handle(f"docker push {organization}/{project}:latest-gpu", dry_run)
+    try_and_handle(f"docker push {organization}/{project}:latest-cpp-dev-cpu", dry_run)
+    try_and_handle(f"docker push {organization}/{project}:latest-cpp-dev-gpu", dry_run)
 
     # Cleanup built images
     if args.cleanup:
````
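
The nightly script follows the same pattern with dated tags; assuming `pytorch` as the organization and that `get_nightly_version()` returns a date-style suffix (the value below is illustrative, not taken from this diff), the pushed images would look like:

```bash
# Dated nightly tags plus rolling "latest" tags for the CPP dev images
docker push pytorch/torchserve-nightly:cpp-dev-cpu-2024.03.12   # suffix illustrative
docker push pytorch/torchserve-nightly:cpp-dev-gpu-2024.03.12
docker push pytorch/torchserve-nightly:latest-cpp-dev-cpu
docker push pytorch/torchserve-nightly:latest-cpp-dev-gpu
```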

docs/Security.md (+2)

````diff
@@ -5,6 +5,7 @@
 | Version | Supported |
 |---------| ------------------ |
 | 0.9.0 | :white_check_mark: |
+| 0.10.0 | :white_check_mark: |
 
 
 ## How we do security
@@ -36,6 +37,7 @@ TorchServe as much as possible relies on automated tools to do security scanning
 2. Using private-key/certificate files
 
 You can find more details in the [configuration guide](https://pytorch.org/serve/configuration.html#enable-ssl)
+6. TorchServe supports token authorization: check [documentation](https://github.com/pytorch/serve/blob/master/docs/token_authorization_api.md) for more information.
 
 
 
````

````diff
@@ -1 +1,2 @@
-sentencepiece
+transformers==4.36.2
+sentencepiece==0.1.99
````

ts_scripts/install_dependencies.py (+3 -4)

````diff
@@ -209,10 +209,9 @@ def install_numactl(self):
         os.system(f"{self.sudo_cmd}apt-get install -y numactl")
 
     def install_cpp_dependencies(self):
-        if os.system("clang-tidy --version") != 0 or args.force:
-            os.system(
-                f"{self.sudo_cmd}apt-get install -y {' '.join(CPP_LINUX_DEPENDENCIES)}"
-            )
+        os.system(
+            f"{self.sudo_cmd}apt-get install -y {' '.join(CPP_LINUX_DEPENDENCIES)}"
+        )
 
     def install_neuronx_driver(self):
         # Configure Linux for Neuron repository updates
````
