
Commit fbf81a2

committed Mar 13, 2025

more documentation

Signed-off-by: Nir Rozenbaum <nirro@il.ibm.com>

1 parent: f032b4c

2 files changed (+10 −3 lines)
config/manifests/vllm/cpu-deployment.yaml (+2 −2)
```diff
@@ -24,8 +24,8 @@ spec:
         - "8000"
         - "--enable-lora"
         - "--lora-modules"
-        - '{"name": "tweet-summary-0", "path": "/adapters/hub/models--ai-blond--Qwen-Qwen2.5-Coder-1.5B-Instruct-lora/snapshots/9cde18d8ed964b0519fb481cca6acd936b2ca811"}'
-        - '{"name": "tweet-summary-1", "path": "/adapters/hub/models--ai-blond--Qwen-Qwen2.5-Coder-1.5B-Instruct-lora/snapshots/9cde18d8ed964b0519fb481cca6acd936b2ca811"}'
+        - '{"name": "tweet-summary-0", "path": "/adapters/ai-blond/Qwen-Qwen2.5-Coder-1.5B-Instruct-lora_0"}'
+        - '{"name": "tweet-summary-1", "path": "/adapters/ai-blond/Qwen-Qwen2.5-Coder-1.5B-Instruct-lora_1"}'
         env:
         - name: PORT
           value: "8000"
```

site-src/guides/index.md (+8 −1)
```diff
@@ -5,7 +5,7 @@ This quickstart guide is intended for engineers familiar with k8s and model serv
 ## **Prerequisites**
 - Envoy Gateway [v1.2.1](https://gateway.envoyproxy.io/docs/install/install-yaml/#install-with-yaml) or higher
 - A cluster with:
-  - Support for services of typs `LoadBalancer`. (This can be validated by ensuring your Envoy Gateway is up and running).
+  - Support for services of type `LoadBalancer`. (This can be validated by ensuring your Envoy Gateway is up and running).
    For example, with Kind, you can follow [these steps](https://kind.sigs.k8s.io/docs/user/loadbalancer).
 
 ## **Steps**
```
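To make the `LoadBalancer` prerequisite concrete: on a cluster that supports it, a Service of that type is assigned an external address, which is what the validation above amounts to. A minimal sketch of such a Service (the name, selector, and ports are placeholders, not taken from the Envoy Gateway install):

```yaml
# Hypothetical Service of type LoadBalancer; on a supporting cluster
# the controller assigns it an EXTERNAL-IP.
apiVersion: v1
kind: Service
metadata:
  name: example-gateway      # placeholder
spec:
  type: LoadBalancer
  selector:
    app: example-gateway     # placeholder
  ports:
  - port: 80
    targetPort: 8080
```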
````diff
@@ -34,6 +34,13 @@ This quickstart guide is intended for engineers familiar with k8s and model serv
 
 #### CPU-Based Model Server
 
+For this setup, we use approximately 9.5GB of memory and 12 CPUs for each replica.
+While it is possible to deploy the model server with fewer resources, this is not recommended.
+For example, in our tests, loading the model with 8GB of memory and 1 CPU was possible, but it took almost 3.5 minutes, and inference requests took an unreasonable amount of time.
+In general, there is a tradeoff between the memory and CPU we allocate to our pods and the performance: the more memory and CPU we allocate, the better the performance we can get.
+After trying multiple configurations, we settled on 9.5GB of memory and 12 CPUs per replica for this sample, which gives reasonable response times. You can increase those numbers and may get even better response times.
+To modify the allocated resources, adjust the numbers in `./config/manifests/vllm/cpu-deployment.yaml` as needed.
+
 Deploy a sample vLLM deployment with the proper protocol to work with the LLM Instance Gateway.
 ```bash
 kubectl apply -f https://github.com/kubernetes-sigs/gateway-api-inference-extension/raw/main/config/manifests/vllm/cpu-deployment.yaml
````
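The resource guidance above maps to the standard Kubernetes `resources` stanza on the vLLM container. A minimal sketch of the kind of adjustment meant here (the exact field placement inside `cpu-deployment.yaml` is assumed, not quoted from it):

```yaml
# Sketch: per-replica allocation matching the numbers discussed above
# (~9.5GB of memory and 12 CPUs).
resources:
  requests:
    cpu: "12"
    memory: "9500Mi"
  limits:
    cpu: "12"
    memory: "9500Mi"
```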
