Skip to content

Commit f33be46

Browse files
authored
Update SECURITY.md with extra security recommendations (#3041)
1 parent 1994aa0 commit f33be46

File tree

1 file changed

+8
-0
lines changed

1 file changed

+8
-0
lines changed

SECURITY.md

+8
Original file line numberDiff line numberDiff line change
@@ -36,6 +36,14 @@ TorchServe as much as possible relies on automated tools to do security scanning
3636
2. Using private-key/certificate files
3737

3838
You can find more details in the [configuration guide](https://pytorch.org/serve/configuration.html#enable-ssl)
39+
6. Prepare your model against bad inputs and prompt injections. Some recommendations:
40+
1. Pre-analysis: check how the model performs by default when exposed to prompt injection (e.g. using [fuzzing for prompt injection](https://github.com/FonduAI/awesome-prompt-injection?tab=readme-ov-file#tools)).
41+
2. Input Sanitation: Before feeding data to the model, sanitize inputs rigorously. This involves techniques such as:
42+
- Validation: Enforce strict rules on allowed characters and data types.
43+
- Filtering: Remove potentially malicious scripts or code fragments.
44+
- Encoding: Convert special characters into safe representations.
45+
- Verification: Run tooling that identifies potential script injections (e.g. [models that detect prompt injection attempts](https://python.langchain.com/docs/guides/safety/hugging_face_prompt_injection)).
46+
7. If you intend to run multiple models in parallel with shared memory, it is your responsibility to ensure the models do not interact or access each other's data. The primary areas of concern are tenant isolation, resource allocation, model sharing and hardware attacks.
3947

4048

4149

0 commit comments

Comments
 (0)