Commit 4849681

chauhang and msaroufim authored
Update PT2 examples readme (#3029)
* Update README.md: Add links for AOTInductor CPP examples
* Lint updates
* Update wordlist.txt

Co-authored-by: Mark Saroufim <[email protected]>
1 parent d60ddb0 commit 4849681

File tree

2 files changed: +6 −1 lines changed


examples/pt2/README.md (+5 −1)

```diff
@@ -49,7 +49,7 @@ opt_mod = torch.compile(mod)
 
 torchserve takes care of 4 and 5 for you while the remaining steps are your responsibility. You can do the exact same thing on the vast majority of TIMM or HuggingFace models.
 
-### Note
+### Compiler Cache
 
 `torch.compile()` is a JIT compiler and JIT compilers generally have a startup cost. To reduce the warm up time, `TorchInductor` already makes use of caching in `/tmp/torchinductor_USERID` of your machine
 
```

```diff
@@ -146,3 +146,7 @@ The example can be found [here](../large_models/segment_anything_fast/README.md)
 Diffusion Fast is a simple and efficient pytorch-native way of optimizing Stable Diffusion XL (SDXL) with 3x performance improvements compared to the original implementation. This is using `torch.compile`
 
 The example can be found [here](../large_models/diffusion_fast/README.md)
+
+## C++ AOTInductor examples
+
+AOTInductor is the Ahead-of-time-compiler, a specialized version of `TorchInductor`, designed to process exported PyTorch models, optimize them, and produce shared libraries as well as other relevant artifacts. These compiled artifacts are specifically crafted for deployment in non-Python environments. You can find the AOTInductor C++ examples [here](../cpp/aot_inductor)
```
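The AOTInductor flow that those C++ examples consume could be sketched like this on the Python side; the module and shapes are invented, and the packaging call (`torch._inductor.aoti_compile_and_package`) is an assumption based on recent PyTorch releases and may differ in older versions:

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1

model = Net().eval()
example_inputs = (torch.randn(4, 8),)

# torch.export captures the model as a standalone graph ahead of time...
ep = torch.export.export(model, example_inputs)

# ...and AOTInductor compiles it into a deployable artifact (a package
# wrapping a shared library) that a C++ runtime can load without a
# Python interpreter, as in the linked ../cpp/aot_inductor examples.
pkg_path = torch._inductor.aoti_compile_and_package(ep)
print(pkg_path)
```

The resulting artifact is what the non-Python deployment path loads; the C++ side of this workflow is shown in the examples the commit links to.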

ts_scripts/spellcheck_conf/wordlist.txt (+1 −0)

```diff
@@ -1215,3 +1215,4 @@ dylib
 libomp
 rpath
 venv
+TorchInductor
```

0 commit comments
