update PyTorch 2.x examples to use PyTorch >=2.3 (#3111)
* Updated SAM Fast and aot_compile example
* Updated Diffusion Fast example
* Updated GPT Fast example
---------
Co-authored-by: Matthias Reso <[email protected]>
`examples/large_models/segment_anything_fast/README.md` (+8 −14)

````diff
@@ -15,7 +15,8 @@ Details on how this is achieved can be found in this [blog](https://pytorch.org/
 
 #### Pre-requisites
 
-Needs python 3.10
+- Needs python 3.10
+- PyTorch >= 2.3.0
 
 `cd` to the example folder `examples/large_models/segment_anything_fast`
@@ -24,8 +25,6 @@ Install `Segment Anything Fast` by running
 chmod +x install_segment_anything_fast.sh
 source install_segment_anything_fast.sh
 ```
-
-Segment Anything Fast needs the nightly version of PyTorch. Hence the script is uninstalling PyTorch, its domain libraries and installing the nightly version of PyTorch.
````
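The diff above replaces the nightly-PyTorch requirement with a pinned minimum of `PyTorch >= 2.3.0`. A minimal sketch of how such a floor can be checked from a version string (the `meets_minimum` helper is hypothetical, not part of the example's scripts; local build suffixes like `+cu121` are ignored):

```python
def meets_minimum(version_str, minimum=(2, 3)):
    """Return True if a version string like '2.3.0+cu121' is >= minimum (major, minor)."""
    # Drop any local-version suffix, then compare the numeric major/minor parts.
    parts = version_str.split("+")[0].split(".")
    return tuple(int(p) for p in parts[:2]) >= minimum

print(meets_minimum("2.3.0+cu121"))  # True
print(meets_minimum("2.2.1"))        # False
```

In practice one would pass `torch.__version__` to such a check; tuple comparison handles two-digit minors (e.g. a future 2.10) correctly, which naive string comparison would not.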
`examples/pt2/README.md` (+2 −2)

````diff
@@ -1,6 +1,6 @@
 ## PyTorch 2.x integration
 
-PyTorch 2.0 brings more compiler options to PyTorch, for you that should mean better perf either in the form of lower latency or lower memory consumption.
+PyTorch 2.x brings more compiler options to PyTorch, for you that should mean better perf either in the form of lower latency or lower memory consumption.
 
 We strongly recommend you leverage newer hardware so for GPUs that would be an Ampere architecture. You'll get even more benefits from using server GPU deployments like A10G and A100 vs consumer cards. But you should expect to see some speedups for any Volta or Ampere architecture.
@@ ... @@
-PyTorch 2.0 supports several compiler backends and you pick which one you want by passing in an optional file `model_config.yaml` during your model packaging
+PyTorch 2.x supports several compiler backends and you pick which one you want by passing in an optional file `model_config.yaml` during your model packaging
````
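The backend chosen in `model_config.yaml` ultimately selects the backend of a `torch.compile` call. A standalone sketch of that backend knob in PyTorch 2.x, using the built-in `eager` debugging backend so no compiler toolchain is required (the `TinyModel` module is purely illustrative):

```python
import torch

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 2)

    def forward(self, x):
        return torch.relu(self.linear(x))

model = TinyModel()
# `backend` is the same choice the optional model_config.yaml exposes;
# "inductor" is the default, "eager" runs the captured graph without codegen.
compiled = torch.compile(model, backend="eager")
out = compiled(torch.randn(4, 8))
print(tuple(out.shape))
```

Swapping the `backend` string (e.g. to `"inductor"`) changes how the captured graph is lowered without touching the model code itself.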