This Multi-Image Generation Streamlit app is designed to generate multiple images based on a provided text prompt. Instead of using Stable Diffusion directly, this app chains Llama and Stable Diffusion to enhance the image generation process. Here’s how it works:
- For performance optimization, the models are compiled with [`torch.compile` using the OpenVINO backend](https://docs.openvino.ai/2024/openvino-workflow/torch-compile.html).
- The application leverages [TorchServe](https://pytorch.org/serve/) for efficient model serving and management.
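The compilation step above can be sketched as follows. This is a minimal illustration, not the app's actual code: it uses a tiny stand-in module instead of the Llama and Stable Diffusion models, and it falls back to the always-available `eager` backend when the `openvino` package is not installed.

```python
# Sketch: compiling a model with torch.compile and the OpenVINO backend.
# The stand-in model and the eager fallback are assumptions for illustration;
# the app itself compiles the Llama and Stable Diffusion models.
import importlib.util

import torch
import torch.nn as nn

if importlib.util.find_spec("openvino") is not None:
    import openvino.torch  # noqa: F401  # registers the "openvino" backend
    backend = "openvino"
else:
    backend = "eager"  # fallback so this sketch runs without OpenVINO

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
compiled = torch.compile(model, backend=backend)

with torch.no_grad():
    out = compiled(torch.randn(4, 16))
print(out.shape)  # torch.Size([4, 8])
```

After the first call triggers compilation, subsequent calls reuse the compiled graph, which is where the performance benefit comes from.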
</details>
## What to expect
After launching the Docker container using the `docker run ..` command displayed after a successful build, you can access two separate Streamlit applications:
1. TorchServe Server App (running at http://localhost:8084) to start/stop TorchServe, load/register models, scale up/down workers.
2. Client App (running at http://localhost:8085) where you can enter a prompt for image generation.
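Behind the client app, requests ultimately reach TorchServe's inference API. The sketch below shows the general shape of such a request; the model name, field names, and image count are hypothetical placeholders, not taken from this app's handler code.

```python
# Hypothetical sketch of a client-side payload for a TorchServe inference
# request; "multi-image" and the JSON fields are assumed names for
# illustration, not this app's actual contract.
import json

payload = json.dumps({
    "prompt": "a photo of a cat sitting on a windowsill",
    "num_images": 4,  # assumed parameter name
})

# With TorchServe running, a client could POST this payload, e.g.:
#   requests.post("http://localhost:8080/predictions/multi-image", data=payload)
decoded = json.loads(payload)
print(decoded["num_images"])  # 4
```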
> Note: You can also run a quick benchmark comparing the performance of Stable Diffusion in eager mode against torch.compile with the inductor and openvino backends.
> Review the `docker run ..` command displayed after a successful build for benchmarking instructions.
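The eager-vs-compiled comparison in the note above boils down to timing the same forward pass under each mode. The following is a minimal sketch of that pattern using a small stand-in model so it runs anywhere; the actual benchmark script times Stable Diffusion with the inductor and openvino backends, whereas this uses the dependency-free `eager` backend for the compiled variant.

```python
# Sketch of an eager-vs-torch.compile timing loop; the stand-in model and the
# "eager" compile backend are assumptions so the example runs without extra
# compiler dependencies.
import time

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
x = torch.randn(8, 64)

def bench(fn, iters=50):
    fn(x)  # warm-up; triggers compilation for compiled variants
    start = time.perf_counter()
    for _ in range(iters):
        fn(x)
    return (time.perf_counter() - start) / iters

eager_s = bench(model)
compiled_s = bench(torch.compile(model, backend="eager"))
print(f"eager: {eager_s * 1e6:.1f} us/iter, compiled: {compiled_s * 1e6:.1f} us/iter")
```

Excluding the warm-up call from the timed loop matters: the first invocation of a compiled model pays the one-time compilation cost.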
#### Sample Output of Starting the App:
</details>
#### Sample Output of Stable Diffusion Benchmarking:
To run Stable Diffusion benchmarking, use the `sd-benchmark.py` script. See below for a sample console output.
<details>
</details>
#### Sample Output of Stable Diffusion Benchmarking with Profiling:
To run Stable Diffusion benchmarking with profiling, use `--run_profiling` or `-rp`. See below for a sample console output. Sample profiling benchmark output files are available in [assets/benchmark_results_20241123_044407/](https://github.com/pytorch/serve/tree/master/examples/usecases/llm_diffusion_serving_app/assets/benchmark_results_20241123_044407).
<details>