README.md (+12 −10)
@@ -10,7 +10,7 @@ Inference of Stable Diffusion and Flux in pure C/C++
 - Plain C/C++ implementation based on [ggml](https://github.com/ggerganov/ggml), working in the same way as [llama.cpp](https://github.com/ggerganov/llama.cpp)
 - Super lightweight and without external dependencies
-- SD1.x, SD2.x, SDXL and SD3 support
+- SD1.x, SD2.x, SDXL and [SD3/SD3.5](./docs/sd3.md) support
 - !!!The VAE in SDXL encounters NaN issues under FP16, but unfortunately, the ggml_conv_2d only operates under FP16. Hence, a parameter is needed to specify the VAE that has fixed the FP16 NaN issue. You can find it here: [SDXL VAE FP16 Fix](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/blob/main/sdxl_vae.safetensors).
 - [Flux-dev/Flux-schnell Support](./docs/flux.md)
@@ -197,23 +197,24 @@ usage: ./bin/sd [arguments]
 arguments:
   -h, --help                         show this help message and exit
   -M, --mode [MODEL]                 run mode (txt2img or img2img or convert, default: txt2img)
-  -t, --threads N                    number of threads to use during computation (default: -1).
+  -t, --threads N                    number of threads to use during computation (default: -1)
                                      If threads <= 0, then threads will be set to the number of CPU physical cores
   -m, --model [MODEL]                path to full model
   --diffusion-model                  path to the standalone diffusion model
   --clip_l                           path to the clip-l text encoder
-  --t5xxl                            path to the the t5xxl text encoder.
+  --clip_g                           path to the clip-g text encoder
+  --t5xxl                            path to the t5xxl text encoder
   --vae [VAE]                        path to vae
   --taesd [TAESD_PATH]               path to taesd. Using Tiny AutoEncoder for fast decoding (low quality)
   --control-net [CONTROL_PATH]       path to control net model
-  --embd-dir [EMBEDDING_PATH]        path to embeddings.
-  --stacked-id-embd-dir [DIR]        path to PHOTOMAKER stacked id embeddings.
-  --input-id-images-dir [DIR]        path to PHOTOMAKER input id images dir.
+  --embd-dir [EMBEDDING_PATH]        path to embeddings
+  --stacked-id-embd-dir [DIR]        path to PHOTOMAKER stacked id embeddings
+  --input-id-images-dir [DIR]        path to PHOTOMAKER input id images dir
   --normalize-input                  normalize PHOTOMAKER input id images
-  --upscale-model [ESRGAN_PATH]      path to esrgan model. Upscale images after generate, just RealESRGAN_x4plus_anime_6B supported by now.
+  --upscale-model [ESRGAN_PATH]      path to esrgan model. Upscale images after generate, just RealESRGAN_x4plus_anime_6B supported by now
   --upscale-repeats                  Run the ESRGAN upscaler this many times (default 1)
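To illustrate how the options above combine, here is a sketch of a txt2img invocation. The model and VAE paths are placeholders, and the `-p`/`-o` prompt and output flags are assumed from the full help text (they are not part of the excerpt shown in this diff); this is not a definitive command line.

```shell
# Hypothetical txt2img run; ./models/* paths are placeholders, not shipped files.
./bin/sd --mode txt2img \
  --model ./models/model.safetensors \
  --vae ./models/sdxl_vae.safetensors \   # FP16-fixed VAE, per the SDXL NaN note
  --threads -1 \                          # <= 0 means: use all physical CPU cores
  -p "a photo of a lighthouse at dusk" \
  -o output.png
```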