Releases: invoke-ai/InvokeAI
v5.7.0
This release upgrades the Workflow Editor's Linear View to a more fully-featured Form Builder. It also includes many other fixes and enhancements, including the adoption of @skunkworxdark's excellent metadata nodes into Invoke's core nodes.
The launcher has recently been updated to v1.4.1, fixing a minor memory leak.
Form Builder
Nodeologists may now create more sophisticated UIs for their workflows using the Form Builder. This replaces the older Linear View feature.
In addition to Node Fields, you may add Heading, Text, Container and Divider elements to the form. Some form elements are configurable. For example, Containers support row or column layouts, and certain Node Field types can render different UI components.
Here's a brief demo of the Form Builder, touching on the core functionality:
Screen.Recording.2025-02-27.at.2.32.12.pm.mov
Your existing workflows with the Linear View fields will automatically be migrated to the new format.
We'll be iterating on the Form Builder and extending its capabilities in future updates.
Other Changes
@skunkworxdark's Metadata Nodes ship with Invoke
We are pleased to bring this popular node pack into the core Invoke repo! Thanks to @skunkworxdark for allowing us to adopt these nodes, and for their continued support of the project.
After you update to v5.7.0, if you still have the node pack installed as custom nodes, you will see an error on startup saying these nodes are already installed. Everything should still work - but you'll need to delete the custom node pack to get rid of the error.
Enhancements
- Increase default VAE tile size to 1024, reducing "grid" artifacts in images generated on the Upscaling tab.
- Failed or canceled queue items may be retried via the queue tab.
- Canvas color picker now supports transparency.
- Canvas color picker shows RGBA values next to it.
- Minor redesign/improved styles throughout the Workflow Editor.
- When attempting to load a workflow while you have unsaved changes, a dialog will appear asking you to confirm. Previously it would just load the workflow and you'd lose any unsaved work.
- When a node has an invalid field, its title will be error-colored.
- Less ginormous image field component in nodes.
- Node fields now have editable descriptions.
- Double-click a node to zoom to it.
- Click the bullseye icon in a Form Builder node field to zoom to the node.
- ❗Minor Breaking Change: Board fields now have an `Auto` option in the drop-down. When set to `Auto`, the auto-add board will be used for that board field. `Auto` is the new default. Workflows that previously had `None (Uncategorized)` selected will now have `Auto` selected.
- Add `Dynamic Prompts (Random)` and `Dynamic Prompts (Combinatorial)` modes to the `String Generator` node.
- Add `Image Generator` node with `Images from Board` mode. Select a board and category to run a batch over its images.
Fixes
- Canvas mouse cursor disappears when certain layer types and tools are selected.
- Canvas color picker doesn't work when certain layer types are selected.
- Sometimes mask layers don't render until you zoom or pan.
- When using shift-click to draw a straight line, if the canvas was moved too much between the clicks, the line got cut off.
- Incorrect node suggestions when dropping an edge into empty space.
- When loading a workflow with fields that reference unavailable models, the fields were not always reset correctly.
- If an image collection field referenced images that were deleted, it was impossible to delete them without emptying the whole collection.
- Lag/stutters in the Add Node popover.
- When deleting a board and its images, we didn't check if any of the deleted images were used in an image collection field, potentially leading to errors when attempting to use a nonexistent image.
Internal
- Upgraded `reactflow` to v12. This major release provides no new user-facing features, but improves performance.
- Upgraded `@reduxjs/toolkit` to latest. A new utility allows for more efficient cache management and yields a minor perf improvement to gallery load times.
- Numerous performance improvements throughout the workflow editor: many code paths were revised, components were restructured, and some CSS transitions were disabled.
- Substantial performance improvement for batch queuing logic (i.e. the stuff that happens between clicking Invoke and the progress bar starts moving).
- Improved custom node loading. For each node pack, if an error occurs while loading it, importing of that pack's nodes will stop and Invoke will skip to the next node pack. This may result in only some nodes from a pack loading, but the app will still run. Previously, any error prevented Invoke from starting up.
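The per-pack error handling behaves roughly like the sketch below. The function name and its arguments are hypothetical, for illustration only - not Invoke's actual loader:

```python
import importlib
import logging

logger = logging.getLogger(__name__)

def load_node_packs(pack_module_names: list[str]) -> dict[str, Exception]:
    """Import each custom node pack, skipping to the next pack on error.

    Returns a mapping of pack name -> exception for packs that failed,
    so problems can be reported without preventing the app from starting.
    """
    failures: dict[str, Exception] = {}
    for name in pack_module_names:
        try:
            importlib.import_module(name)
        except Exception as exc:  # an error in one pack must not stop the app
            logger.warning("Skipping custom node pack %s: %s", name, exc)
            failures[name] = exc
    return failures
```

The key difference from the old behavior is the `try`/`except` around each pack: one bad pack is logged and skipped instead of aborting startup.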
Installing and Updating
The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Follow the Quick Start guide to get started with the launcher.
If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.
The launcher has recently been updated to v1.4.1, fixing a minor memory leak.
What's Changed
- Increase default VAE tile size in upscaling tab by @RyanJDick in #7644
- feat: workflow builder by @psychedelicious in #7608
- chore: bump version to v5.7.0a1 by @psychedelicious in #7642
- Fix container build (frontend) by @ebr in #7647
- perf(ui): workflow editor misc by @psychedelicious in #7645
- feat: retry queue items by @psychedelicious in #7649
- fix,feat(ui): canvas improvements by @psychedelicious in #7651
- fix(ui): omnipresent pencil on board name by @maryhipp in #7655
- ui: workflow builder iteration by @psychedelicious in #7654
- workflow builder iteration 2 by @psychedelicious in #7657
- workflow builder iteration 3 by @psychedelicious in #7658
- chore: bump version to v5.7.0rc1 by @psychedelicious in #7663
- workflow builder iteration 4 by @psychedelicious in #7664
- fix(ui): star button not working on Chrome by @psychedelicious in #7669
- fix: weblate merge conflict issue by @psychedelicious in #7670
- fix(ui): do not render studio until destination is loaded by @maryhipp in #7672
- fix(ui): reset form initial values when workflow is saved by @psychedelicious in #7678
- feat(ui): use auto-add board as default in workflow editor by @psychedelicious in #7677
- (ui): add actions for copying image and opening image in new tab by @maryhipp in #7681
- fix(ui): make sure notes node exists like we do for invocation nodes by @maryhipp in #7684
- refactor(ui): form layout styling by @psychedelicious in #7680
- feat(ui): async batch generators & board -> image generator by @psychedelicious in #7685
- ui: translations update from weblate by @weblate in #7679
- chore: bump version to v5.7.0rc2 by @psychedelicious in #7687
- fix(api): fix args in other places that use get_all_board_image_names_for_board by @maryhipp in #7690
- fix(backend): ValuesToInsertTuple.retried_from_item_id should be an int by @ebr in #7689
- revert: images from board requires a board (does not work on uncategorized) by @psychedelicious in #7694
- fix(ui): image usage checks collection fields by @psychedelicious in #7695
- feat(app): do not pull PIL image from disk in image primitive by @psychedelicious in #7696
- feat(app): adopt metadata linked nodes by @psychedelicious in #7697
- feat(app): improved custom node loading by @psychedelicious in #7698
- ui: translations update from weblate by @weblate in #7692
- chore: bump version to v5.7.0 by @psychedelicious in #7699
- fix(ui): form element settings obscured by container by @psychedelicious in #7701
Full Changelog: v5.6.2...v5.7.0
v5.7.0rc2
This release introduces the Workflow Builder, a form builder that replaces the Workflow Editor's Linear View, plus many other fixes and enhancements.
The launcher has recently been updated to v1.4.1, fixing a minor memory leak.
Changes since v5.7.0rc1
- Tweaked form builder layout and styling, various minor fixes.
- Fixed issue where you could scroll a container during drag-and-drop and be unable to scroll back.
- Fixed issue where Star button in gallery didn't work on Chrome.
- Click the bullseye icon in a Form Builder node field to zoom to the node.
- ❗Minor Breaking Change: Board fields now have an `Auto` option in the drop-down. When set to `Auto`, the auto-add board will be used for that board field. `Auto` is the new default. Workflows that previously had `None (Uncategorized)` selected will now have `Auto` selected.
- Add `Dynamic Prompts (Random)` and `Dynamic Prompts (Combinatorial)` modes to the `String Generator` node.
- Add `Image Generator` node with `Images from Board` mode. Select a board and category to run a batch over all images in the board.
- Substantial performance improvement for batch queuing logic (i.e. the stuff that happens between clicking Invoke and the progress bar starts moving).
Workflow Builder
We will expand on these notes for the stable release, but for now, here is a broad overview of the builder.
- Workflows Linear View is replaced with a more fully-featured form builder.
- The new drag-and-drop form builder supports a number of element types (more to come in future releases):
- Heading
- Text
- Container (row or column layout)
- Divider
- Node Fields
- Workflows with the Linear View fields will automatically be migrated to the new format when you open them.
- Certain node fields added to Builder are configurable. Integers and floats can render as number inputs, sliders, or both. String inputs can render as single-line inputs or multi-line text areas.
Other Changes
Enhancements
- Increase default VAE tile size to 1024, reducing "grid" artifacts in images generated on the Upscaling tab.
- Failed or canceled queue items may be retried via the queue tab.
- Canvas color picker now supports transparency.
- Canvas color picker shows RGBA values next to it.
- Minor redesign/improved styles throughout the Workflow Editor.
- When attempting to load a workflow while you have unsaved changes, a dialog will appear asking you to confirm. Previously it would just load the workflow and you'd lose any unsaved work.
- When a node has an invalid field, its title will be error-colored.
- Less ginormous image field component in nodes.
- Node fields now have editable descriptions.
- Double-click a node to zoom to it.
- Click the bullseye icon in a Form Builder node field to zoom to the node.
- ❗Minor Breaking Change: Board fields now have an `Auto` option in the drop-down. When set to `Auto`, the auto-add board will be used for that board field. `Auto` is the new default. Workflows that previously had `None (Uncategorized)` selected will now have `Auto` selected.
- Add `Dynamic Prompts (Random)` and `Dynamic Prompts (Combinatorial)` modes to the `String Generator` node.
- Add `Image Generator` node with `Images from Board` mode. Select a board and category to run a batch over all images in the board.
- Substantial performance improvement for batch queuing logic (i.e. the stuff that happens between clicking Invoke and the progress bar starts moving).
Fixes
- Canvas mouse cursor disappears when certain layer types and tools are selected.
- Canvas color picker doesn't work when certain layer types are selected.
- Sometimes mask layers don't render until you zoom or pan.
- When using shift-click to draw a straight line, if the canvas was moved too much between the clicks, the line got cut off.
- Incorrect node suggestions when dropping an edge into empty space.
- When loading a workflow with fields that reference unavailable models, the fields were not always reset correctly.
- If an image collection field referenced images that were deleted, it was impossible to delete them without emptying the whole collection.
- Lag/stutters in the Add Node popover.
Internal
- Upgraded `reactflow` to v12. This major release provides no new user-facing features, but improves performance.
- Upgraded `@reduxjs/toolkit` to latest. A new utility allows for more efficient cache management and yields a minor perf improvement to gallery load times.
- Numerous performance improvements throughout the workflow editor: many code paths were revised, components were restructured, and some CSS transitions were disabled.
- Substantial performance improvement for batch queuing logic (i.e. the stuff that happens between clicking Invoke and the progress bar starts moving).
Installing and Updating
The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Follow the Quick Start guide to get started with the launcher.
If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.
The launcher has recently been updated to v1.4.1, fixing a minor memory leak.
What's Changed
- Increase default VAE tile size in upscaling tab by @RyanJDick in #7644
- feat: workflow builder by @psychedelicious in #7608
- chore: bump version to v5.7.0a1 by @psychedelicious in #7642
- Fix container build (frontend) by @ebr in #7647
- perf(ui): workflow editor misc by @psychedelicious in #7645
- feat: retry queue items by @psychedelicious in #7649
- fix,feat(ui): canvas improvements by @psychedelicious in #7651
- fix(ui): omnipresent pencil on board name by @maryhipp in #7655
- ui: workflow builder iteration by @psychedelicious in #7654
- workflow builder iteration 2 by @psychedelicious in #7657
- workflow builder iteration 3 by @psychedelicious in #7658
- chore: bump version to v5.7.0rc1 by @psychedelicious in #7663
- workflow builder iteration 4 by @psychedelicious in #7664
- fix(ui): star button not working on Chrome by @psychedelicious in #7669
- fix: weblate merge conflict issue by @psychedelicious in #7670
- fix(ui): do not render studio until destination is loaded by @maryhipp in #7672
- fix(ui): reset form initial values when workflow is saved by @psychedelicious in #7678
- feat(ui): use auto-add board as default in workflow editor by @psychedelicious in #7677
- (ui): add actions for copying image and opening image in new tab by @maryhipp in #7681
- fix(ui): make sure notes node exists like we do for invocation nodes by @maryhipp in #7684
- refactor(ui): form layout styling by @psychedelicious in #7680
- feat(ui): async batch generators & board -> image generator by @psychedelicious in #7685
- ui: translations update from weblate by @weblate in #7679
- chore: bump version to v5.7.0rc2 by @psychedelicious in #7687
Full Changelog: v5.6.2...v5.7.0rc2
v5.7.0rc1
This release introduces the Workflow Builder, a form builder that replaces the Workflow Editor's Linear View.
It also includes a number of fixes and enhancements. The launcher has also been updated to v1.4.1, fixing a minor memory leak.
Workflow Builder
We will expand on these notes for the stable release, but for now, here is a broad overview of the builder.
- Workflows Linear View is replaced with a more fully-featured form builder.
- The new drag-and-drop form builder supports a number of element types (more to come in future releases):
- Heading
- Text
- Container (row or column layout)
- Divider
- Node Fields
- Workflows with the Linear View fields will automatically be migrated to the new format when you open them.
- Certain node fields added to Builder are configurable. Integers and floats can render as number inputs, sliders, or both. String inputs can render as single-line inputs or multi-line text areas.
Other Changes
Enhancements
- Increase default VAE tile size to 1024, reducing "grid" artifacts in images generated on the Upscaling tab.
- Failed or canceled queue items may be retried via the queue tab.
- Canvas color picker now supports transparency.
- Canvas color picker shows RGBA values next to it.
- Minor redesign/improved styles throughout the Workflow Editor.
- When attempting to load a workflow while you have unsaved changes, a dialog will appear asking you to confirm. Previously it would just load the workflow and you'd lose any unsaved work.
- When a node has an invalid field, its title will be error-colored.
- Less ginormous image field component in nodes.
- Node fields now have editable descriptions.
- Double-click a node to zoom to it.
Fixes
- Canvas mouse cursor disappears when certain layer types and tools are selected.
- Canvas color picker doesn't work when certain layer types are selected.
- Sometimes mask layers don't render until you zoom or pan.
- When using shift-click to draw a straight line, if the canvas was moved too much between the clicks, the line got cut off.
- Incorrect node suggestions when dropping an edge into empty space.
- When loading a workflow with fields that reference unavailable models, the fields were not always reset correctly.
- If an image collection field referenced images that were deleted, it was impossible to delete them without emptying the whole collection.
- Lag/stutters in the Add Node popover.
Internal
- Upgraded `reactflow` to v12. This major release provides no new user-facing features, but improves performance.
- Upgraded `@reduxjs/toolkit` to latest. A new utility allows for more efficient cache management and yields a minor perf improvement to gallery load times.
- Numerous performance improvements throughout the workflow editor: many code paths were revised, components were restructured, and some CSS transitions were disabled.
Installing and Updating
The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Follow the Quick Start guide to get started with the launcher.
If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.
What's Changed
- Increase default VAE tile size in upscaling tab by @RyanJDick in #7644
- feat: workflow builder by @psychedelicious in #7608
- chore: bump version to v5.7.0a1 by @psychedelicious in #7642
- Fix container build (frontend) by @ebr in #7647
- perf(ui): workflow editor misc by @psychedelicious in #7645
- feat: retry queue items by @psychedelicious in #7649
- fix,feat(ui): canvas improvements by @psychedelicious in #7651
- fix(ui): omnipresent pencil on board name by @maryhipp in #7655
- ui: workflow builder iteration by @psychedelicious in #7654
- workflow builder iteration 2 by @psychedelicious in #7657
- workflow builder iteration 3 by @psychedelicious in #7658
- chore: bump version to v5.7.0rc1 by @psychedelicious in #7663
Full Changelog: v5.6.2...v5.7.0rc1
v5.6.2
This minor release includes the following enhancements and fixes:
- Make the Upscaling tab's Scheduler and CFG Scale settings independent from the Canvas tab. We've found that the best Scheduler and CFG Scale settings for Canvas rarely work well for Upscaling, and vice-versa. Separating the settings prevents your Canvas settings from causing bad upscale results.
- Fixed issue with Multiply Image Channel node loading images with different channel counts. Thanks @dunkeroni!
- Fixed typos in docs. Thanks @maximevtush!
- Fixed issue where the app scrolls out of view, especially when using the launcher. Again. Hopefully.
- Update internal build toolchain dependencies.
- Updated translations. Thanks @Harvester62, @Linos1391, @Ery4z!
Installing and Updating
The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Follow the Quick Start guide to get started with the launcher.
If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.
What's Changed
- Fix: Multiply Image Channel allows RGB inputs again by @dunkeroni in #7633
- fix: typos in documentation files by @maximevtush in #7629
- feat(ui): separate upscaling settings so that tab does not inherit from main generation settings by @maryhipp in #7635
- fix(ui): prevent overflow on document root by @psychedelicious in #7636
- Add metadata field extractor node by @jazzhaiku in #7638
- ui: translations update from weblate by @weblate in #7622
- Upgrade vite, vitest, and related plugins to latest versions by @ebr in #7640
- chore: bump version to v5.6.2 by @psychedelicious in #7641
New Contributors
- @maximevtush made their first contribution in #7629
- @jazzhaiku made their first contribution in #7638
Full Changelog: v5.6.1...v5.6.2
v5.6.1
This release includes a handful of minor improvements and fixes.
- Improvements to memory management defaults, resulting in fewer OOMs.
- Expanded FLUX LoRA compatibility.
- On-demand model cache clearing via button on the Queue tab.
- Canvas Adjust Image filter (i.e. levels, hue, etc). Thanks @dunkeroni!
- Button to cancel all queue items except current. Thanks @rikublock!
- Copy Canvas/Bbox as image via Canvas right-click menu.
- Paste image into Canvas/Bbox via normal paste hotkey. You will be prompted for where the image should be placed.
- Allow `Collect` nodes to be connected directly to `Iterate` nodes.
- Allow `Any` type node inputs to accept collections. For example, the `Metadata Item` node's value field now accepts collections.
- Improved error messages when invalid graphs are queued.
- LoRA Loader node LoRA collection input is now optional, supporting @skunkworxdark's metadata nodes. Thanks @skunkworxdark!
- Fixed issues where staging area got stuck if one image failed to load (e.g. if it was deleted).
- Updated translations. Thanks @Harvester62, @Linos1391, @rikublock, @Ery4z!
Installing and Updating
The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Follow the Quick Start guide to get started with the launcher.
If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.
What's Changed
- Fix bug with some LoRA variants when applied to bitsandbytes NF4 quantized models by @RyanJDick in #7577
- docs: typo in manual docs install command by @psychedelicious in #7586
- Improve MaskOutput dimension consistency by @RyanJDick in #7591
- Support FLUX OneTrainer LoRA formats (incl. DoRA) by @RyanJDick in #7590
- Fix T5EncoderField initialization in SD3 model loader by @RyanJDick in #7604
- Make the default max RAM cache size more conservative by @RyanJDick in #7603
- Add endpoint for emptying the model cache by @RyanJDick in #7602
- Feature: Adjust Image filter by @dunkeroni in #7594
- feat(ui): add cancel all except current queue item functionality by @rikublock in #7395
- ui: translations update from weblate by @weblate in #7600
- LoRA Collection Loader make optional LoRA Collection input by @skunkworxdark in #7579
- feat(ui): support copy of canvas by @psychedelicious in #7617
- feat: better graph validation errors by @psychedelicious in #7614
- feat(ui): support pasting directly to canvas by @psychedelicious in #7619
- ui: translations update from weblate by @weblate in #7621
- chore: bump version to v5.6.1rc1 by @psychedelicious in #7618
- fix(ui): restore missing translation by @maryhipp in #7625
- feat(ui): safe clipboard handling by @psychedelicious in #7626
- docs: cleanup faq by @psychedelicious in #7627
- docs: install troubleshooting by @psychedelicious in #7628
- fix(ui): [object object] in OOM toast by @maryhipp in #7630
- feat(ui): canvas image error handling by @psychedelicious in #7632
- chore: bump version to v5.6.1 by @psychedelicious in #7631
Full Changelog: v5.6.0...v5.6.1
v5.6.1rc1
This release includes a handful of minor improvements and fixes.
- Improvements to memory management defaults, resulting in fewer OOMs.
- Expanded FLUX LoRA compatibility.
- On-demand model cache clearing via button on the Queue tab.
- Canvas Adjust Image filter (i.e. levels, hue, etc). Thanks @dunkeroni!
- Button to cancel all queue items except current. Thanks @rikublock!
- Copy Canvas/Bbox as image via Canvas right-click menu.
- Paste image into Canvas/Bbox via normal paste hotkey. You will be prompted for where the image should be placed.
- Allow `Collect` nodes to be connected directly to `Iterate` nodes.
- Allow `Any` type node inputs to accept collections. For example, the `Metadata Item` node's value field now accepts collections.
- Improved error messages when invalid graphs are queued.
- LoRA Loader node LoRA collection input is now optional, supporting @skunkworxdark's metadata nodes. Thanks @skunkworxdark!
- Updated translations. Thanks @Harvester62, @Linos1391, @rikublock, @Ery4z!
Installing and Updating
The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Follow the Quick Start guide to get started with the launcher.
If you don't want to use the launcher, or need a headless install, you can follow the manual install guide.
What's Changed
- Fix bug with some LoRA variants when applied to bitsandbytes NF4 quantized models by @RyanJDick in #7577
- docs: typo in manual docs install command by @psychedelicious in #7586
- Improve MaskOutput dimension consistency by @RyanJDick in #7591
- Support FLUX OneTrainer LoRA formats (incl. DoRA) by @RyanJDick in #7590
- Fix T5EncoderField initialization in SD3 model loader by @RyanJDick in #7604
- Make the default max RAM cache size more conservative by @RyanJDick in #7603
- Add endpoint for emptying the model cache by @RyanJDick in #7602
- Feature: Adjust Image filter by @dunkeroni in #7594
- feat(ui): add cancel all except current queue item functionality by @rikublock in #7395
- ui: translations update from weblate by @weblate in #7600
- LoRA Collection Loader make optional LoRA Collection input by @skunkworxdark in #7579
- feat(ui): support copy of canvas by @psychedelicious in #7617
- feat: better graph validation errors by @psychedelicious in #7614
- feat(ui): support pasting directly to canvas by @psychedelicious in #7619
- ui: translations update from weblate by @weblate in #7621
- chore: bump version to v5.6.1rc1 by @psychedelicious in #7618
Full Changelog: v5.6.0...v5.6.1rc1
v5.6.0
This release brings major improvements to Invoke's memory management, new Blur and Noise Canvas filters, and expanded batch capabilities in Workflows.
Memory Management Improvements (aka Low-VRAM mode)
The goal of these changes is to allow users with low-VRAM GPUs to run even the beefiest models, like the 24GB unquantised FLUX dev model.
Despite the focus on low-VRAM GPUs and the colloquial name "Low-VRAM mode", most users benefit from these improvements to Invoke's memory management.
Low-VRAM mode works on systems with dedicated GPUs (Nvidia GPUs on Windows/Linux and AMD GPUs on Linux). It allows you to generate even if your GPU doesn't have enough VRAM to hold full models.
Low-VRAM mode involves 4 features, each of which can be configured or fine-tuned:
- Partial model loading
- Dynamic RAM and VRAM cache sizes
- Working memory
- Keeping a copy of models in RAM
Most users should only need to enable partial loading by adding this line to their `invokeai.yaml`:

`enable_partial_loading: true`
🚨 Windows users should also disable the Nvidia sysmem fallback.
For more details and instructions for fine-tuning, see the Low-VRAM mode docs.
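Putting the four features together, a fine-tuned `invokeai.yaml` might look like the fragment below. Only `enable_partial_loading` is confirmed above; the other keys are taken from the Low-VRAM mode docs at the time of writing and may differ between versions, so treat them as illustrative and confirm against the docs:

```yaml
# Enable partial model loading - the one setting most users need
enable_partial_loading: true

# Illustrative fine-tuning keys (commented out) - confirm the exact names
# in the Low-VRAM mode docs for your version before using them:
# max_cache_ram_gb: 16      # model cache size in RAM
# max_cache_vram_gb: 8      # model cache size in VRAM
# device_working_mem_gb: 3  # working memory reserved for inference
```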
Thanks to @RyanJDick for designing and implementing these improvements!
Workflow Batches
We've expanded the capabilities for Batches in Workflows:
- Float, integer and string batch data types
- Batch collection generators
- Grouped (aka zipped) batches
Float, integer and string batch data types
There's a new batch node for each of the new data types. They work the same as the existing image batch node.

You can add a list of values directly in the node, but you'll probably find generators to be a nicer way to set up your batch.
Batch collection generators
These are essentially nodes that run in the frontend and generate a list of values to use in a batch node. Included in the release are these generators:
- Arithmetic Sequence (float, integer): Generate a sequence of `count` numbers, starting from `start`, that increase or decrease by `step`.
- Linear Distribution (float, integer): Generate a distribution of `count` numbers, starting with `start` and ending with `end`.
- Uniform Random Distribution (float, integer): Generate a random distribution of `count` numbers from `min` to `max`. You can set a seed for reproducible sequences.
- Parse String (float, integer, string): Split the `input` on the specified string, parsing each value as a float, integer or string. You can load the input from a `.txt` file. Use `\n` as the split string to split on new lines.
Screen.Recording.2025-01-21.at.9.27.05.pm.mov
You'll notice the different handle icon for batch generators. These nodes cannot connect to non-batch nodes, which run in the backend.
Grouped (aka zipped) batches
When you use multiple batches, we run the graph once for every possible combination of values in the batch collections. In mathematical terms, we "take the Cartesian product" of all batch collections.
Consider this simple workflow that joins two strings:
We have two batch collections, each with two strings. This results in `2 * 2 = 4` runs, one for each possible combination of the strings. We get these outputs:
- "a cute cat"
- "a cute dog"
- "a ferocious cat"
- "a ferocious dog"
But what if we wanted to group or "zip" up the two string collections into a single collection, executing the graph once for each pair of strings? This is now possible - we can set both nodes to the same batch group:

This results in 2 runs, one for each "pair" of strings. We get these outputs:
- "a cute cat"
- "a ferocious dog"
You can use grouped and ungrouped batches arbitrarily - go wild! The Invoke button tooltip lets you know how many executions you'll end up with for the given batch nodes.
Keep in mind that grouped batch collections must have the same size, else we cannot zip them up into one collection. The Invoke button will grey out and let you know there is a mismatch.
Details and technical explanation
On the backend, we first zip each group's batch collections into a single collection. Ungrouped batch collections remain as-is.
Then, we take the product of all batch collections. If there is only a single collection (i.e. a single ungrouped batch node, or multiple batch nodes all with the same group), the product operation outputs the single collection as-is.
There are 5 slots for groups, plus a 6th ungrouped option:
- None: Batch nodes will always be used as separate collections for the Cartesian product operation.
- Groups 1 - 5: Batch nodes within a given group will first be zipped into a single collection, before the Cartesian product operation.
All Changes
Fixes
- Fix issue where excessively long board names could cause performance issues.
- Fix error when using DPM++ schedulers with certain models. Thanks @Vargol!
- Fix (maybe, hopefully) the app scrolling off screen when run via launcher.
- Fix link to `Scale` setting's support docs.
- Fix image quality degradation when inpainting an image repeatedly.
- Fix issue where transparent Canvas filter previews blended with the unfiltered parent layer.
Enhancements
- Support float, integer and string batch data types.
- Add batch data generators.
- Support grouped (aka zipped) batches.
- Reduce peak memory during FLUX model load.
- Add Noise and Blur filters to Canvas. Adding noise or blurring before generation can add a lot of detail, especially when generating from a rough sketch. Thanks @dunkeroni!
- Reworked error handling when installing models from a URL.
- Updated first run screen and OOM error toast with links to Low-VRAM mode docs.
- Add a small handful of nodes designed to support inpainting in workflows. See #7583 for more details and an example workflow.
Internal
- Tidied some unused variables. Thanks @rikublock!
- Added typegen check to CI pipeline. Thanks @rikublock!
Docs
- Added stereogram nodes to Community Nodes docs. Thanks @simonfuhrmann!
- Updated installation-related docs (quick start, manual install, dev install).
- Add Low-VRAM mode docs.
Installing and Updating
The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Follow the Quick Start guide to get started with the launcher.
If you already have the launcher, you can use it to update your existing install.
We've just updated the launcher to v1.3.2. Review the launcher releases for a changelog. To update the launcher itself, download the latest version from the quick start guide - the download links there are kept up to date.
Legacy Scripts (not recommended!)
We recommend using the launcher, as described in the previous section!
To install or update with the outdated legacy scripts 😱, download the latest legacy scripts and follow the legacy scripts instructions.
What's Changed
- Update Readme with new Installer Instructions by @hipsterusername in #7455
- docs: fix installation docs home by @psychedelicious in #7470
- docs: fix installation docs home again by @psychedelicious in #7471
- feat(ci): add typegen check workflow by @rikublock in #7463
- docs: update download links for launcher by @psychedelicious in #7489
- Add Stereogram Nodes to communityNodes.md by @simonfuhrmann in #7493
- Partial Loading PR1: Tidy ModelCache by @RyanJDick in #7492
- Partial Loading PR2: Add utils to support partial loading of models from CPU to GPU by @RyanJDick in #7494
- Partial Loading PR3: Integrate 1) partial loading, 2) quantized models, 3) model patching by @RyanJDick in #7500
- Correct Scale Informational Popover by @hipsterusername in #7499
- docs: install guides by @psychedelicious in #7508
- docs: no need to specify version for dev env setup by @psychedelicious in #7510
- feat(ui): reset canvas layers only resets the layers by @psychedelicious in #7511
- refactor(ui): mm model install error handling by @psychedelicious in #7512
- fix(api): limit board_name length to 300 characters by @maryhipp in #7515
- fix(app): remove obsolete DEFAULT_PRECISION variable by @rikublock in #74...
v5.6.0rc4
This release brings major improvements to Invoke's memory management, new Blur and Noise Canvas filters, and expanded batch capabilities in Workflows.
Changes since v5.6.0rc3
- Fixed issue preventing you from typing in textarea fields in the workflow editor.
Changes since v5.6.0rc2
- Reduce peak memory during FLUX model load.
- Add `keep_ram_copy_of_weights` config option to reduce average RAM usage.
- Revise the default logic for the model cache RAM limit to be more conservative.
- Support float, integer and string batch data types.
- Add batch data generators.
- Support grouped (aka zipped) batches.
- Fix image quality degradation when inpainting an image repeatedly.
- Fix issue with transparent Canvas filter previews blending with the unfiltered parent layer.
- Add Noise and Blur filters to Canvas. Adding noise or blurring before generation can add a lot of detail, especially when generating from a rough sketch. Thanks @dunkeroni!
Memory Management Improvements (aka Low-VRAM mode)
The goal of these changes is to allow users with low-VRAM GPUs to run even the beefiest models, like the 24GB unquantised FLUX dev model.
Despite the focus on low-VRAM GPUs and the colloquial name "Low-VRAM mode", most users benefit from these improvements to Invoke's memory management.
Low-VRAM mode works on systems with dedicated GPUs (Nvidia GPUs on Windows/Linux and AMD GPUs on Linux). It allows you to generate even if your GPU doesn't have enough VRAM to hold full models.
Low-VRAM mode involves 3 features, each of which can be configured or fine-tuned:
- Partial model loading
- Dynamic RAM and VRAM cache sizes
- Working memory
- Keeping a copy of models in RAM
Most users should only need to enable partial loading by adding this line to your `invokeai.yaml` file:
enable_partial_loading: true
🚨 Windows users should also disable the Nvidia sysmem fallback.
For more details and instructions for fine-tuning, see the Low-VRAM mode docs.
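Putting the settings mentioned in this release together, an `invokeai.yaml` might look like the sketch below. The values are illustrative placeholders, not recommendations - consult the Low-VRAM mode docs before tuning:

```yaml
# invokeai.yaml
enable_partial_loading: true

# Optional fine-tuning (see the Low-VRAM mode docs for details):
# keep_ram_copy_of_weights: false  # lowers average RAM usage; may slow model reloads
# max_cache_ram_gb: 16             # replaces the deprecated `ram` setting
# max_cache_vram_gb: 8             # replaces the deprecated `vram` setting
```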
Thanks to @RyanJDick for designing and implementing these improvements!
Workflow Batches
We've expanded the capabilities for Batches in Workflows:
- Float, integer and string batch data types
- Batch collection generators
- Grouped (aka zipped) batches
Float, integer and string batch data types
There's a new batch node for each of the new data types. They work the same as the existing image batch node.
You can add a list of values directly in the node, but you'll probably find generators to be a nicer way to set up your batch.
Batch collection generators
These are essentially nodes that run in the frontend and generate a list of values to use in a batch node. Included in this release are these generators for floats and integers:
- Arithmetic Sequence: Generate a sequence of `count` numbers, starting from `start`, that increase or decrease by `step`.
- Linear Distribution: Generate a distribution of `count` numbers, starting with `start` and ending with `end`.
- Uniform Random Distribution: Generate a random distribution of `count` numbers from `min` to `max`. The values are generated randomly when you click Invoke.
- Parse String: Split the `input` on the specified character, parsing each value as a number. Non-numbers are ignored.
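The four generators above are simple enough to sketch in Python. These functions are illustrative stand-ins for what the frontend computes, not Invoke's implementation:

```python
import random


def arithmetic_sequence(start: float, step: float, count: int) -> list[float]:
    # `count` numbers starting at `start`, each `step` apart.
    return [start + i * step for i in range(count)]


def linear_distribution(start: float, end: float, count: int) -> list[float]:
    # `count` evenly spaced numbers from `start` to `end`, inclusive.
    if count == 1:
        return [start]
    step = (end - start) / (count - 1)
    return [start + i * step for i in range(count)]


def uniform_random(min_: float, max_: float, count: int) -> list[float]:
    # `count` values drawn uniformly from [min_, max_]; re-rolled each Invoke.
    return [random.uniform(min_, max_) for _ in range(count)]


def parse_string(input_: str, sep: str = ",") -> list[float]:
    # Split on `sep`, keeping only the values that parse as numbers.
    values = []
    for part in input_.split(sep):
        try:
            values.append(float(part.strip()))
        except ValueError:
            pass  # non-numbers are ignored
    return values
```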
Screen.Recording.2025-01-17.at.12.26.52.pm.mov
You'll notice the different handle icon for batch generators. These nodes can only connect to batch nodes; they cannot connect to regular nodes, which run in the backend.
In the future, we can explore more batch generators. Some ideas:
- Parse File (string, float, integer): Select a file and parse it, splitting on the specified character.
- Board (image): Output all images on a board.
Grouped (aka zipped) batches
When you use multiple batches, we run the graph once for every possible combination of values. In math-y speak, we "take the Cartesian product" of all batch collections.
Consider this simple workflow that joins two strings:
We have two batch collections, each with two strings. This results in `2 * 2 = 4` runs, one for each possible combination of the strings. We get these outputs:
- "a cute cat"
- "a cute dog"
- "a ferocious cat"
- "a ferocious dog"
But what if we wanted to group or "zip" up the two string collections into a single collection, executing the graph once for each pair of strings? This is now possible - we can set both nodes to the same batch group:
This results in 2 runs, one for each "pair" of strings. We get these outputs:
- "a cute cat"
- "a ferocious dog"
It's a bit technical, but if you try it a few times you'll quickly gain an intuition for how things combine. You can use grouped and ungrouped batches arbitrarily - go wild! The Invoke button tooltip lets you know how many executions you'll end up with for the given batch nodes.
Keep in mind that grouped batch collections must be the same size, otherwise we cannot zip them up into one collection. The Invoke button will grey out and let you know there is a mismatch.
Details and technical explanation
On the backend, we first zip all grouped batch collections into a single collection. Ungrouped batch collections remain as-is.
Then, we take the product of all batch collections. If there is only a single collection (i.e. a single ungrouped batch node, or multiple batch nodes all with the same group), we still do the product operation, but the result is the same as if we had skipped it.
There are 5 slots for groups, plus a 6th ungrouped option:
- None: Batch nodes will always be used as separate collections for the Cartesian product operation.
- Groups 1 - 5: Batch nodes within a given group will first be zipped into a single collection, before the Cartesian product operation.
All Changes
The launcher itself has been updated to fix a handful of issues, including one that required a fresh install every time you started the launcher and another that caused systems with AMD GPUs to run on CPU. The latest launcher version is v1.2.1.
Fixes
- Fix issue where excessively long board names could cause performance issues.
- Fix error when using DPM++ schedulers with certain models. Thanks @Vargol!
- Fix (maybe, hopefully) the app scrolling off screen when run via launcher.
- Fix link to `Scale` setting's support docs.
- Fix image quality degradation when inpainting an image repeatedly.
- Fix issue with transparent Canvas filter previews blending with the unfiltered parent layer.
Enhancements
- Support float, integer and string batch data types.
- Add batch data generators.
- Support grouped (aka zipped) batches.
- Reduce peak memory during FLUX model load.
- Add Noise and Blur filters to Canvas. Adding noise or blurring before generation can add a lot of detail, especially when generating from a rough sketch. Thanks @dunkeroni!
- Reworked error handling when installing models from a URL.
- Updated first run screen and OOM error toast with links to Low-VRAM mode docs.
Internal
- Tidied some unused variables. Thanks @rikublock!
- Added typegen check to CI pipeline. Thanks @rikublock!
Docs
- Added stereogram nodes to Community Nodes docs. Thanks @simonfuhrmann!
- Updated installation-related docs (quick start, manual install, dev install).
- Add Low-VRAM mode docs.
Installing and Updating
The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Follow the Quick Start guide to get started with the launcher.
If you already have the launcher, you can use it to update your existing install.
We've just updated the launcher to v1.2.1 with a handful of fixes. To update the launcher itself, download the latest version from the quick start guide - the download links are kept up to date.
Legacy Scripts (not recommended!)
We recommend using the launcher, as described in the previous section!
To install or update with the outdated legacy scripts 😱, download the latest legacy scripts and follow the legacy scripts instructions.
What's Changed
- Update Readme with new Installer Instructions by @hipsterusername in #7455
- docs: fix installation docs home by @psychedelicious in #7470
- docs: fix installation docs home again by @psychedelicious in #7471
- feat(ci): add typegen check workflow by @rikublock in #7463
- docs: update download links for launcher by @psychedelicious in #7489
- Add Stereogram Nodes to communityNodes.md by @simonfuhrmann in #7493
- Partial Loading PR1: Tidy ModelCache by @RyanJDick in #7492
- Partial Loading PR2: Add utils to support partial loading of models from CPU to GPU by @RyanJDick in https://github.com/inv...
v5.6.0rc3
This release brings major improvements to Invoke's memory management, new Blur and Noise Canvas filters, and expanded batch capabilities in Workflows.
Changes since previous release candidate (v5.6.0rc2)
- Reduce peak memory during FLUX model load.
- Add `keep_ram_copy_of_weights` config option to reduce average RAM usage.
- Revise the default logic for the model cache RAM limit to be more conservative.
- Support float, integer and string batch data types.
- Add batch data generators.
- Support grouped (aka zipped) batches.
- Fix image quality degradation when inpainting an image repeatedly.
- Fix issue with transparent Canvas filter previews blending with the unfiltered parent layer.
- Add Noise and Blur filters to Canvas. Adding noise or blurring before generation can add a lot of detail, especially when generating from a rough sketch. Thanks @dunkeroni!
Memory Management Improvements (aka Low-VRAM mode)
The goal of these changes is to allow users with low-VRAM GPUs to run even the beefiest models, like the 24GB unquantised FLUX dev model.
Despite the focus on low-VRAM GPUs and the colloquial name "Low-VRAM mode", most users benefit from these improvements to Invoke's memory management.
Low-VRAM mode works on systems with dedicated GPUs (Nvidia GPUs on Windows/Linux and AMD GPUs on Linux). It allows you to generate even if your GPU doesn't have enough VRAM to hold full models.
Low-VRAM mode involves 3 features, each of which can be configured or fine-tuned:
- Partial model loading
- Dynamic RAM and VRAM cache sizes
- Working memory
- Keeping a copy of models in RAM
Most users should only need to enable partial loading by adding this line to your `invokeai.yaml` file:
enable_partial_loading: true
🚨 Windows users should also disable the Nvidia sysmem fallback.
For more details and instructions for fine-tuning, see the Low-VRAM mode docs.
Thanks to @RyanJDick for designing and implementing these improvements!
Workflow Batches
We've expanded the capabilities for Batches in Workflows:
- Float, integer and string batch data types
- Batch collection generators
- Grouped (aka zipped) batches
Float, integer and string batch data types
There's a new batch node for each of the new data types. They work the same as the existing image batch node.
You can add a list of values directly in the node, but you'll probably find generators to be a nicer way to set up your batch.
Batch collection generators
These are essentially nodes that run in the frontend and generate a list of values to use in a batch node. Included in this release are these generators for floats and integers:
- Arithmetic Sequence: Generate a sequence of `count` numbers, starting from `start`, that increase or decrease by `step`.
- Linear Distribution: Generate a distribution of `count` numbers, starting with `start` and ending with `end`.
- Uniform Random Distribution: Generate a random distribution of `count` numbers from `min` to `max`. The values are generated randomly when you click Invoke.
- Parse String: Split the `input` on the specified character, parsing each value as a number. Non-numbers are ignored.
Screen.Recording.2025-01-17.at.12.26.52.pm.mov
You'll notice the different handle icon for batch generators. These nodes can only connect to batch nodes; they cannot connect to regular nodes, which run in the backend.
In the future, we can explore more batch generators. Some ideas:
- Parse File (string, float, integer): Select a file and parse it, splitting on the specified character.
- Board (image): Output all images on a board.
Grouped (aka zipped) batches
When you use multiple batches, we run the graph once for every possible combination of values. In math-y speak, we "take the Cartesian product" of all batch collections.
Consider this simple workflow that joins two strings:
We have two batch collections, each with two strings. This results in `2 * 2 = 4` runs, one for each possible combination of the strings. We get these outputs:
- "a cute cat"
- "a cute dog"
- "a ferocious cat"
- "a ferocious dog"
But what if we wanted to group or "zip" up the two string collections into a single collection, executing the graph once for each pair of strings? This is now possible - we can set both nodes to the same batch group:
This results in 2 runs, one for each "pair" of strings. We get these outputs:
- "a cute cat"
- "a ferocious dog"
It's a bit technical, but if you try it a few times you'll quickly gain an intuition for how things combine. You can use grouped and ungrouped batches arbitrarily - go wild! The Invoke button tooltip lets you know how many executions you'll end up with for the given batch nodes.
Keep in mind that grouped batch collections must be the same size, otherwise we cannot zip them up into one collection. The Invoke button will grey out and let you know there is a mismatch.
Details and technical explanation
On the backend, we first zip all grouped batch collections into a single collection. Ungrouped batch collections remain as-is.
Then, we take the product of all batch collections. If there is only a single collection (i.e. a single ungrouped batch node, or multiple batch nodes all with the same group), we still do the product operation, but the result is the same as if we had skipped it.
There are 5 slots for groups, plus a 6th ungrouped option:
- None: Batch nodes will always be used as separate collections for the Cartesian product operation.
- Groups 1 - 5: Batch nodes within a given group will first be zipped into a single collection, before the Cartesian product operation.
All Changes
The launcher itself has been updated to fix a handful of issues, including one that required a fresh install every time you started the launcher and another that caused systems with AMD GPUs to run on CPU. The latest launcher version is v1.2.1.
Fixes
- Fix issue where excessively long board names could cause performance issues.
- Fix error when using DPM++ schedulers with certain models. Thanks @Vargol!
- Fix (maybe, hopefully) the app scrolling off screen when run via launcher.
- Fix link to `Scale` setting's support docs.
- Fix image quality degradation when inpainting an image repeatedly.
- Fix issue with transparent Canvas filter previews blending with the unfiltered parent layer.
Enhancements
- Support float, integer and string batch data types.
- Add batch data generators.
- Support grouped (aka zipped) batches.
- Reduce peak memory during FLUX model load.
- Add Noise and Blur filters to Canvas. Adding noise or blurring before generation can add a lot of detail, especially when generating from a rough sketch. Thanks @dunkeroni!
- Reworked error handling when installing models from a URL.
- Updated first run screen and OOM error toast with links to Low-VRAM mode docs.
Internal
- Tidied some unused variables. Thanks @rikublock!
- Added typegen check to CI pipeline. Thanks @rikublock!
Docs
- Added stereogram nodes to Community Nodes docs. Thanks @simonfuhrmann!
- Updated installation-related docs (quick start, manual install, dev install).
- Add Low-VRAM mode docs.
Installing and Updating
The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Follow the Quick Start guide to get started with the launcher.
If you already have the launcher, you can use it to update your existing install.
We've just updated the launcher to v1.2.1 with a handful of fixes. To update the launcher itself, download the latest version from the quick start guide - the download links are kept up to date.
Legacy Scripts (not recommended!)
We recommend using the launcher, as described in the previous section!
To install or update with the outdated legacy scripts 😱, download the latest legacy scripts and follow the legacy scripts instructions.
What's Changed
- Update Readme with new Installer Instructions by @hipsterusername in #7455
- docs: fix installation docs home by @psychedelicious in #7470
- docs: fix installation docs home again by @psychedelicious in #7471
- feat(ci): add typegen check workflow by @rikublock in #7463
- docs: update download links for launcher by @psychedelicious in #7489
- Add Stereogram Nodes to communityNodes.md by @simonfuhrmann in #7493
- Partial Loading PR1: Tidy ModelCache by @RyanJDick in #7492
- Partial Loading PR2: Add utils to support partial loading of models from CPU to GPU by @RyanJDick in #7494
- Partial Loading PR3: Integrate 1) partial loading, 2) quant...
v5.6.0rc2
This release brings major improvements to Invoke's memory management, plus a few minor fixes.
Memory Management Improvements (aka Low-VRAM mode)
The goal of these changes is to allow users with low-VRAM GPUs to run even the beefiest models, like the 24GB unquantised FLUX dev model.
Despite the focus on low-VRAM GPUs and the colloquial name "Low-VRAM mode", most users benefit from these improvements to Invoke's memory management.
Low-VRAM mode works on systems with dedicated GPUs (Nvidia GPUs on Windows/Linux and AMD GPUs on Linux). It allows you to generate even if your GPU doesn't have enough VRAM to hold full models.
Low-VRAM mode involves 3 features, each of which can be configured or fine-tuned:
- Partial model loading
- Dynamic RAM and VRAM cache sizes
- Working memory
Most users should only need to enable partial loading by adding this line to your `invokeai.yaml` file:
enable_partial_loading: true
🚨 Windows users should also disable the Nvidia sysmem fallback.
For more details and instructions for fine-tuning, see the Low-VRAM mode docs.
Thanks to @RyanJDick for designing and implementing these improvements!
Changes since previous release candidate (v5.6.0rc1)
- Fix some model loading errors that occurred in edge cases.
- Fix error when using DPM++ schedulers with certain models. Thanks @Vargol!
- Deprecate the `ram` and `vram` settings in favor of new `max_cache_ram_gb` and `max_cache_vram_gb` settings. This eases the upgrade path for users who had manually configured `ram` and `vram` in the past.
- Fix (maybe, hopefully) the app scrolling off screen when run via launcher.
The launcher itself has also been updated to fix a handful of issues, including one that required a fresh install every time you started the launcher and another that caused systems with AMD GPUs to run on CPU.
Other Changes
- Fixed issue where excessively long board names could cause performance issues.
- Reworked error handling when installing models from a URL.
- Fix error when using DPM++ schedulers with certain models. Thanks @Vargol!
- Fix (maybe, hopefully) the app scrolling off screen when run via launcher.
- Updated first run screen and OOM error toast with links to Low-VRAM mode docs.
- Fixed link to `Scale` setting's support docs.
- Tidied some unused variables. Thanks @rikublock!
- Added typegen check to CI pipeline. Thanks @rikublock!
- Added stereogram nodes to Community Nodes docs. Thanks @simonfuhrmann!
- Updated installation-related docs (quick start, manual install, dev install).
- Add Low-VRAM mode docs.
Installing and Updating
The new Invoke Launcher is the recommended way to install, update and run Invoke. It takes care of a lot of details for you - like installing the right version of python - and runs Invoke as a desktop application.
Follow the Quick Start guide to get started with the launcher.
If you already have the launcher, you can use it to update your existing install.
We've just updated the launcher to v1.2.1 with a handful of fixes. To update the launcher itself, download the latest version from the quick start guide - the download links are kept up to date.
Legacy Scripts (not recommended!)
We recommend using the launcher, as described in the previous section!
To install or update with the outdated legacy scripts 😱, download the latest legacy scripts and follow the legacy scripts instructions.
What's Changed
- Update Readme with new Installer Instructions by @hipsterusername in #7455
- docs: fix installation docs home by @psychedelicious in #7470
- docs: fix installation docs home again by @psychedelicious in #7471
- feat(ci): add typegen check workflow by @rikublock in #7463
- docs: update download links for launcher by @psychedelicious in #7489
- Add Stereogram Nodes to communityNodes.md by @simonfuhrmann in #7493
- Partial Loading PR1: Tidy ModelCache by @RyanJDick in #7492
- Partial Loading PR2: Add utils to support partial loading of models from CPU to GPU by @RyanJDick in #7494
- Partial Loading PR3: Integrate 1) partial loading, 2) quantized models, 3) model patching by @RyanJDick in #7500
- Correct Scale Informational Popover by @hipsterusername in #7499
- docs: install guides by @psychedelicious in #7508
- docs: no need to specify version for dev env setup by @psychedelicious in #7510
- feat(ui): reset canvas layers only resets the layers by @psychedelicious in #7511
- refactor(ui): mm model install error handling by @psychedelicious in #7512
- fix(api): limit board_name length to 300 characters by @maryhipp in #7515
- fix(app): remove obsolete DEFAULT_PRECISION variable by @rikublock in #7473
- Partial Loading PR 3.5: Fix pre-mature model drops from the RAM cache by @RyanJDick in #7522
- Partial Loading PR4: Enable partial loading (behind config flag) by @RyanJDick in #7505
- Partial Loading PR5: Dynamic cache ram/vram limits by @RyanJDick in #7509
- ui: translations update from weblate by @weblate in #7480
- chore: bump version to v5.6.0rc1 by @psychedelicious in #7521
- Bugfix: Offload of GGML-quantized model in `torch.inference_mode()` cm by @RyanJDick in #7525
- Deprecate `ram` / `vram` configs for smoother migration path to dynamic limits by @RyanJDick in #7526
- docs: fix pypi indices for manual install for AMD by @psychedelicious in #7528
- Bugfix: Do not rely on `model.device` if model could be partially loaded by @RyanJDick in #7529
- Fix for DEIS / DPM++ config clash by setting algorithm type - fixes #6368 by @Vargol in #7440
- Whats new 5.6 by @maryhipp in #7527
- fix(ui): prevent canvas & main panel content from scrolling by @psychedelicious in #7532
- docs,ui: low vram guide & first run blurb by @psychedelicious in #7533
- docs: fix incorrect macOS launcher fix command by @psychedelicious in #7536
- chore: bump version to v5.6.0rc2 by @psychedelicious in #7538
New Contributors
- @simonfuhrmann made their first contribution in #7493
Full Changelog: v5.5.0...v5.6.0rc2