
Conversation

@lstein commented Jul 7, 2023

This is a WIP branch for v3.0.0 betas. Small bugs will be fixed in this branch and merged back into main at the time of the 3.0.0 final release.

@lstein marked this pull request as draft July 7, 2023 21:50
Lincoln Stein and others added 11 commits July 9, 2023 18:35
Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
- gpu_mem_reserved now indicates the amount of VRAM that will be reserved
  for model caching (similar to max_cache_size).
- To be consistent with max_cache_size, the amount of memory to hold in
  VRAM for model caching is now controlled by the max_vram_cache_size
  configuration parameter.
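The commit message above describes a configuration change; as a rough illustration, the two cache limits might appear together in an InvokeAI YAML config file like this (the section name, file name, and values here are assumptions for illustration, not taken from this PR):

```yaml
# invokeai.yaml (sketch; keys as described in the commit message above)
InvokeAI:
  Memory/Performance:
    # Maximum amount of RAM (GB) to devote to the model cache.
    max_cache_size: 6.0
    # Maximum amount of VRAM (GB) to hold in the GPU model cache,
    # replacing the role previously played by gpu_mem_reserved.
    max_vram_cache_size: 2.75
```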
@lstein marked this pull request as ready for review July 11, 2023 20:01
@lstein merged commit e0a7ec6 into main Jul 11, 2023
@lstein deleted the release/invokeai-3-0-beta branch July 11, 2023 20:22