[Enhancement] distinguish v1 from v2 LoRA models #3175
Conversation
- Previously the user's preferred precision was used to select which version branch of a diffusers model would be downloaded. Half-precision would try to download the 'fp16' branch if it existed.
- Turns out that with waifu-diffusion this logic doesn't work, as 'fp16' gets you waifu-diffusion v1.3, while 'main' gets you waifu-diffusion v1.4. Who knew?
- This PR adds a new optional "revision" field to `models.yaml`. This can be used to override the diffusers branch version. In the case of Waifu diffusion, INITIAL_MODELS.yaml now specifies the "main" branch.
- This PR also quenches the NSFW nag that downloading diffusers sometimes triggers.
- Closes #3160
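A minimal sketch of what the optional `revision` field could look like in a `models.yaml` stanza. The entry name and surrounding fields here are illustrative assumptions, not copied from the actual INITIAL_MODELS.yaml:

```yaml
# Hypothetical models.yaml entry -- only the "revision" key is the point.
waifu-diffusion-1.4:
  format: diffusers
  repo_id: hakurei/waifu-diffusion
  # Pin the branch explicitly so the precision-based logic does not
  # fall back to 'fp16' (which, for this repo, is actually wd v1.3).
  revision: main
```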
- Attempting to run a prompt with a LoRA based on SD v1.X against a model based on v2.X will now throw an `IncompatibleModelException`. To import this exception: `from ldm.modules.lora_manager import IncompatibleModelException` (maybe this should be defined in ModelManager?)
- Enhance `LoraManager.list_loras()` to accept an optional integer argument, `token_vector_length`. This will filter the returned LoRA models to return only those that match the indicated length. Use:
  ```
  768  => for models based on SD v1.X
  1024 => for models based on SD v2.X
  ```
  Note that this filtering requires each LoRA file to be opened by `torch.safetensors`. It will take ~8s to scan a directory of 40 files.
- Added new static methods to `ldm.modules.kohya_lora_manager`:
  - `check_model_compatibility()`
  - `vector_length_from_checkpoint()`
  - `vector_length_from_checkpoint_file()`
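The version check described above boils down to reading the text-encoder hidden size out of the LoRA's tensor shapes (768 for SD v1.X, 1024 for SD v2.X). A dependency-free sketch, assuming the checkpoint is represented as a mapping of key names to shape tuples (the real methods in `kohya_lora_manager` read actual tensors and may use different key patterns):

```python
from typing import Optional


class IncompatibleModelException(Exception):
    """Raised when a LoRA's base SD version doesn't match the loaded model."""


def vector_length_from_checkpoint(state_dict: dict) -> Optional[int]:
    """Infer the token vector length (768 or 1024) from tensor shapes.

    Text-encoder down-projection weights are shaped (rank, hidden_size),
    so the second dimension reveals the base model's hidden size.
    """
    for key, shape in state_dict.items():
        if "lora_te" in key and key.endswith("lora_down.weight"):
            return shape[1]
    return None  # could not determine


def check_model_compatibility(state_dict: dict, model_vector_length: int) -> None:
    """Raise IncompatibleModelException on a v1/v2 mismatch."""
    length = vector_length_from_checkpoint(state_dict)
    if length is not None and length != model_vector_length:
        raise IncompatibleModelException(
            f"LoRA expects a {length}-dim text encoder, "
            f"but the loaded model uses {model_vector_length}"
        )
```

With `safetensors` you would obtain the shapes from the file's metadata (via `safe_open` and `get_slice(...).get_shape()`) without loading full tensors, which keeps the per-file cost low.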
Maybe create a file with info about the lora, like the sha file for checkpoint models?
H'mmm, makes sense. But we'd have to update the file every time someone added or removed a LoRA file.
No, I mean creating it automatically. I saw the logic as:
I've created a lora cache file in the top-level loras directory that holds information on the lora vector length. It speeds up scanning of the directory from 8s to 0.1s, and uses file locking to avoid one invokeai process overwriting the file while another invokeai process has it open. It does not address the possibility that the user will delete the original lora file and replace it with another one of the exact same name derived from a different-sized model. @blessedcoolant I also implemented a routine called
You could try the following: re-request the lora list whenever a new model is loaded. This should effectively fetch the lora models again on each model change, and if the backend sends back only compatible models, it should work as intended.
Worked like a charm on the first try. Thanks so much! I think this PR is ready for a review now.
@lstein The frontend is not built correctly. |
I've rebuilt the frontend and added the missing .js file. @blessedcoolant Do you mind giving this PR a review? It should be working now. |
Distinguish LoRA/LyCORIS files based on what version of SD they were built on top of
You can now create subdirectories within the `loras` directory and organize the model files.