Second Lora #2076
Replies: 1 comment · 1 reply
-
No. I chickened out of implementing that on the last round of changes I did for LoRAs in SHARK 1.0 (adding the weight, and fixing them being applied very incorrectly), even though it is an obvious feature Shark should have.

In SHARK 1.0 any LoRA has to be baked into the .vmfb, since it affects the weights, and a new .vmfb has to be built any time the weights, or a few other key parameters such as image size, are changed. As you have probably noticed, building .vmfbs is memory-intensive, disk-hungry, and slow. That is painful enough with just one LoRA and its weight; with multiple it would be worse.

Not only that, but the .vmfb names are built out of those key parameters, including the LoRA name, and if the names get long enough, Windows hits its default path-length limit of 260 characters. I would have had to fundamentally change the whole naming system for .vmfbs, and that isn't well or obviously factored in the current code. Since the SHARK/Studio 2 work had already started at that point, and the intention there is to have a much more on-the-fly method of loading checkpoint weights, LoRAs, and other embeddings (thanks, Turbine!) without having to bake them into a .vmfb, I decided not to try and hack something together in SHARK 1.0. Sorry. 😬
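For anyone curious what "baking in" multiple weighted LoRAs would actually amount to, here is a minimal sketch (not SHARK's actual code, and ignoring the usual alpha/rank scale factor that real LoRA files carry): each LoRA contributes a low-rank update that is scaled by its own weight and merged directly into the base checkpoint tensor before compilation. The function name and the `loras` structure below are hypothetical.

```python
import torch


def merge_loras_into_weight(
    base_weight: torch.Tensor,
    loras: list[tuple[torch.Tensor, torch.Tensor, float]],
) -> torch.Tensor:
    """Return base_weight with every (lora_down, lora_up, weight) update applied.

    lora_down: (rank, in_features), lora_up: (out_features, rank),
    weight: the per-LoRA strength multiplier.
    """
    merged = base_weight.clone()
    for lora_down, lora_up, weight in loras:
        # The low-rank product has the same shape as the base weight,
        # so it is simply added on top, scaled by that LoRA's weight.
        merged += weight * (lora_up @ lora_down)
    return merged


if __name__ == "__main__":
    base = torch.randn(768, 768)
    lora_a = (torch.randn(8, 768), torch.randn(768, 8), 0.7)
    lora_b = (torch.randn(8, 768), torch.randn(768, 8), 0.4)
    baked = merge_loras_into_weight(base, [lora_a, lora_b])
    print(baked.shape)  # torch.Size([768, 768])
```

The point of the comment above is that in SHARK 1.0 this merge happens before the model is compiled into a .vmfb, so changing either LoRA or either weight means recompiling, whereas the Studio 2 / Turbine approach is meant to apply such updates at load time instead.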
-
Maybe I'm a moron, but is there a way to use two LoRAs at once currently?