Merge LoRA weights to LLM at initialization time on-device (Gemma) #5255
Labels
platform:android
Issues with Android as Platform
platform:ios
MediaPipe iOS issues
platform:javascript
MediaPipe JavaScript issues
stat:awaiting googler
Waiting for Google Engineer's Response
task:LLM inference
Issues related to MediaPipe LLM Inference Gen AI setup
type:feature
Enhancement in the New Functionality or Request for a New Solution
Have I written custom code (as opposed to using a stock example script provided in MediaPipe)?
No
OS Platform and Distribution
Web, Android, iOS
MediaPipe Tasks SDK version
No response
Task name (e.g. Image classification, Gesture recognition etc.)
GenAI
Programming Language and version (e.g. C++, Python, Java)
TypeScript, Java, Swift
Describe the actual behavior
Couldn't find a way to merge LoRA weights into Gemma.
Describe the expected behavior
I want to be able to add a LoRA adapter to Gemma locally (on-device)
Standalone code/steps you may have used to try to get what you need
I'm experimenting with MediaPipe on the Web in a setup that requires multiple LoRA files, each trained for a different task. I want to select one of those LoRA files and merge it into Gemma at initialization time, locally in the browser. Going through the code, I found a .proto file with lora_path and lora_rank fields, but I haven't seen any exposed parameter on the LlmInference class or its options that lets me specify a LoRA file. One option could (maybe) be to use LlmGPUCalculatorOptions.lora_path. However, the current API doesn't expose anything that makes this possible, and I don't even know whether it could work, i.e. whether that field is meant for this purpose. Will this option work? If so, I can open a PR for it. If not, how can I achieve this?
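For concreteness, here is a minimal TypeScript sketch of the API shape I have in mind on Web. The FilesetResolver and LlmInference.createFromOptions calls are the existing tasks-genai API; the loraPath option, the helper function, and the asset paths are hypothetical and only illustrate the requested behavior:

```ts
import {FilesetResolver, LlmInference} from '@mediapipe/tasks-genai';

// Sketch only: `loraPath` is a hypothetical option, not part of the current API.
async function createGemmaWithLora(loraFile: string): Promise<LlmInference> {
  const genai = await FilesetResolver.forGenAiTasks(
      'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai/wasm');
  return LlmInference.createFromOptions(genai, {
    baseOptions: {modelAssetPath: '/assets/gemma-2b-it-gpu-int4.bin'},
    maxTokens: 512,
    // Hypothetical field: would map to the LoRA path consumed on the GPU side
    // so the adapter is merged into Gemma before the first inference.
    loraPath: loraFile,  // e.g. '/assets/lora-summarization.bin'
  } as any);  // cast only because `loraPath` is not in today's typings
}
```

Each task would then simply pick its own adapter at initialization, e.g. `const llm = await createGemmaWithLora('/assets/lora-summarization.bin');`.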
Other info / Complete Logs
No response