How to set a system prompt for RAG inference with Gemma 2B on iOS? #5277
Labels
platform:ios
MediaPipe iOS issues
stat:awaiting googler
Waiting for Google Engineer's Response
task:LLM inference
Issues related to MediaPipe LLM Inference Gen AI setup
type:feature
Enhancement in the New Functionality or Request for a New Solution
Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
None
OS Platform and Distribution
iOS
MediaPipe Tasks SDK version
No response
Task name (e.g. Image classification, Gesture recognition etc.)
LLM inference
Programming Language and version (e.g. C++, Python, Java)
Swift (SwiftUI)
Describe the actual behavior
Currently we can load Gemma 2B on iOS and chat with it in general. But if we want to set a system prompt, such as "You will act as this agent or bot, and your name is ...", there is no way for the user to set it: the API only exposes a function that generates a response from the user's query.
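As a workaround until a dedicated system-prompt API exists, the system prompt (and any retrieved RAG context) can be folded into the single string passed to the inference call. This is a minimal sketch: it assumes the MediaPipe iOS GenAI API (`LlmInference`, `LlmInference.Options(modelPath:)`, `generateResponse(inputText:)`), and the prompt template used here is an illustrative assumption, not an official Gemma format.

```swift
import MediaPipeTasksGenAI

// Sketch of a wrapper that prepends a system prompt to every query.
// The prompt layout ("Context:", "User:", "Assistant:") is an assumption;
// adjust it to whatever template your model was tuned with.
final class PromptedChat {
    private let llm: LlmInference
    private let systemPrompt: String

    init(modelPath: String, systemPrompt: String) throws {
        let options = LlmInference.Options(modelPath: modelPath)
        self.llm = try LlmInference(options: options)
        self.systemPrompt = systemPrompt
    }

    // `context` would hold the documents retrieved by the RAG step, if any.
    func respond(to userQuery: String, context: String? = nil) throws -> String {
        var prompt = systemPrompt + "\n\n"
        if let context = context {
            prompt += "Context:\n\(context)\n\n"
        }
        prompt += "User: \(userQuery)\nAssistant:"
        return try llm.generateResponse(inputText: prompt)
    }
}
```

Because the model is stateless between calls in this API, the same prefix has to be resent with every query, which costs prompt tokens on each turn.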
Describe the expected behaviour
There should be a way to set a system prompt (e.g. "You will act as this agent or bot, and your name is ...") so that it applies to every generated response, in addition to the existing function that takes the user's query.
Standalone code/steps you may have used to try to get what you need
.
Other info / Complete Logs
No response