
How to set a system prompt for a RAG implementation with LLM Inference for Gemma 2B on iOS? #5277

Open
omkar806 opened this issue Mar 29, 2024 · 3 comments
Assignees
Labels
platform:ios (MediaPipe iOS issues) · stat:awaiting googler (Waiting for Google engineer's response) · task:LLM inference (Issues related to MediaPipe LLM Inference Gen AI setup) · type:feature (Enhancement in the new functionality or request for a new solution)

Comments

@omkar806

Have I written custom code (as opposed to using a stock example script provided in MediaPipe)

None

OS Platform and Distribution

iOS

MediaPipe Tasks SDK version

No response

Task name (e.g. Image classification, Gesture recognition etc.)

LLM inference

Programming Language and version (e.g. C++, Python, Java)

SwiftUI

Describe the actual behavior

Currently we can load Gemma 2B on iOS and chat with it in general. But if we want to set a system prompt, such as "You will act as this agent or a bot and your name is this," how can the user set this? There is only a function that generates a response from the user's query.

Describe the expected behaviour

There should be a way to set a system prompt, such as "You will act as this agent or a bot and your name is this," in addition to the function that generates a response from the user's query.

Standalone code/steps you may have used to try to get what you need

.

Other info / Complete Logs

No response

@kuaashish kuaashish assigned kuaashish and unassigned ayushgdev Apr 1, 2024
@kuaashish kuaashish added task:LLM inference Issues related to MediaPipe LLM Inference Gen AI setup platform:ios MediaPipe IOS issues type:support General questions labels Apr 1, 2024
@kuaashish
Contributor

Hi @omkar806,

At present, this feature is unavailable. If you would like it added, please submit a feature request outlining the potential benefits for the community. We will then review it and forward it to the appropriate team; depending on the ensuing discussion and demand, we can consider implementing it in the near future.

Thank you!!

@kuaashish kuaashish added the stat:awaiting response Waiting for user response label Apr 2, 2024
@omkar806
Author

omkar806 commented Apr 5, 2024

Okay, I will add a feature request for this.

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Waiting for user response label Apr 5, 2024
@schmidt-sebastian
Copy link
Collaborator

We are working on making it easier to build more advanced use cases on top of our LLM Inference. That said, you can already tell the model to act like an agent by simply including the instruction in the prompt you send. We will have more examples of this in the coming weeks.
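A minimal sketch of that workaround, assuming the MediaPipe Tasks GenAI Swift API (`LlmInference.Options(modelPath:)` and `generateResponse(inputText:)`; exact names may vary between SDK versions, and the `SystemPromptedChat` wrapper and its prompt template are illustrative, not part of the SDK):

```swift
import MediaPipeTasksGenAI

// Hypothetical wrapper that emulates a system prompt by prepending the
// instruction to every user query before calling the LLM Inference task.
final class SystemPromptedChat {
    private let llmInference: LlmInference
    private let systemPrompt: String

    init(modelPath: String, systemPrompt: String) throws {
        let options = LlmInference.Options(modelPath: modelPath)
        self.llmInference = try LlmInference(options: options)
        self.systemPrompt = systemPrompt
    }

    func respond(to userQuery: String) throws -> String {
        // Gemma exposes no dedicated system role here, so fold the
        // instruction into the single prompt string.
        let prompt = """
        \(systemPrompt)

        User: \(userQuery)
        Assistant:
        """
        return try llmInference.generateResponse(inputText: prompt)
    }
}
```

For a RAG setup, retrieved document snippets can be interpolated into the same prompt string (for example, between the system instruction and the user query), since everything ultimately flows through the one `generateResponse` input.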

@schmidt-sebastian schmidt-sebastian added type:feature Enhancement in the New Functionality or Request for a New Solution and removed type:support General questions labels Apr 19, 2024
@kuaashish kuaashish added the stat:awaiting googler Waiting for Google Engineer's Response label Apr 22, 2024

4 participants