Is your feature request related to a problem? Please describe.
I really support the idea of a language translation layer in #4 and would like to build on it with a suggestion of my own.
I'm always frustrated when a model trained on, e.g., a medical dataset struggles with languages other than English, and sometimes even with English.
Describe the solution you'd like
Currently, it's possible to include a second model in a chat so that both models respond to the input. However, I think it would be beneficial to use a second model not only in parallel but also sequentially, with its own 'system prompt': the second model would process the output of the first model together with a separate system prompt. Such a feature could also help with translation issues, since a second model could be added for the desired target language along with a prompt like 'Translate the following into Spanish:' or 'Refine the following text:'.
This would also allow incorporating specialized models for particular tasks to boost the precision and relevance of responses. For example, a model dedicated to fact-checking or to identifying objects in images could be added, and its output processed further by another model.
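The sequential behavior described above can be sketched in a few lines of Python. This is only an illustration of the chaining logic, not open-webui's actual API: the `chain` function and the stub models here are hypothetical stand-ins for real LLM calls.

```python
# Minimal sketch of sequential model chaining, assuming each "model" is a
# callable taking (system_prompt, user_message) and returning a reply string.
# These names and signatures are hypothetical, not part of open-webui.

def chain(models_with_prompts, user_message):
    """Run models in sequence: each model receives the previous model's
    output as its user message, together with its own system prompt."""
    text = user_message
    for model, system_prompt in models_with_prompts:
        text = model(system_prompt, text)
    return text

# Stub models standing in for real LLM backends:
def medical_model(system_prompt, message):
    return f"[diagnosis for: {message}]"

def translator(system_prompt, message):
    return f"[{system_prompt} -> {message}]"

pipeline = [
    (medical_model, "You are a medical assistant."),
    (translator, "Translate the following into Spanish:"),
]
result = chain(pipeline, "patient symptoms")
```

In a real implementation, each callable would wrap a request to a model backend, but the pipeline structure (output of one model plus a per-step system prompt feeding the next) stays the same.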
Describe alternatives you've considered
Writing a Python script that implements the solution myself. However, this would be less convenient and would limit use cases, such as using the feature from my phone or giving my roommates access to it. Therefore, it would be amazing to have this functionality available from the web UI.
Additional context
Add any other context or screenshots about the feature request here.
tjbck transferred this issue from open-webui/open-webui on May 24, 2024.