System-wide custom model settings (lightweight GPTs) #1291
Replies: 9 comments 7 replies
-
@davidmarble Thanks so much for checking out the project; I'm glad it can serve your family as it does mine. Also, great write-up: it's clear what you're asking for, and you provide options. I really appreciate how comprehensive this is. To me, this makes the most sense as a replacement of all the header options with one dropdown. I like the config idea, and we could start off with a new YAML file. This makes it as simple as ChatGPT, where there is only one LLM selector. Perhaps an advanced mode could be made accessible through settings, if allowed by the admin, when the custom config is used.
-
Configuring description, prefix, temperature, max tokens, and other parameters (even the URL) individually for each model is indeed a very necessary requirement. I currently hardcode these.
-
It would be nice to change models and model versions without having to re-enter prompts (especially) and other settings. I know this is complicated a bit by different models having different settings, but there is a lot of commonality. On the other hand, you want it to be dirt simple for the most common case, where the user isn't going to want to mess with settings. And you want it to work with prompt libraries and GPT-like functionality, again without making it complicated. Perhaps the way OpenAI presents GPTs could be a good way to handle prompts too: you see the GPTs/personalities in the top left and have a way to explore and add more. If you never use them, you never see them.
-
I’m glad this request was already filed. I would be okay even with just a name to start.
-
Is it already possible to add a predefined custom system message for a model?
-
Hi, I'd like to follow up on this: when is the planned release? There is a limitation when using OpenAI with plugins where we can't set a system message; I hope this feature would at least allow us to write a predefined custom message in the config file instead.
-
Also very interested in this development. I'd love to see the option to add a custom system message, plus the ability to lock it down for other users, e.g., restrict users to using ChatGPT with a custom prompt.
-
Is there any update here?
-
Was officially implemented in #2578
-
@danny-avila This project is awesome. I compared a handful of options before settling on deploying this on a VPS as a way to help family members explore GPTs under a single OpenAI API key, with minimal setup for them.
User Story
As an administrator, I'd like more options to customize the set of models offered to all users on my LibreChat instance.
Proposed Features
Mockup
Based on Example 2 below
Example YAML configs
I haven't considered how this could best fit within the current approach, where dotenv extends/overwrites the Docker YAML config, or whether it merits a rethink of how configuration is done (e.g., migrate env to YAML, put model config in a separate new YAML file, or some other approach).
Config docs draft
Instructions: Add models and add/edit options as needed. For each endpoint, e.g. OpenAI, `defaults` lists the default settings used by LibreChat. You can set defaults for all models of that endpoint type with the `defaults` key, or override values within individual model definitions. Set `default_model: true` on one model to make it the default for all users. If `default_model` is not set, the default will be ____ (not sure what the logic is). Consult the endpoint provider's API docs for documentation about options.
Example 1 (minimal)
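A minimal sketch of what such a config might look like. All keys here (`endpoints`, `models`, `default_model`) are hypothetical, since this file format is only a proposal:

```yaml
# Hypothetical models.yaml — minimal form: just list which models to expose.
endpoints:
  openAI:
    models:
      - name: gpt-3.5-turbo
        default_model: true   # the model users start with
      - name: gpt-4
```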
Example 2 (custom options)
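A sketch of the custom-options case, again with hypothetical keys. Note the multiline YAML block scalar (`|`) used for the system prompt:

```yaml
# Hypothetical models.yaml — two entries backed by the same underlying model,
# each with its own label, description, and generation settings.
endpoints:
  openAI:
    models:
      - name: gpt-4
        label: "GPT-4 (precise)"
        description: "Low-temperature GPT-4 for factual answers."
        temperature: 0.2
        max_tokens: 1024
      - name: gpt-4
        label: "Homework Helper"
        temperature: 0.7
        prompt_prefix: |
          You are a patient tutor. Explain each step of your reasoning
          and ask the student a follow-up question at the end.
```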
Example 3 (modified defaults)
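The modified-defaults case might look like this, using the hypothetical `defaults` key from the draft docs above:

```yaml
# Hypothetical models.yaml — endpoint-wide defaults, overridable per model.
endpoints:
  openAI:
    defaults:
      temperature: 0.5
      max_tokens: 2048
    models:
      - name: gpt-3.5-turbo     # inherits the defaults above
      - name: gpt-4
        temperature: 0.3        # overrides the endpoint default
```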
Example 4 (modified defaults + custom options)
Note the use of a multiline YAML string for instructions. It's probably a good idea to include an example prompt_prefix like this in the config file or docs.
Additional Notes
Think of this as somewhere between a nice config enhancement that's only possible in a self-hosted ChatGPT-like experience (multiple instances of the same model, with customized defaults, available to all users of the LibreChat instance) and support for lightweight custom-GPT characteristics like a highly customized system prompt. I could even see this laying a foundation for later supporting more off-the-shelf, GPT-like enhancements.
While the config in this proposal is presented as YAML, I could see this kind of config being part of a future web console for admins, perhaps stored in MongoDB. That probably wouldn't replace the need for an initial, easily modifiable config file though.