
Allow choosing a default text and/or vision model, or remember state of global and Vision dialogs #90

Closed
devinprater opened this issue Mar 21, 2024 · 4 comments · Fixed by #92

Comments

@devinprater

It would be highly beneficial if the addon could remember the last model I selected, such as GPT-4-turbo or Claude 3, and use it as the default for subsequent interactions. This would save me from having to manually select the desired model each time I use the addon.

Alternatively, implementing a feature that allows users to create a list of favorite models would greatly streamline the model selection process. With quick access to my preferred models, I could easily switch between them based on my specific needs without navigating through the entire list every time.

Also, I have noticed that the addon often defaults to the GPT-4-vision model, which is more expensive compared to other models. While GPT-4-vision has its advantages, it may not always be necessary for text-based interactions. Considering that models like Claude 3 also possess vision capabilities, it would be helpful if the addon could intelligently select a cost-effective default model based on the type of input (text or vision) to optimize usage and prevent unnecessary expenses.

Please let me know if you require any further information or clarification regarding this issue. I appreciate everything this addon has let me do, from simple requests, like how many days are in a month for my job, to describing images in books. This is honestly one of my most used addons, and I hope that these enhancements, whichever one we go with, help make it even better for the community.

@PratikP1

Just to clarify, OpenAI's GPT-4 vision model is only chosen when image analysis is requested; otherwise, the first option in the list is chosen. The request is certainly valid, though, when multiple API keys are configured for different services.
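
For reference, the behavior described above roughly amounts to the following selection logic. This is only an illustrative sketch; the function and field names (`choose_default_model`, `vision`) are hypothetical and not the add-on's actual code.

```python
def choose_default_model(models, image_attached):
    """Pick a vision-capable model only when image analysis is requested;
    otherwise fall back to the first model in the configured list."""
    if image_attached:
        # Use the first model flagged as vision-capable (e.g. GPT-4 vision).
        for model in models:
            if model.get("vision"):
                return model
    return models[0]


# Example: with no image attached, the first (cheaper) model is returned.
models = [
    {"id": "gpt-4-turbo"},
    {"id": "gpt-4-vision-preview", "vision": True},
]
print(choose_default_model(models, image_attached=False)["id"])  # gpt-4-turbo
print(choose_default_model(models, image_attached=True)["id"])   # gpt-4-vision-preview
```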

@aaclause
Owner

Hi @devinprater, thank you for your suggestion!

For the last three versions, the add-on should auto-select the last used multi-modal and non-multi-modal models. Could you check your add-on version and try again?

About favorite models: that's a great idea! I'll add an option "Mark as Favorite/Unfavorite this model" to the context menu of the model list. Favorite models will appear at the top of the list.
I won't do more for now, as this add-on will continue in a dedicated app; see #69 (comment).
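
A favorites-first ordering like the one described could look roughly like the sketch below. Storing favorites as a set of model IDs in the add-on configuration is an assumption for illustration, not the actual implementation.

```python
def order_models(models, favorite_ids):
    """Return models with favorites first, keeping the original order
    within the favorite and non-favorite groups (stable sort)."""
    return sorted(models, key=lambda m: m["id"] not in favorite_ids)


# Example: "claude-3-opus" is a favorite, so it moves to the top;
# the remaining models keep their original relative order.
models = [{"id": "gpt-4-turbo"}, {"id": "claude-3-opus"}, {"id": "gpt-3.5-turbo"}]
print([m["id"] for m in order_models(models, {"claude-3-opus"})])
# ['claude-3-opus', 'gpt-4-turbo', 'gpt-3.5-turbo']
```

Because Python's sort is stable, favorites keep their relative order at the top and the rest of the list stays unchanged below them.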

aaclause added a commit that referenced this issue Mar 30, 2024
Fixes #90

Favorite AI models by focusing on the list and using `Shift+Space` or the context menu. Favorites appear at the top.
@aaclause
Owner

Hi @devinprater,
The favorite models feature has just been implemented. See https://github.com/aaclause/nvda-OpenAI/releases/tag/0.7z
Favorite AI models by focusing on the list and using Shift+Space or the context menu. Favorites appear at the top.
Is it OK for you?
Thanks
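
As a rough illustration of the Shift+Space toggle on the model list (not the add-on's actual dialog code; the `favorites` set and how it is persisted are assumptions), a wxPython list box could handle it like this:

```python
import wx

class ModelListBox(wx.ListBox):
    """List of model names that toggles favorite status on Shift+Space."""

    def __init__(self, parent, models, favorites):
        super().__init__(parent, choices=models)
        self.favorites = favorites  # assumed set of favorited model names
        self.Bind(wx.EVT_CHAR_HOOK, self.on_char_hook)

    def on_char_hook(self, event):
        if event.ShiftDown() and event.GetKeyCode() == wx.WXK_SPACE:
            name = self.GetStringSelection()
            if name:
                # Toggle favorite status for the focused model.
                if name in self.favorites:
                    self.favorites.discard(name)
                else:
                    self.favorites.add(name)
            return  # swallow the key so it doesn't change the selection
        event.Skip()
```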

@devinprater
Author

devinprater commented Mar 31, 2024 via email

aaclause added a commit that referenced this issue Apr 1, 2024
Fixes #90

Favorite AI models by focusing on the list and using `Shift+Space` or the context menu. Favorites appear at the top.