
Can I use a custom model, such as Vicuna or ChatGLM-6B? #415

Open
BlackLuny opened this issue Apr 25, 2023 · 4 comments

Comments

@BlackLuny

As a user of bloop, I would like the ability to replace the default model with a custom model of my choice for conversations.

Currently, bloop uses an OpenAI model by default to hold conversations based on the provided input. While that model is powerful and produces high-quality text, it may not be the best fit for every use case. Some users may want a different model that is better suited to their specific needs or data.

Therefore, I suggest adding a feature that lets users specify a custom model for bloop to use in conversations. This would enable users to run their own models, or models trained specifically for their use case, potentially leading to better results and higher accuracy.

I believe this feature would be a valuable addition to bloop and would greatly enhance its usefulness and versatility. Thank you for considering my request.

@Chrusciki

I agree, let me use my self-hosted LLMs instead of OpenAI.

@nekomeowww

Upvote for this feature.

I have my own CodeLlama and BigCode instances running; these models have helped me a lot when dealing with large volumes of code review, inspection, auditing, and pair programming.

If it is not viable for bloop to support such a wide variety of models, is it possible to support a custom endpoint? With custom endpoints supported:

  1. Users can port their favorite models to the OpenAI-flavored API spec so that bloop can benefit from them.
  2. Users can set up a reverse proxy when they need to bypass firewall rules to access bloop and OpenAI from restricted networks.

Alternatively, is a proxy supported? If so, users could define proxy PAC files and re-route all LLM-related requests from bloop to their desired endpoints and hosts.
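To make point 1 concrete: "OpenAI flavored" really just means the request/response JSON shape, so the exact same chat-completions call can target OpenAI or a self-hosted server. Here is a minimal Rust sketch; the `LLM_BASE_URL`/`LLM_API_KEY` environment variables and the dependency choices are my own assumptions for illustration, not anything bloop ships today:

```rust
// Minimal sketch: one OpenAI-shaped request whose destination is swappable.
// Assumed Cargo deps: reqwest = { version = "0.11", features = ["json"] },
// serde_json = "1", tokio = { version = "1", features = ["full"] }.
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // "https://api.openai.com" for OpenAI, or e.g. "http://localhost:8000"
    // for a local model served behind an OpenAI-flavored API (hypothetical
    // LLM_BASE_URL switch, purely for illustration).
    let base = std::env::var("LLM_BASE_URL")
        .unwrap_or_else(|_| "https://api.openai.com".to_string());

    let resp: serde_json::Value = reqwest::Client::new()
        .post(format!("{base}/v1/chat/completions"))
        .bearer_auth(std::env::var("LLM_API_KEY").unwrap_or_default())
        .json(&json!({
            "model": "gpt-4",
            "messages": [{"role": "user", "content": "Hello"}],
        }))
        .send()
        .await?
        .error_for_status()?
        .json()
        .await?;

    // Prints the assistant message as a JSON value.
    println!("{}", resp["choices"][0]["message"]["content"]);
    Ok(())
}
```

Running this with `LLM_BASE_URL=http://localhost:8000` would exercise a local model; leaving it unset hits OpenAI. That switch is exactly the kind of support I am asking for.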

@ibrahimkettaneh

Bloop has been very intuitive to use and performs its functions excellently.

I would greatly appreciate having this feature as well, in any form you see suitable, such as an implementation where, instead of reaching an OpenAI URL, bloop reaches a custom URL.

Thank you very much for your work and efforts. They are sincerely appreciated.

@ggordonhall
Contributor

We've recently open-sourced the LLM backend (https://github.com/BloopAI/bloop/tree/oss/server/bleep/src/llm). It currently only supports OpenAI, but feel free to open a PR adding support for a local LLM provider!
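For anyone who wants to pick this up, here is a hypothetical sketch of the rough shape such a provider abstraction could take. The trait and struct names below are illustrative assumptions only, not the actual types in server/bleep/src/llm:

```rust
// Hypothetical provider abstraction, NOT bloop's actual API; the names and
// signatures here are assumptions for illustration.
// Assumed Cargo deps: async-trait = "0.1", anyhow = "1",
// reqwest = { version = "0.11", features = ["json"] }, serde_json = "1".
use async_trait::async_trait;
use serde_json::json;

#[async_trait]
pub trait LlmProvider: Send + Sync {
    /// Produce a completion for `prompt`.
    async fn complete(&self, prompt: &str) -> anyhow::Result<String>;
}

/// One implementation can cover OpenAI and any OpenAI-compatible local
/// server (vLLM, llama.cpp's server, ...): only `base_url` differs.
pub struct OpenAiCompatible {
    pub base_url: String,
    pub api_key: String,
    pub model: String,
}

#[async_trait]
impl LlmProvider for OpenAiCompatible {
    async fn complete(&self, prompt: &str) -> anyhow::Result<String> {
        // Standard OpenAI chat-completions wire format.
        let resp: serde_json::Value = reqwest::Client::new()
            .post(format!("{}/v1/chat/completions", self.base_url))
            .bearer_auth(&self.api_key)
            .json(&json!({
                "model": self.model,
                "messages": [{"role": "user", "content": prompt}],
            }))
            .send()
            .await?
            .error_for_status()?
            .json()
            .await?;
        resp["choices"][0]["message"]["content"]
            .as_str()
            .map(str::to_owned)
            .ok_or_else(|| anyhow::anyhow!("malformed completion response"))
    }
}
```

Callers would hold a `Box<dyn LlmProvider>`, so adding a local provider becomes a matter of constructing a different value rather than touching call sites.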
