Supported models #1
Comments
Hi, @veeral-patel, this was built with local LLMs, using either text-generation-webui (https://github.com/oobabooga/text-generation-webui) or an HTTP server I wrote a while ago that serves models from Hugging Face (https://github.com/paolorechia/learn-langchain). That said, it's not complex to write a new client; for instance, see the client for text-generation-webui. If you want to use it with OpenAI, you'd have to write a similar client in this module.
Thanks! How's the performance with these models for code writing? Just curious, I've only used OpenAI.
For simple examples, WizardLM performs quite well: https://huggingface.co/TheBloke/wizardLM-7B-HF. Of course, it's not as good as OpenAI's gpt-3.5-turbo. This guy has a nice table comparing them:
Cool! If it's not working, I can write a plug-in for GPT-3.5. Also, how well does this project integrate code into a repository? Can it modify the relevant files itself for a feature, or do we need to do it by hand?
Not yet - that's one of my long-term goals too, but I still haven't figured out how I would tackle this feature.
Take a look at AutoPR |
Thanks for this project! Which models are supported? Is it just GPT-4, or others too?