best working local llm #1336
Comments
In general, local models aren't powerful enough yet to run OpenDevin. I hear llama3 is promising though!
Instruction fine-tuning of the LLM is very important. Both Phi-3 and llama3 are suitable for use with this project after fine-tuning. Some commercial companies have already had successful cases.
I tried llama3, but it was complaining about the short context length. I've never trained a model; I don't even know what data I should train it on, or how to train it at all. Maybe someone will train one for the OpenDevin community.
The cost of instruction fine-tuning a model of about 7B parameters is not high: a graphics card with 12-24 GB of VRAM is enough, and a LoRA run can finish in about 5 hours. Many platforms also offer free GPU time.
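The reason LoRA fits on a 12-24 GB card is that it trains only small low-rank adapter matrices rather than the full 7B weights. A rough back-of-envelope sketch, using illustrative numbers (32 layers, hidden size 4096, rank 8, adapters on two attention projections per layer; these are common defaults, not figures from any specific checkpoint):

```python
# Back-of-envelope estimate of trainable LoRA parameters for a ~7B model.
# All shape numbers below are assumptions for illustration.

def lora_params(layers=32, hidden=4096, rank=8, targets=2):
    """Count trainable parameters when each targeted weight matrix
    W (hidden x hidden) is adapted with two low-rank factors:
    A (hidden x rank) and B (rank x hidden)."""
    per_matrix = 2 * hidden * rank
    return layers * targets * per_matrix

total = lora_params()
print(total)               # trainable adapter parameters: 4,194,304
print(total / 7e9 * 100)   # well under 0.1% of a 7B base model
```

So only a few million parameters need optimizer state and gradients, which is why the VRAM bill stays modest compared to full fine-tuning.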
Same problem here. I tried llama3 on ollama, but every time it failed to produce a complete app.
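For the context-length complaints mentioned above: ollama defaults to a fairly small context window, and it can be raised via a Modelfile. A minimal sketch (the model tag `llama3` and the name `llama3-8k` are just examples; pick a window the model and your VRAM actually support):

```
FROM llama3
PARAMETER num_ctx 8192
```

Then build and run the variant with `ollama create llama3-8k -f Modelfile` followed by `ollama run llama3-8k`. This won't make a weak model smarter, but it may stop the short-context errors.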
Hey guys,
I tried many different local LLMs using the oobabooga web UI + OpenDevin, but never really got it to work. I guess the models were just too weak. What are your experiences? Which models worked best and actually completed your task well? Maybe you can link the model you used, the task you gave it, and a quick description of the result, or even upload the actual code written by OpenDevin.
My specs: i5-11400, 32 GB RAM, RTX 3080.