make run doesn't work as expected #561
Comments
Same issue here.
Same here. I can't run "make start-backend" either, because "import charset_normalizer as chardet" fails with "ModuleNotFoundError: No module named 'charset_normalizer'".
@Redrum624 Your issue seems different; it sounds like #469. Can you try the steps from the section "If pipenv doesn't work for you, you can also run" while you're in the opendevin directory?
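For the missing module itself, a direct install usually clears the import error. This is a minimal sketch assuming a pip-based environment; run it inside whatever virtualenv the backend uses:

```shell
# Install the package the traceback reports as missing.
# Assumes pip points at the same Python environment the backend runs in.
pip install charset_normalizer
```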
@Hicortab There was a very recent fix for a connection issue; can you make sure your git repo is up to date? Although from that error it looks like it already is, I'm not sure.
It may be related to the fact that the backend server takes too long to load and download dependencies, and since the log output is suppressed, the user isn't informed about it.
It seems a few more packages are required. The error messages and the respective packages are as mentioned below: Error-2: Error-3: Error-4: Error-5: Error-6: After installing the above packages, llama_index looks for a 1_Pooling/config.json in a tmp location, and that file is missing. If anyone has any thoughts on this, please share. Thank you.
When I start a fresh environment with the newest OpenDevin commit, I also get that error after just running it. Full error log:
@jay-c88, ditto error message.
Nope, I don't even have time to check before uvicorn exits with the error.
Same error here. @dheerajbhadani, there's also no container running after the command.
I managed to fix the problem by manually adding the missing JSON file from https://huggingface.co/BAAI/bge-small-en-v1.5/blob/main/1_Pooling/config.json. I hope this works for you as well.
I reinstalled the repository as one user suggested and reinstalled everything via a virtual environment, but now my problem has moved to Vue.js. user@userPCs:~/Desktop/OpenDevin$ run
unable to load configuration from /home/user/Scrivania/OpenDevin/frontend/vite.config.ts
My initial error with VueJS was that I didn't have it installed (which is actually false, because it worked perfectly until I opened the issue). I tried executing a whole series of commands to solve the problem on my own, but I think I made things worse. All the commands I ran: 05 cd Scrivania/OpenDevin/
Thanks, that solves the missing file, but now I get a different error.
Thanks, it's working, but I have to run start-backend and start-frontend separately.
Doesn't work on my side. The last three lines of output:
Same issue.
Hi @Kurtisone and @Mgrsc, can you paste the content of config.json?
Hi @dheerajbhadani, it's worse; I think I downloaded the webpage instead.
Yep, it's much better if I create the config.json and copy/paste the content. Thank you!
Yeah, that was the issue.
{
"word_embedding_dimension": 384,
"pooling_mode_cls_token": true,
"pooling_mode_mean_tokens": false,
"pooling_mode_max_tokens": false,
"pooling_mode_mean_sqrt_len_tokens": false
}
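Since some people ended up saving the HTML page instead of the raw file, the config can also be created directly from the content quoted just above. A minimal sketch; the 1_Pooling/ location is an assumption, so use whatever path your llama_index error reports:

```shell
# Write the pooling config verbatim to 1_Pooling/config.json.
# The target directory is an assumption -- match the path the
# error message says is missing.
mkdir -p 1_Pooling
cat > 1_Pooling/config.json <<'EOF'
{
  "word_embedding_dimension": 384,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false
}
EOF
```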
#573 has the solution. Tons of issues are related to the HuggingFace embedding...
OpenDevin git:(main) ✗ make start-frontend
node:internal/errors:478 Error: ENOSPC: System limit for number of file watchers reached, watch '/local/home/jdesa/Agent/OpenDevin/frontend/vite.config.ts'
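That ENOSPC error is a Linux inotify limit, not a disk-space problem. Raising fs.inotify.max_user_watches is the usual workaround; 524288 is a commonly suggested value, not an OpenDevin-specific requirement:

```shell
# Raise the inotify watcher limit for the current session (Linux, needs root).
sudo sysctl fs.inotify.max_user_watches=524288
# Persist the setting across reboots.
echo 'fs.inotify.max_user_watches=524288' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```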
Same here. I tried the various solutions proposed, but I still get this error.
@madambovarix how about this?
If it doesn't work, please open a new issue.
Yes, that did work, thanks! I had to edit uvloop out of the dependencies though, since I'm on Windows and uvloop isn't supported since their last update, but it finally worked!
Duplicate of #568
Hello, does anyone know which directory on macOS stores this config.json?
Describe the bug
When I use the make run command, the server seems to start correctly, but when I try to go to http://localhost:3001, it says it cannot connect to ws://localhost:3001/ws and gives me a whole series of WebSocket errors.
Setup and configuration
I have correctly performed all the installation steps: installed Node, Python, and Docker, and completed steps such as make build successfully.
As the LLM API key I used the OpenAI one. For the rest, this is my laptop model: HP Pavilion 15-cb017nl. The only changes made were installing an SSD and another 8 GB of RAM.