Replies: 2 comments 1 reply
-
If you are running the docker-compose example, you should specify http://api:8080 as the address: inside the compose network, containers reach each other by service name, not via localhost. See the sketch below.
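As a rough illustration of that layout, assuming the shape of the LocalAI docker-compose example (the image tag, ports, and volume paths here are assumptions, not taken from this thread):

```yaml
# Minimal sketch of the LocalAI compose example; values are illustrative.
version: "3.6"
services:
  api:                      # other containers address this as http://api:8080
    image: quay.io/go-skynet/local-ai:latest
    ports:
      - "8080:8080"         # the host reaches it as http://localhost:8080
    volumes:
      - ./models:/models    # model files (e.g. ggml-gpt4all-j.bin) go here
    environment:
      - MODELS_PATH=/models
```

From the host, http://localhost:8080 works because of the port mapping; from another container on the same compose network, only the service name http://api:8080 resolves.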
-
Hi, I am facing a similar issue, but with http://127.0.0.1:8080/v1/chat/completions and the model ggml-gpt4all-j.bin. Can someone help me with this?
-
Hello,
I've been setting up Flowise and LocalAI on my machine using Docker. I'm not an expert in coding, but I've managed to get some systems running locally. I've made sure the appropriate model files are in the /models directory and that the Pinecone and OpenAI API settings are correct.
Both Flowise and LocalAI are accessible at their respective addresses: http://localhost:3000/ for Flowise and http://localhost:8080/ for LocalAI. However, I'm encountering an issue when trying to connect to the LocalAI server from the ChatLocalAI module in the Conversational Retrieval QA Chain on Flowise. Specifically, I'm receiving an ECONNREFUSED error on 127.0.0.1:8080 when I send a request.
To troubleshoot, I've tested this setup on two different PCs and consistently hit the same error. Interestingly, I was able to communicate with the model directly from VS Code, but I haven't been able to do so from within Flowise.
Any guidance on how to resolve this issue would be greatly appreciated.
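That ECONNREFUSED is consistent with the first reply above: inside the Flowise container, 127.0.0.1 refers to the Flowise container itself, not to the host where LocalAI is listening, which is also why requests from VS Code on the host succeed. One way to wire this up is to run both services in a single compose file and point the ChatLocalAI base path at the LocalAI service name. A minimal sketch, with hypothetical service names `localai` and `flowise` (images and ports are illustrative, not taken from the posts):

```yaml
# Sketch: Flowise and LocalAI on one compose network; names are assumptions.
version: "3.6"
services:
  localai:
    image: quay.io/go-skynet/local-ai:latest
    ports:
      - "8080:8080"       # still http://localhost:8080 from the host
    volumes:
      - ./models:/models
  flowise:
    image: flowiseai/flowise
    ports:
      - "3000:3000"
    # In the ChatLocalAI node, set the Base Path to
    # http://localai:8080/v1 (the service name, not 127.0.0.1).
```

If the two containers must stay in separate compose projects, attaching both to a shared external Docker network, or (on Docker Desktop) using http://host.docker.internal:8080, are common alternatives.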