
[Bug]: Not able to create brain on AWS Linux #2596

Open
srigurubyo opened this issue May 15, 2024 · 1 comment
Labels
area: backend, bug

Comments


srigurubyo commented May 15, 2024

What happened?

After installing Quivr on Amazon Linux and logging in with the default user ID/password, I am not able to create the first brain. Requesting your help, as we have been stuck on this issue for more than a week.

I am trying to use gemma:2b with Ollama, but (as far as I can tell) Quivr is not able to establish a backend connection to Ollama (running on the same machine).

ollama.service file content is:

```
After=network-online.target

[Service]
ExecStart=/usr/local/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=/home/linuxbrew/.linuxbrew/bin:/home/linuxbrew/.linuxbrew/sbin:/opt/amazon/openmpi/bin/:/opt/amazon/efa/bin/:/opt/conda/bin:/usr/local/cud$

[Install]
WantedBy=default.target
```
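Worth noting: Ollama listens only on 127.0.0.1 by default, which makes it unreachable from inside Docker containers. A minimal sketch of a systemd override that makes it listen on all interfaces, assuming a recent Ollama release that honors the `OLLAMA_HOST` variable:

```sh
# Sketch: have Ollama bind to all interfaces instead of loopback only.
sudo systemctl edit ollama.service
# In the editor, add the following drop-in and save:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl daemon-reload
sudo systemctl restart ollama
curl http://127.0.0.1:11434   # should answer "Ollama is running"
```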

Quivr's .env file content is:

```
# QUIVR Configuration
# This file is used to configure the Quivr stack. It is used by the docker-compose.yml file to configure the stack.

# OPENAI. Update this to use your API key. To skip OpenAI integration use a fake key, for example: tk-aabbccddAABBCCDDEeFfGgHhIiJKLmnopjklMNOPqQqQqQqQ
OPENAI_API_KEY=sk-proj-nEz9GxjWybQ4qAsScYfHT3BlbkFJ8aP6FOwg2Gxps0nhk4w5
OPENAI_API_KEY=tk-aabbccddAABBCCDDEeFfGgHhIiJKLmnopjklMNOPqQqQqQqQ

# LOCAL
OLLAMA_API_BASE_URL=http://127.0.0.1:11434 # Uncomment to activate ollama. This is the local url for the ollama api

#FRONTEND

NEXT_PUBLIC_ENV=local
NEXT_PUBLIC_BACKEND_URL=http://13.127.87.10:5050
NEXT_PUBLIC_SUPABASE_URL=http://13.127.87.10:54321
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6ImFub24iLCJleHAiOjE5ODM4MTI5OTZ9.CRXP1A7WOeoJeXxjNni43kdQwgnWNReilDMblYTn_I0
NEXT_PUBLIC_CMS_URL=https://cms.quivr.app
NEXT_PUBLIC_FRONTEND_URL=http://13.127.87.10:3000
NEXT_PUBLIC_AUTH_MODES=password

#BACKEND

SUPABASE_URL=http://host.docker.internal:54321
SUPABASE_SERVICE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZS1kZW1vIiwicm9sZSI6InNlcnZpY2Vfcm9sZSIsImV4cCI6MTk4MzgxMjk5Nn0.EGIM96RAZx35lJzdJsyH-qQwv8Hdp7fsn3W0YpN81IU
PG_DATABASE_URL=postgresql://postgres:postgres@host.docker.internal:54322/postgres
ANTHROPIC_API_KEY=null
JWT_SECRET_KEY=super-secret-jwt-token-with-at-least-32-characters-long
AUTHENTICATE=true
TELEMETRY_ENABLED=true
CELERY_BROKER_URL=redis://redis:6379/0
CELEBRY_BROKER_QUEUE_NAME=quivr-preview.fifo
QUIVR_DOMAIN=http://13.127.87.10:3000/
#COHERE_API_KEY=CHANGE_ME

#RESEND
RESEND_API_KEY=
RESEND_EMAIL_ADDRESS=onboarding@resend.dev
RESEND_CONTACT_SALES_FROM=contact_sales@resend.dev
RESEND_CONTACT_SALES_TO=

CRAWL_DEPTH=1

PREMIUM_MAX_BRAIN_NUMBER=30
PREMIUM_MAX_BRAIN_SIZE=10000000
PREMIUM_DAILY_CHAT_CREDIT=100

#BRAVE SEARCH API KEY
BRAVE_SEARCH_API_KEY=CHANGE_ME
```

The error log is:

```
backend-core | INFO: 223.184.87.182:52842 - "PUT /user/identity HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52842 - "OPTIONS /brains/ HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52842 - "POST /brains/ HTTP/1.1" 500 Internal Server Error
backend-core | ERROR: Exception in ASGI application
backend-core | Traceback (most recent call last):
backend-core |   File "/usr/local/lib/python3.11/site-packages/urllib3/connection.py", line 198, in _new_conn
backend-core |     sock = connection.create_connection(
backend-core |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/urllib3/util/connection.py", line 85, in create_connection
backend-core |     raise err
backend-core |   File "/usr/local/lib/python3.11/site-packages/urllib3/util/connection.py", line 73, in create_connection
backend-core |     sock.connect(sa)
backend-core | ConnectionRefusedError: [Errno 111] Connection refused
backend-core |
backend-core | The above exception was the direct cause of the following exception:
backend-core |
backend-core | Traceback (most recent call last):
backend-core |   File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 793, in urlopen
backend-core |     response = self._make_request(
backend-core |                ^^^^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 496, in _make_request
backend-core |     conn.request(
backend-core |   File "/usr/local/lib/python3.11/site-packages/urllib3/connection.py", line 400, in request
backend-core |     self.endheaders()
backend-core |   File "/usr/local/lib/python3.11/http/client.py", line 1281, in endheaders
backend-core |     self._send_output(message_body, encode_chunked=encode_chunked)
backend-core |   File "/usr/local/lib/python3.11/http/client.py", line 1041, in _send_output
backend-core |     self.send(msg)
backend-core |   File "/usr/local/lib/python3.11/http/client.py", line 979, in send
backend-core |     self.connect()
backend-core |   File "/usr/local/lib/python3.11/site-packages/urllib3/connection.py", line 238, in connect
backend-core |     self.sock = self._new_conn()
backend-core |                 ^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/urllib3/connection.py", line 213, in _new_conn
backend-core |     raise NewConnectionError(
backend-core | urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f99f1b51710>: Failed to establish a new connection: [Errno 111] Connection refused
backend-core |
backend-core | The above exception was the direct cause of the following exception:
backend-core |
backend-core | Traceback (most recent call last):
backend-core |   File "/usr/local/lib/python3.11/site-packages/requests/adapters.py", line 486, in send
backend-core |     resp = conn.urlopen(
backend-core |            ^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 847, in urlopen
backend-core |     retries = retries.increment(
backend-core |               ^^^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/urllib3/util/retry.py", line 515, in increment
backend-core |     raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
backend-core |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f99f1b51710>: Failed to establish a new connection: [Errno 111] Connection refused'))
backend-core |
backend-core | During handling of the above exception, another exception occurred:
backend-core |
backend-core | Traceback (most recent call last):
backend-core |   File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 164, in _process_emb_response
backend-core |     res = requests.post(
backend-core |           ^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/requests/api.py", line 115, in post
backend-core |     return request("post", url, data=data, json=json, **kwargs)
backend-core |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/requests/api.py", line 59, in request
backend-core |     return session.request(method=method, url=url, **kwargs)
backend-core |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
backend-core |     resp = self.send(prep, **send_kwargs)
backend-core |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
backend-core |     r = adapter.send(request, **kwargs)
backend-core |         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/requests/adapters.py", line 519, in send
backend-core |     raise ConnectionError(e, request=request)
backend-core | requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f99f1b51710>: Failed to establish a new connection: [Errno 111] Connection refused'))
backend-core |
backend-core | During handling of the above exception, another exception occurred:
backend-core |
backend-core | Traceback (most recent call last):
backend-core |   File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
backend-core |     result = await app(  # type: ignore[func-returns-value]
backend-core |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
backend-core |     return await self.app(scope, receive, send)
backend-core |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
backend-core |     await super().__call__(scope, receive, send)
backend-core |   File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
backend-core |     await self.middleware_stack(scope, receive, send)
backend-core |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
backend-core |     raise exc
backend-core |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
backend-core |     await self.app(scope, receive, _send)
backend-core |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 93, in __call__
backend-core |     await self.simple_response(scope, receive, send, request_headers=headers)
backend-core |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 148, in simple_response
backend-core |     await self.app(scope, receive, send)
backend-core |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
backend-core |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
backend-core |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
backend-core |     raise exc
backend-core |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
backend-core |     await app(scope, receive, sender)
backend-core |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
backend-core |     await self.middleware_stack(scope, receive, send)
backend-core |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
backend-core |     await route.handle(scope, receive, send)
backend-core |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
backend-core |     await self.app(scope, receive, send)
backend-core |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
backend-core |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
backend-core |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
backend-core |     raise exc
backend-core |   File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
backend-core |     await app(scope, receive, sender)
backend-core |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 72, in app
backend-core |     response = await func(request)
backend-core |                ^^^^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 278, in app
backend-core |     raw_response = await run_endpoint_function(
backend-core |                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
backend-core |     return await dependant.call(**values)
backend-core |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/code/modules/brain/controller/brain_routes.py", line 103, in create_new_brain
backend-core |     new_brain = brain_service.create_brain(
backend-core |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/code/modules/brain/service/brain_service.py", line 142, in create_brain
backend-core |     return self.create_brain_integration(user_id, brain)
backend-core |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/code/modules/brain/service/brain_service.py", line 192, in create_brain_integration
backend-core |     created_brain = self.brain_repository.create_brain(brain)
backend-core |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/code/modules/brain/repository/brains.py", line 24, in create_brain
backend-core |     brain_meaning = embeddings.embed_query(string_to_embed)
backend-core |                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 224, in embed_query
backend-core |     embedding = self._embed([instruction_pair])[0]
backend-core |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 199, in _embed
backend-core |     return [self._process_emb_response(prompt) for prompt in iter_]
backend-core |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 199, in <listcomp>
backend-core |     return [self._process_emb_response(prompt) for prompt in iter_]
backend-core |             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core |   File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/ollama.py", line 170, in _process_emb_response
backend-core |     raise ValueError(f"Error raised by inference endpoint: {e}")
backend-core | ValueError: Error raised by inference endpoint: HTTPConnectionPool(host='127.0.0.1', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f99f1b51710>: Failed to establish a new connection: [Errno 111] Connection refused'))
backend-core | INFO: 127.0.0.1:49220 - "GET /healthz HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52855 - "GET /user/identity HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52856 - "GET /chat HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52854 - "GET /onboarding HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52853 - "GET /user HTTP/1.1" 200 OK
backend-core | /usr/local/lib/python3.11/site-packages/pydantic/main.py:347: UserWarning: Pydantic serializer warnings:
backend-core |   Expected `enum` but got `str` - serialized value may not be as expected
backend-core |   return self.__pydantic_serializer__.to_python(
backend-core | INFO: 223.184.87.182:52853 - "GET /brains/ HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52856 - "GET /chat HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52855 - "GET /onboarding HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52854 - "GET /user/identity HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52853 - "GET /user HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52853 - "GET /brains/ HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52869 - "GET /chat HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52868 - "GET /onboarding HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52866 - "GET /user/identity HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52867 - "GET /user HTTP/1.1" 200 OK
backend-core | /usr/local/lib/python3.11/site-packages/pydantic/main.py:347: UserWarning: Pydantic serializer warnings:
backend-core |   Expected `enum` but got `str` - serialized value may not be as expected
backend-core |   return self.__pydantic_serializer__.to_python(
backend-core | INFO: 223.184.87.182:52867 - "GET /brains/ HTTP/1.1" 200 OK
backend-core | INFO: 127.0.0.1:53192 - "GET /healthz HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52878 - "GET /user/identity HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52879 - "GET /chat HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52880 - "GET /onboarding HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52877 - "GET /user HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52877 - "GET /brains/ HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52880 - "GET /user/identity HTTP/1.1" 200 OK
backend-core | INFO: 223.184.87.182:52877 - "GET /user HTTP/1.1" 200 OK
backend-core | INFO: 127.0.0.1:47156 - "GET /healthz HTTP/1.1" 200 OK
backend-core | INFO: 127.0.0.1:50366 - "GET /healthz HTTP/1.1" 200 OK
```
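The traceback shows the backend container calling Ollama's /api/embeddings at 127.0.0.1:11434; inside a container, 127.0.0.1 is the container's own loopback, not the EC2 host, which is consistent with the connection refusal. A quick way to confirm, assuming the compose service is named `backend-core` as in the log prefix and that curl is available in the image:

```sh
# From the EC2 host: Ollama should answer here if the service is up.
curl http://127.0.0.1:11434

# From inside the backend container: this hits the container's own loopback,
# so with the current config it is expected to fail with "Connection refused".
docker compose exec backend-core curl -sv http://127.0.0.1:11434

# The address the container should use instead (on Linux this needs the
# extra_hosts mapping discussed in the reply below).
docker compose exec backend-core curl -sv http://host.docker.internal:11434
```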

Thanks in advance for your help here!

@srigurubyo added the `bug` label May 15, 2024

dosubot bot commented May 15, 2024

To address the issue of Quivr not being able to create a brain due to a connection refusal with Ollama, follow these steps:

  1. Check Ollama Service: Ensure Ollama is running on port 11434 by executing `curl http://127.0.0.1:11434` on your AWS Linux machine. If it's not responding, Ollama needs troubleshooting.

  2. Update Quivr's .env File: Change `OLLAMA_API_BASE_URL` in Quivr's .env file from `http://127.0.0.1:11434` to `http://host.docker.internal:11434`. This adjustment is necessary because Quivr runs in Docker, and `localhost` or `127.0.0.1` inside a container refers to the container's own loopback interface, not the host machine (see the sketch after this list).

  3. Review Firewall and Security Settings: Make sure the AWS Linux firewall and AWS security group settings allow traffic on port 11434. Restrictions here could prevent connectivity between Quivr and Ollama.

  4. Ensure Docker Network Configuration: Verify that all services that need to interact are on the same Docker network. This might involve adjusting your docker-compose.yml so they are explicitly defined to use the same network (see the sketch after this list).

  5. Increase Logging for Troubleshooting: Enhance the logging level for both Quivr and Ollama to capture more detailed diagnostics, aiding in pinpointing the exact nature of the connectivity issue.

  6. Check Docker Compose Dependencies: In your docker-compose.yml, ensure that dependencies are correctly defined, particularly if Quivr's operation depends on Ollama being accessible.
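
Steps 2 and 4 amount to a small config change. A minimal sketch, assuming Docker Engine 20.10+ (which provides the `host-gateway` alias) and the `backend-core` service name seen in the logs; note that on Linux, `host.docker.internal` does not resolve inside containers unless it is mapped explicitly:

```yaml
# docker-compose.yml (sketch, adjust to the actual compose file):
# give the backend container a route back to the EC2 host, so that
# http://host.docker.internal:11434 reaches the Ollama server running there.
services:
  backend-core:
    extra_hosts:
      - "host.docker.internal:host-gateway"  # Docker 20.10+ resolves this to the host IP
```

With that mapping in place, set `OLLAMA_API_BASE_URL=http://host.docker.internal:11434` in the .env file. Since Quivr and Ollama run on the same EC2 instance, traffic over the Docker bridge never leaves the host, so the security group in step 3 matters less than Ollama's own bind address: it must listen on an interface the containers can reach (see the `OLLAMA_HOST` note above), not only on 127.0.0.1.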

These steps should help diagnose and resolve the connectivity issue between Quivr and Ollama. If the problem persists, please share additional logs or error details for further analysis.


@dosubot added the `area: backend` label May 15, 2024