[Bug]: Unable to chat and create brain #2266
🤖 Hey @DanielCow96, great to see you back! How have you been? Based on the error logs you've provided, it seems like there are two main issues you're encountering:
To address the rate limit issue, you can consider the following options:
For more information, you can refer to these similar issues in the quivr repository: [Bug]: Help! I am unable to use the chatgpt engine properly, [Bug]: Cannot choose other model than gpt-3.5-turbo, [Bug]: 429 Too Many Requests, and [BUG] Fix rate limiting: limit requests per user.
Regarding the issue of not being able to add a new brain: it's possible that you've reached the maximum number of brains allowed for your account. The maximum number of brains a user can create in the quivr application is determined by the user's settings under the key "max_brains"; if this setting is not specified, the default is 5. You may need to delete some of your existing brains, or request a quota increase if that's possible.
I hope this helps! If you have any more questions or need further clarification, feel free to ask.
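For illustration, the quota check described above can be sketched like this. This is a minimal, hypothetical sketch, not quivr's actual implementation: the key "max_brains" and the default of 5 come from the comment above, but the function names and the settings dict are assumptions.

```python
# Hypothetical sketch of the per-user brain quota described above.
# "max_brains" and the default of 5 are taken from the comment; the
# helper names and the plain-dict user settings are illustrative only.
DEFAULT_MAX_BRAINS = 5

def get_max_brains(user_settings: dict) -> int:
    # Fall back to the default when the key is absent or null.
    value = user_settings.get("max_brains")
    return value if value is not None else DEFAULT_MAX_BRAINS

def can_create_brain(user_settings: dict, current_brain_count: int) -> bool:
    # A new brain is allowed only while the user is below their quota.
    return current_brain_count < get_max_brains(user_settings)
```

Under this model, a user with no explicit setting who already has 5 brains would be refused a sixth, which matches the behavior the comment describes.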
Thanks for your contributions, we'll be closing this issue as it has gone stale. Feel free to reopen if you'd like to continue the discussion.
What happened?
Just installed, and I am unable to add a new brain or start chatting.
backend-core | [INFO] models.databases.supabase.user_usage [user_usage.py:161]: None
backend-core | [ERROR] models.databases.supabase.user_usage [user_usage.py:162]: {'code': 'XX000', 'details': None, 'hint': None, 'message': 'called Result::unwrap() on an Err value: InvalidPosition'}
backend-core | [ERROR] models.databases.supabase.user_usage [user_usage.py:163]: Error while checking if user is a premium user. Stripe needs to be configured.
backend-core | [ERROR] models.databases.supabase.user_usage [user_usage.py:166]: {'code': 'XX000', 'details': None, 'hint': None, 'message': 'called Result::unwrap() on an Err value: InvalidPosition'}
backend-core | INFO: 127.0.0.1:35872 - "GET /healthz HTTP/1.1" 200 OK
backend-core | 2024-02-26 08:30:03,687:INFO - HTTP Request: POST https://ovbvcnwemowuuuaebizd.supabase.co/functions/v1/telemetry "HTTP/1.1 200 OK"
backend-core | 2024-02-26 08:30:04,221:INFO - HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 429 Too Many Requests"
backend-core | 2024-02-26 08:30:04,221:INFO - Retrying request to /embeddings in 0.799906 seconds
backend-core | 2024-02-26 08:30:05,513:INFO - HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 429 Too Many Requests"
backend-core | 2024-02-26 08:30:05,513:INFO - Retrying request to /embeddings in 1.857299 seconds
backend-core | 2024-02-26 08:30:07,749:INFO - HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 429 Too Many Requests"
backend-core | INFO: 172.20.0.1:35864 - "POST /brains/ HTTP/1.1" 500 Internal Server Error
backend-core | ERROR: Exception in ASGI application
backend-core | Traceback (most recent call last):
backend-core | File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 428, in run_asgi
backend-core | result = await app( # type: ignore[func-returns-value]
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
backend-core | return await self.app(scope, receive, send)
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
backend-core | await super().__call__(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
backend-core | await self.middleware_stack(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
backend-core | raise exc
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
backend-core | await self.app(scope, receive, _send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 91, in __call__
backend-core | await self.simple_response(scope, receive, send, request_headers=headers)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 146, in simple_response
backend-core | await self.app(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
backend-core | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
backend-core | raise exc
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
backend-core | await app(scope, receive, sender)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 758, in __call__
backend-core | await self.middleware_stack(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 778, in app
backend-core | await route.handle(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 299, in handle
backend-core | await self.app(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 79, in app
backend-core | await wrap_app_handling_exceptions(app, request)(scope, receive, send)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
backend-core | raise exc
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
backend-core | await app(scope, receive, sender)
backend-core | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 74, in app
backend-core | response = await func(request)
backend-core | ^^^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 299, in app
backend-core | raise e
backend-core | File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 294, in app
backend-core | raw_response = await run_endpoint_function(
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
backend-core | return await dependant.call(**values)
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/code/modules/brain/controller/brain_routes.py", line 113, in create_new_brain
backend-core | new_brain = brain_service.create_brain(
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/code/modules/brain/service/brain_service.py", line 145, in create_brain
backend-core | created_brain = self.brain_repository.create_brain(brain)
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/code/modules/brain/repository/brains.py", line 20, in create_brain
backend-core | brain_meaning = embeddings.embed_query(string_to_embed)
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/openai.py", line 697, in embed_query
backend-core | return self.embed_documents([text])[0]
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/openai.py", line 668, in embed_documents
backend-core | return self._get_len_safe_embeddings(texts, engine=engine)
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/openai.py", line 494, in _get_len_safe_embeddings
backend-core | response = embed_with_retry(
backend-core | ^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/langchain_community/embeddings/openai.py", line 116, in embed_with_retry
backend-core | return embeddings.client.create(**kwargs)
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/openai/resources/embeddings.py", line 113, in create
backend-core | return self._post(
backend-core | ^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1200, in post
backend-core | return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
backend-core | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 889, in request
backend-core | return self._request(
backend-core | ^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 965, in _request
backend-core | return self._retry_request(
backend-core | ^^^^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
backend-core | return self._request(
backend-core | ^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 965, in _request
backend-core | return self._retry_request(
backend-core | ^^^^^^^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _retry_request
backend-core | return self._request(
backend-core | ^^^^^^^^^^^^^^
backend-core | File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 980, in _request
backend-core | raise self._make_status_error_from_response(err.response) from None
backend-core | openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}
backend-core | INFO: 127.0.0.1:47982 - "GET /healthz HTTP/1.1" 200 OK
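For anyone hitting the same traceback: the retry-with-backoff pattern the client is attempting can be sketched as below. This is an illustrative sketch only, using a stand-in exception so it runs without the openai SDK; note that the 429 in this log carries code 'insufficient_quota', which no amount of retrying fixes — the OpenAI account's plan/billing has to be corrected instead.

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for openai.RateLimitError so this sketch runs without the SDK."""

def embed_with_backoff(embed_fn, text, max_retries=5, base_delay=1.0):
    # Retry the embedding call with exponential backoff plus jitter.
    # This only helps with transient 429s (rate limiting); a quota-
    # exhausted 429 like the one above fails on every attempt.
    for attempt in range(max_retries):
        try:
            return embed_fn(text)
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

The delays in the log (roughly 0.8 s, then 1.9 s) are consistent with this kind of doubling schedule with jitter, which is why the request was retried twice before the final RateLimitError was raised.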
Relevant log output
No response
Twitter / LinkedIn details
No response