AsyncQdrantClient calls blocking event loop #615
This is kinda expected. You can't scale indefinitely by just adding parallel calls. But there are two possibilities, depending on the configuration of the collection: either the bottleneck is in disk, or the bottleneck is on the client side. Could you please try the same with gRPC?
Thanks for the response @generall. I know it's not possible to scale indefinitely using parallel calls, as you will bottleneck at some point. However, that point should be far beyond the tests I have run, and latency should not increase linearly for such a small number of requests (the /sleep endpoint, for comparison, hits 110ms instead of 100ms). The request I'm sending Qdrant is also the most basic kind (a batch recommend request would be heavier on the CPU side), yet even scaling that basic request to 2/3/5 parallel calls increases latency almost linearly.

Regarding CPU consumption: I've tried this with multiple collection configs and the latency results are always the same; only the CPU and memory usage on the pods changes. As mentioned, the DB is entirely in memory. I've experimented with different shard_number, replication_factor, and segment_count values, but the latency is always the same. I think I've assigned more than sufficient hardware resources to the DB, and I've gone through the optimization section of the docs as well. Is there some specific collection configuration I should try (assuming the DB is the bottleneck)?
If storage is all in memory and CPU usage on the DB side is small, I don't think the DB is the bottleneck, actually. I would try to check the client's CPU usage. You can check
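One stdlib way to approximate the client-side CPU check from inside the process (this is my own sketch, not necessarily the tool the comment above was going to name): compare CPU time to wall time around the suspect code path.

```python
import time

def cpu_share(fn) -> float:
    """Fraction of wall time this process spent on CPU while fn ran."""
    w0, c0 = time.perf_counter(), time.process_time()
    fn()
    wall = time.perf_counter() - w0
    cpu = time.process_time() - c0
    return cpu / wall if wall > 0 else 0.0

# A CPU-bound workload should report a share near 1.0 ...
busy = cpu_share(lambda: sum(i * i for i in range(2_000_000)))
# ... while an I/O-wait-like workload should report a share near 0.0.
idle = cpu_share(lambda: time.sleep(0.2))
print(f"busy: {busy:.2f}, idle: {idle:.2f}")
```

Wrapping the load test's request loop in `cpu_share` would distinguish "client is CPU-bound" (share near 1) from "client is waiting on the server" (share near 0).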
Currently trying to utilise Qdrant for a production use case that requires building an API for real-time vector search. It is crucial for this to be async so that it scales well for multiple users. But when using the async client with FastAPI and Uvicorn, latency increases as the number of concurrent users increases.
Current Behavior
Latency of the endpoint increases as the number of concurrent users increases.
Steps to Reproduce
4. The /recommend endpoint does not do this. At 1 concurrent user the p99 latency is 10ms, 65ms at 10 users, and 260ms at 50 concurrent users. (This is unexpected if the requests are processed concurrently.)
Uvicorn version: 0.27.1
FastAPI version: 0.110.0
Qdrant version: 1.7.3 (both server and client)
Qdrant is deployed on k8s with 10 pods and 20 vCPUs each. All vectors are in memory, and pod utilization is <1 vCPU during testing.
Expected Behavior
The requests should be processed concurrently. Perhaps the async call through the client is blocking the event loop and preventing FastAPI from processing other requests.
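The blocked-event-loop hypothesis can be tested directly without a profiler: run a heartbeat coroutine alongside the suspect call and watch for stretched gaps between ticks. A minimal sketch, where `suspect_call` is a hypothetical stand-in (it uses `time.sleep` to deliberately block, simulating what a misbehaving client call would do):

```python
import asyncio
import time

async def heartbeat(gaps: list, interval: float = 0.01) -> None:
    """Record gaps between wakeups; a blocked loop stretches them."""
    last = time.perf_counter()
    while True:
        await asyncio.sleep(interval)
        now = time.perf_counter()
        gaps.append(now - last)
        last = now

async def suspect_call() -> None:
    # Stand-in for the client call under suspicion; time.sleep
    # blocks the loop on purpose so the detector has something to find.
    time.sleep(0.2)

async def main() -> float:
    gaps: list = []
    hb = asyncio.create_task(heartbeat(gaps))
    await asyncio.sleep(0.05)   # let the heartbeat settle
    await suspect_call()
    await asyncio.sleep(0.05)   # give the heartbeat one post-call tick
    hb.cancel()
    return max(gaps)

worst_gap = asyncio.run(main())
# A gap far above the 10ms interval means the awaited call
# held the event loop instead of yielding.
print(f"worst heartbeat gap: {worst_gap:.3f}s")
```

Replacing `suspect_call` with an actual `AsyncQdrantClient` request in the real app would show whether the gap grows during that call, which would confirm the event loop is being held.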