Query performance compared to DSE Search #52
As you probably know, DSE is not an open source project, and its license doesn't allow us to use it to improve this project. However, if you can give us information about your use case, environment, query patterns, etc., we will do our best to help you with performance, and any feedback for improving the project will be more than welcome.
I am doing a simple test on a table with two indexes, one int and one date. I have 3 Cassandra nodes and I've been switching them between DSE + Search and plain Cassandra + Lucene. The same query expressed with DSE Solr seems to sustain over 5000 concurrent requests within 2 seconds of response time, while the Lucene-index-based search struggles to get over 50 concurrent requests. Just to be sure, I tried making staggered requests, each 7 ms apart, and your Lucene-based queries respond within 15 ms on average. But with other concurrent reads or writes, query performance suffers. This suggests there may be a concurrency management issue. If the reads and writes could be throttled in some way, perhaps overall performance could be improved? Profiling the concurrency behaviour might reveal something insightful as well.
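The staggered-request measurement described above can be sketched roughly like this. This is a minimal illustration, not the reporter's actual harness: the query is passed in as a generic `Callable`, and the 7 ms gap and request count are parameters (a stub stands in for a real CQL/Lucene query).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;

// Sketch of a staggered load test: issue one request every `gapMillis`,
// record each request's latency, and report the mean latency in milliseconds.
public class StaggeredLoadTest {

    public static double meanLatencyMillis(Callable<?> query, int requests, long gapMillis)
            throws Exception {
        List<Long> latencies = new ArrayList<>();
        for (int i = 0; i < requests; i++) {
            long start = System.nanoTime();
            query.call(); // the real test would run the indexed CQL query here
            latencies.add((System.nanoTime() - start) / 1_000_000L);
            Thread.sleep(gapMillis); // stagger: next request only after the gap
        }
        return latencies.stream().mapToLong(Long::longValue).average().orElse(0);
    }
}
```

Because each request waits for the previous one plus the gap, this measures per-request latency without concurrency, which is what isolates the contention effect the comment describes.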
One more thing I noticed: the latency issue I was experiencing apparently does not occur on a single-node Cassandra. With a single node, searches on Lucene indexes are as fast as their DSE Solr counterparts. Once I move to a multi-node setup, the latency increases dramatically.
As a workaround, I've applied a semaphore block in IndexQueryHandler around the process method to limit concurrency, and with this it at least does not crash outright. I am now able to get over 1000 concurrent requests with 7.5 s average latency. I can live with this for my use case until the concurrency issue is resolved.
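The semaphore workaround can be sketched as follows. This is an assumption-laden illustration, not the project's real `IndexQueryHandler`: the wrapper class, the `Callable` interface for the query task, and the permit count are all invented for the example.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;

// Hypothetical throttling wrapper: caps how many index queries may run
// concurrently. Excess callers block until a permit is released.
public class ThrottledQueryHandler {

    private final Semaphore permits;

    public ThrottledQueryHandler(int maxConcurrentQueries) {
        // Fair mode so blocked queries are served in arrival order.
        this.permits = new Semaphore(maxConcurrentQueries, true);
    }

    /** Runs the supplied query task, blocking while too many are in flight. */
    public <T> T process(Callable<T> queryTask) throws Exception {
        permits.acquire();
        try {
            return queryTask.call();
        } finally {
            permits.release(); // always release, even if the query throws
        }
    }
}
```

A gate like this trades individual latency (requests queue behind the semaphore) for stability under load, which matches the reported shift from crashes to 7.5 s average latency at 1000 concurrent requests.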
First of all, thanks for your feedback. Could you give us more information about your tests? What are your replication factor and consistency level? Are you using vnodes? Are you using queries and/or filters? Are you using rotational or SSD disks? How dependent on caching are your tests? Please feel free to send us whatever you deem relevant to andres at stratio dot com. Could you please give us more info about your changes in the query handler? There is a
My tests have the following configuration. Each node has:
For the single-node test I used a much less powerful VM, but the queries were very fast even under high concurrency. This seems to indicate network-based congestion. My current workaround involves gating the calls to LuceneStorageProxy.getRangeSlice, effectively throttling it to avoid tripping over the concurrency issue.
Ok, thanks. What is the read consistency level? How many columns do the rows have? Is your primary key composed of a partition key and a clustering key, or only a partition key? How many rows must be returned by each query? Are you using a
Read consistency: one. Each query returns between 1 and 5 rows, and is limited to a maximum of 100 rows.
Any updates/new findings on this issue?
A benchmark of DSE vs Stratio would be nice :-)
👍 |
When evaluating query performance for the same schema between DSE Search (Solr) and this project, we are noticing DSE to be at least 10 times faster in the same cluster. Is there any reason why DSE performs these queries so much faster?