This repository has been archived by the owner on May 27, 2020. It is now read-only.

Query performance compared to DSE Search #52

Closed
samyem opened this issue Oct 23, 2015 · 11 comments

@samyem

samyem commented Oct 23, 2015

When evaluating query performance for the same schema between DSE Search with Solr and this project, we are seeing DSE be at least 10 times faster on the same cluster. Is there any reason why DSE would perform the Solr queries so much faster?

@adelapena
Contributor

As you probably know, DSE is not an open source project, and its license doesn't allow us to use it to improve this project. However, if you can give us information about your use case, environment, query patterns, etc., we will do our best to help you with performance, and any feedback to improve the project is more than welcome.

@samyem
Author

samyem commented Oct 24, 2015

I am doing a simple test on a table with two indexes - one int and one date. I have 3 C* nodes and I've been switching them between DSE + Search and plain Cassandra + Lucene. The same query expressed with DSE Solr seems to sustain over 5000 concurrent requests within 2 seconds of response time, while the Lucene-index-based search struggles to get over 50 concurrent requests. Just to be sure, I tried staggered requests spaced 7ms apart, and your Lucene-based queries respond within 15ms on average. But with other concurrent reads or writes, query performance struggles. This suggests there may be a concurrency management issue. If the reads and writes could be throttled in some way, perhaps overall performance could be improved? A profiling tool might also reveal something insightful about the concurrency issues.

@samyem
Author

samyem commented Oct 24, 2015

One more thing I noticed - the latency issue I was experiencing apparently does not apply to a single-node Cassandra. With a single node, the searches on Lucene indexes are as fast as their DSE-Solr counterparts. Once I move to a multi-node setup, the latency increases dramatically.

@samyem
Author

samyem commented Oct 25, 2015

As a workaround, I've applied a semaphore in IndexQueryHandler around the process method to limit concurrency, and this at least does not crash outright. I am now able to get over 1000 concurrent requests with a 7.5s average latency. I can live with this for my use case until the concurrency issue is resolved.
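In rough outline, the gate looks like this (a minimal sketch: IndexQueryHandler.process is the project's real entry point, but the wrapper, the signature, and the permit count here are illustrative):

```java
import java.util.concurrent.Semaphore;

// Stand-in for the project's query handler entry point; the real
// process(...) signature takes Cassandra-specific arguments.
interface IndexQueryHandler {
    Object process(Object query) throws Exception;
}

// Illustrative gate: permits bound how many searches run at once, so
// excess requests wait on the semaphore instead of piling onto Lucene.
final class ThrottledQueryHandler implements IndexQueryHandler {

    // 64 permits is an arbitrary starting point; tune per cluster.
    private static final Semaphore GATE = new Semaphore(64, true);

    private final IndexQueryHandler delegate;

    ThrottledQueryHandler(IndexQueryHandler delegate) {
        this.delegate = delegate;
    }

    @Override
    public Object process(Object query) throws Exception {
        GATE.acquire();              // block until a slot is free
        try {
            return delegate.process(query);
        } finally {
            GATE.release();          // always return the slot
        }
    }
}
```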

@adelapena
Contributor

First of all, thanks for your feedback.

Could you give us more information about your tests? What is your replication factor and consistency level? Are you using vnodes? Are you using queries and/or filters? Are you using rotational or SSD disks? How dependent on caching are your tests? Please feel free to send us whatever you deem relevant to andres at stratio dot com.

Could you please give us more info about your changes to the query handler? There is a concurrencyFactor variable in LuceneStorageProxy that could be modified for this. Maybe this concurrency factor could be specified on a per-query basis.
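For context on what such a factor controls: in Cassandra-style range scans, the coordinator typically queries token ranges in waves of at most concurrencyFactor parallel requests. A hypothetical sketch of that fan-out pattern (invented names; not the actual LuceneStorageProxy code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

// Hypothetical illustration of a range-scan concurrency factor: token
// ranges are fetched in waves of at most `concurrencyFactor` requests.
final class RangeScanSketch {

    static <R, T> List<T> fetchAll(List<R> ranges,
                                   Function<R, T> fetchOne,
                                   int concurrencyFactor) {
        List<T> results = new ArrayList<>();
        for (int i = 0; i < ranges.size(); i += concurrencyFactor) {
            int end = Math.min(i + concurrencyFactor, ranges.size());
            // Launch one wave of parallel fetches...
            List<CompletableFuture<T>> wave = new ArrayList<>();
            for (R range : ranges.subList(i, end)) {
                wave.add(CompletableFuture.supplyAsync(() -> fetchOne.apply(range)));
            }
            // ...and drain it before starting the next wave.
            for (CompletableFuture<T> f : wave) {
                results.add(f.join());
            }
        }
        return results;
    }
}
```

Raising the factor increases fan-out per query (better single-query latency, more cluster load); exposing it per query, as suggested above, would let heavy and light searches be tuned independently.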

@samyem
Author

samyem commented Oct 26, 2015

My tests have the following config:

  • Replication factor: 2
  • Nodes: 3
  • Disk: local SSD
  • No vnodes
  • Only filters are used - no queries
  • No cache - each query in the tests produces unique parameters

Each node has:

  • Baremetal 40 core Xeon processors
  • 200GB RAM

For the single-node test, I used a much less powerful VM, but the queries were very fast even at high concurrency. This seems to indicate network-based congestion. My current workaround involves gating the calls to LuceneStorageProxy.getRangeSlice, effectively throttling it to avoid tripping over the concurrency issue.
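A variation of the same gate at that call site can also shed load instead of queueing when the cluster is saturated (illustrative only; the permit count, timeout, and error handling are assumptions, not the exact patch):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Illustrative fail-fast gate around a range-slice call: callers wait up
// to 5s for a slot, then error out rather than queueing unboundedly.
final class RangeSliceGate {

    private static final Semaphore GATE = new Semaphore(32, true); // tune per cluster

    static <T> T call(Callable<T> rangeSlice) throws Exception {
        if (!GATE.tryAcquire(5, TimeUnit.SECONDS)) {
            throw new IllegalStateException("search concurrency limit reached");
        }
        try {
            return rangeSlice.call();
        } finally {
            GATE.release();
        }
    }
}
```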

@adelapena
Contributor

Ok, thanks. What is the read consistency level? How many columns do the rows have? Is your primary key composed of a partition key and a clustering key, or only a partition key? How many rows must be returned by each query? Are you using a LIMIT clause to reduce the amount of collected rows?

@samyem
Author

samyem commented Oct 26, 2015

Read consistency: ONE
Table has 34 columns; 7 of them have Lucene indexes (3 long, 1 integer, 1 UUID, 1 date, 1 string).
2 columns are in the partition key and 2 more in the clustering key. The clustering key is timestamp based.

Each query returns between 1 and 5 rows. Queries are limited to a max of 100 rows.

@neerajBaji

Any updates/new findings on this issue?

@kovalenko-boris

A benchmark of DSE vs Stratio would be nice (-:

@sandvige

sandvige commented Apr 5, 2017

👍
