Describe internals, how es segments/source/fields are stored/merged and search-speed compared to normal-es #24

Closed
ddorian opened this issue May 28, 2016 · 4 comments

Comments

ddorian commented May 28, 2016

Is _source saved in ES? If yes, is it possible to disable it (and only store it in Cassandra)?

ddorian commented May 28, 2016

Now that I reread it, it says "elasticsearch on top of cassandra".
I think that means _source is saved in Cassandra columns, right?
And how are ES indexes/segments stored? Is the search time roughly equal to normal ES? Is a segment stored in an SSTable?

In other words, could you describe the internals in more detail?

ddorian changed the title from "_source in es?" to "Describe internals, how es segments/source/fields are stored/merged and search-speed compared to normal-es" May 28, 2016

vroyer (collaborator) commented May 29, 2016

Hi,

Yes, _source is stored in Cassandra: each field is stored in a Cassandra column, which is persisted in SSTables. Lucene files are still managed by the original Elasticsearch code, and search features remain unchanged. During the fetch phase of a search, requested fields are retrieved from the underlying Cassandra table through a CQL request.
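As a rough sketch (not Elassandra's actual code), that fetch phase amounts to one CQL read per hit; the keyspace, table, and column names below (ks, my_table, pk) are made up for the example:

```python
# Illustrative only: read a hit's fields back from Cassandra instead of from
# an Elasticsearch _source stored field. Names ks/my_table/pk are assumptions.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("ks")

def fetch_fields(partition_key):
    # Each document field lives in a Cassandra column, persisted in SSTables,
    # so the fetch phase is an ordinary CQL SELECT on the partition key.
    row = session.execute(
        "SELECT * FROM my_table WHERE pk = %s", (partition_key,)
    ).one()
    return dict(row._asdict()) if row is not None else None
```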

If you update a Cassandra cell, a secondary index rebuilds the document from the updated row and indexes it in Elasticsearch, so search remains unchanged. Of course, to avoid duplicate results when your Cassandra replication factor is > 1, every document is indexed with a token field (the murmur3 hash of the partition key), and a token filter is added to every search request. This filter is computed from the routing table of the coordinator node; see token_ranges.
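For illustration, a minimal sketch of that duplicate-avoidance idea (the field name _token and the exact query shape are assumptions, not necessarily what Elassandra generates):

```python
# Sketch: with replication factor > 1 every row exists on several nodes, so a
# search only accepts hits whose murmur3 token falls inside the token ranges
# owned by the coordinator node; each row is then returned exactly once.
def build_token_filter(owned_ranges):
    """owned_ranges: list of (start, end] murmur3 token ranges owned by this node."""
    return {
        "bool": {
            "should": [
                {"range": {"_token": {"gt": start, "lte": end}}}
                for start, end in owned_ranges
            ],
            "minimum_should_match": 1,
        }
    }

# A node owning two token ranges would add something like this filter to every query:
print(build_token_filter([(-9223372036854775808, -3074457345618258603),
                          (3074457345618258602, 9223372036854775807)]))
```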

Hope this helps.
Thanks.

ddorian closed this as completed Jul 3, 2016

ddorian commented Jul 24, 2016

Do you do something like DataStax Solr does to remove duplicates when executing global queries?

Due to the fact that you can have several replicas of the same document in your Solr data-center (RF > 1), a proper token range predicate is added to each query before sending it to execution. Each query returns documents with row keys within the appended token range(s). Token ranges are selected in such a way that each token range is handled by exactly one node. Therefore, there is no need for additional duplicate removal and merging the results is a simple and fast union.

I mean, you say you do, but on this page https://github.com/vroyer/elassandra/blob/master/cross-datacenter-replication.md you say:

Looking at cluster state with the elasticHQ plugin, you can notice that the total number of document is twice the one available in kibana. This is because all elassandra nodes are primary, so replicated data are indexed twice in a primary shards.

And isn't ElasticHQ just doing a count there? Doesn't the count have automatic token filtering?

Thanks

ddorian reopened this Jul 24, 2016

vroyer (collaborator) commented Aug 9, 2016

ElasticHQ issues a stats request and gets shard information, including document counts and size on disk for primary shards, so no token filtering is involved. Of course, this is misleading in Elassandra, because a shard may contain data for both primary and non-primary token ranges.
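To make the difference concrete, here is a rough sketch (the index name my_index and the local endpoint are assumptions) comparing unfiltered shard stats with a token-filtered count:

```python
# Illustration of the discrepancy: shard stats count every Lucene document,
# while a count query goes through the token filter, so with RF > 1 the first
# number is roughly RF times the number of Cassandra rows.
import requests

stats = requests.get("http://localhost:9200/my_index/_stats/docs").json()
lucene_docs = stats["_all"]["primaries"]["docs"]["count"]  # unfiltered shard stats

filtered = requests.get("http://localhost:9200/my_index/_count").json()["count"]
# token-filtered count, roughly the number of Cassandra rows

print("lucene docs:", lucene_docs, "filtered count:", filtered)
```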
Thanks.
