
[storage] limitations of Cassandra search on LIMIT and complex queries #166

Open
vprithvi opened this issue May 16, 2017 · 14 comments
Labels
area/storage help wanted

Comments

@vprithvi (Member) commented May 16, 2017

When querying for traces using serviceName, operationName and a tag with the default LIMIT of 20, some results might be omitted.

This happens because of this logic, which does the following:

  1. Retrieve all traceIDs matching the operation name
  2. Retrieve all traceIDs matching tags
  3. Intersect 1 & 2

Because Cassandra doesn't guarantee ordering, this could eliminate results.
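The steps above can be simulated to show how results get dropped. This is a minimal sketch with hypothetical trace IDs, not Jaeger code; it only models "two unordered index queries, each truncated to LIMIT, then intersected":

```python
# Simulate the intersect-after-truncation problem: each index query is
# truncated to LIMIT *before* the intersection, and Cassandra gives no
# ordering guarantee, so the two truncated sets may barely overlap.
import random

LIMIT = 20

# Hypothetical index contents: 30 traces match both predicates,
# 70 more match only the operation name, 70 more match only the tag.
both = {f"trace-{i}" for i in range(30)}
by_operation = both | {f"op-only-{i}" for i in range(70)}
by_tag = both | {f"tag-only-{i}" for i in range(70)}

def query(ids, limit):
    """A LIMIT'ed query over an unordered result set."""
    shuffled = list(ids)
    random.shuffle(shuffled)  # no ordering guarantee
    return set(shuffled[:limit])

result = query(by_operation, LIMIT) & query(by_tag, LIMIT)
# Even though 30 traces satisfy both predicates, intersecting two
# arbitrary 20-row samples usually yields far fewer than 20 of them.
print(len(result), "of", len(both), "matching traces returned")
```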

I propose that we do the following instead (or in addition to what we do now),

  1. Retrieve all traceIds matching tags
  2. Filter by operation name

The reason for retrieving traceIds matching tags first is to target the use case where somebody searches for a jaeger-debug-id or some other low-cardinality tag, guaranteeing them a result when one exists.
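The proposed order can be sketched as follows. The helper names and in-memory index shapes are hypothetical; the sketch assumes the tag lookup can return the full, un-truncated ID set, which is only practical for sparse tags:

```python
def find_traces(tag_index, operations, tag, operation_name, limit=20):
    """Proposed order: fetch all trace IDs for the tag first, filter by
    operation name, and only then apply the limit.

    tag_index:  dict mapping tag -> set of trace IDs (hypothetical)
    operations: dict mapping trace ID -> operation name (hypothetical)
    """
    # 1. Retrieve all trace IDs matching the tag (cheap when the tag is
    #    sparse, e.g. jaeger-debug-id).
    candidates = tag_index.get(tag, set())
    # 2. Filter by operation name; a sparse tag is now guaranteed a hit
    #    whenever a matching trace exists, regardless of LIMIT.
    matching = [t for t in candidates if operations.get(t) == operation_name]
    return matching[:limit]

# Usage: a debug tag attached to a single trace is always found.
tag_index = {"jaeger-debug-id=abc": {"t1"}}
operations = {"t1": "GET /users"}
print(find_traces(tag_index, operations, "jaeger-debug-id=abc", "GET /users"))
# -> ['t1']
```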

@yurishkuro (Member) commented May 16, 2017

This is a hack that will work for very sparse tags like jaeger-debug-id, but will not work in other cases, e.g. when searching by a tag like "error=true" or http.status_code, because "retrieve all traces" becomes impossible due to volume.

@vprithvi (Member, Author) commented May 16, 2017

I see value in having this hack with the limit parameter (in addition to the current behavior) so that it still retrieves results for low cardinality tags.

@Dieterbe (Contributor) commented Nov 29, 2017

I have a problem and I'm not sure whether it's the same as what's discussed here (the descriptions of the current and desired logic don't mention how the limit comes into play). What I'm seeing is that far fewer results are returned when doing a tag search, and I have to "artificially" raise the limit to get the number I want (e.g., with limit 20 I may get 1 result; with limit 200 I get 26 results). The problem only occurs with tag searching; results are fine if the query has no tag clause.

@yurishkuro (Member) commented Nov 29, 2017

This is a known (and hard to solve) issue in the Cassandra storage implementation.

@rbtcollins (Contributor) commented Nov 29, 2017

So, this may be hard to solve, but I want to suggest that it's critical to usability: it's a non-obvious limitation that will cause a lot of head-scratching and push-back from users of a deployment.

Can you perhaps detail the Cassandra limitations that drive this behaviour somewhere? Also, what is the recommended backend? We went with Cassandra because the Uber blog post suggests Uber is running Cassandra :)

@yurishkuro (Member) commented Nov 29, 2017

"it's critical to usability" - fwiw, Zipkin has lived with the same limitation for years. You need to bump the number if you need more exotic searches. We could rename it in the UI from LIMIT to something more amorphous like "search depth".

I wouldn't say Cassandra is the recommended backend, it's mostly an operational preference for people. But Elastic doesn't have that LIMIT problem because of how ES itself implements it (fanout to all nodes where each node returns LIMIT results). The benefit of Cassandra is higher throughput.

The main issue, for say a query with two tags, is that we maintain exact-match Cassandra indices, e.g. {service-name}-{tag-key}-{tag-value} => {trace-id}. So if you search by two tags, we execute two queries, both with the LIMIT provided in the input, and then intersect the resulting sets of trace IDs. Cassandra 3.4+ supports SASI indices that we thought would address this issue (they work somewhat like ES, and you need to fan the request out to all nodes in the cluster), but their performance turned out to be even worse than ES, not just on writes but also on reads (update: it is possible we didn't use it correctly).

We've discussed a possible hack of repeating the queries for each tag, gradually increasing the LIMIT on each query until the intersection also reaches LIMIT size. We never had a chance to implement it, and we're not sure how well it would work.
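That retry idea could look roughly like the sketch below. It assumes a hypothetical `run_query(predicate, limit)` API that returns a set of trace IDs per index predicate; this is not Jaeger's actual interface:

```python
def search_with_growing_limit(run_query, predicates, limit, max_limit=10_000):
    """Re-run the per-tag index queries with a growing LIMIT until the
    intersection reaches the requested size, a query is exhausted, or a
    cap is hit.

    run_query(predicate, limit) -> set of trace IDs  (hypothetical API)
    """
    current = limit
    while True:
        results = [run_query(p, current) for p in predicates]
        intersection = set.intersection(*results)
        # A query returning fewer rows than asked for has been fully
        # drained, so raising the limit further cannot help.
        exhausted = any(len(r) < current for r in results)
        if len(intersection) >= limit or exhausted or current >= max_limit:
            return set(list(intersection)[:limit])
        current *= 2  # gradually increase the per-query LIMIT
```

Note the cost: in the worst case this re-reads the same index rows several times, which is presumably why it was never more than a discussed hack.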

So in summary - we have no plans to fix this just yet. Silver lining - we're looking into other solutions based on aggregations that could make the whole point of searching for individual traces less important.

@Dieterbe (Contributor) commented Jun 7, 2018

I do agree that it's a usability problem.
It's easy to forget about this limitation, and then people run into issues like:

  1. run an un-filtered search -> get results, look at a trace, copy-paste a tag
  2. search for that tag
  3. no results??

This makes Jaeger look like an unreliable piece of software, and people don't want to use it.

@gouthamve (Contributor) commented Aug 23, 2018

Hi, this is causing a lot of inconsistent results, and not giving me what I want. See the behaviour here: https://youtu.be/m7qZJIyCmGY

Essentially, this is giving me only the last 3-4 traces, and the inconsistency between queries is worrying, especially because all the spans might not yet have been added to the traces being shown and I'd want to see older traces.

Could we at least show a warning that the results will be off, and maybe point to this ticket? Quite frankly, I wouldn't have been able to find this issue on my own (thanks @Dieterbe for the pointer).

@tiffon (Member) commented Aug 31, 2018

@gouthamve, thanks to you and the others on this thread for calling out this issue.

This is definitely a severe issue, and it's great to know the extent it's affecting you.

I'll break the problems described into two broad categories:

  1. Challenges with search in Cassandra
  2. Challenges with results containing incomplete traces

For #1, we have two tracks for addressing it. For the longer term, we're currently prototyping a more robust (and more expressive) search, and we expect to be able to go live with it by the end of the year. It should address #1 as well as lay the groundwork for looking at aggregated data. In the shorter term, we're looking at ways to keep users better informed about the limitations of the Cassandra search. To this end, we created the UI ticket "Inform users of jaegertracing/jaeger#166 when Cassandra is the backing store".

The UI ticket (ui-243) is definitely not a solution, but would you say it would have been helpful to be aware that it's a known issue?

Determining a resolution to #2 is still a work in progress. One of the main challenges is that it's impossible to know, with 100% certainty, when a trace is complete. One approach we're considering is to show the number of spans associated with each trace in the search results and to update that number in real time; the idea being that if the number goes up while a user is viewing the search results, the trace is probably not complete. Whether this is the right approach is still TBD. I wish I had better news on this front.

Lastly, your feedback is super useful; thanks again for letting us know this came up in a severe fashion.

@rbtcollins (Contributor) commented Aug 31, 2018

Re: 2 - is there a separate ticket for that? I have some thoughts but don't think this is the right ticket.

@tiffon (Member) commented Aug 31, 2018

@rbtcollins Great! Currently, we don't have a ticket for issues around incomplete traces. Can you start one to capture your thoughts?

@yurishkuro yurishkuro changed the title jaeger-query might not return search results due to LIMIT parameter [storage] limitations of Cassandra search on LIMIT and complex queries Nov 10, 2018
@yurishkuro yurishkuro added the help wanted label Nov 10, 2018
@yurishkuro (Member) commented Nov 10, 2018

I can see three things we could do here (higher priority first):

  • Document the limitations of Cassandra search clearly, and recommend Elasticsearch to people who are interested in the search use case (as opposed to data-mining-driven navigation)
  • For Cassandra users, implement jaegertracing/jaeger-ui#243
  • Consider a different Cassandra storage implementation that uses SASI indices (an earlier experience with SASI resulted in unusable query latency, but I think we did it incorrectly; there have been improvements in Zipkin on that front, including a new tokenizer implemented specifically in Cassandra to support Zipkin tag search)

We should discuss it at the next project call next Friday.

@dobegor commented May 3, 2019

Is there any progress regarding this? ES users still can't specify tags alongside minDuration.

@yurishkuro (Member) commented May 7, 2019

This might be fixed by #1477, once released.
