[Search] Caching issue with searching last traces - no new traces are displayed #2476

Closed
MurzNN opened this issue May 15, 2023 · 5 comments
Labels
stale (Used for stale issues / PRs)

Comments

@MurzNN
Contributor

MurzNN commented May 15, 2023

Describe the bug
The search for the most recent traces in Tempo via the Grafana UI seems to get cached or load-balanced at some point, because I stop seeing new traces.

After several clicks on "Run query" the list of traces changes, but it only contains old traces, no new ones; it looks like two cached results are alternating with each other.

At the same time I can find my new traces by searching for their trace_id, yet they are still missing from the list of recent traces!

Restarting the Tempo container temporarily resolves the issue and all recent traces appear, but the problem returns again after some time.

To Reproduce
Steps to reproduce the behavior:

  1. Configure Grafana and Tempo locally with Docker, using a setup like https://github.com/grafana/tempo/blob/main/example/docker-compose/local/docker-compose.yaml
  2. Push some traces from your app to Tempo.
  3. In the Grafana UI, open Explore > Tempo > Search and click "Run query".
  4. Observe the most recent traces.
  5. Continue pushing more traces and track them by clicking "Run query" again and again.
  6. After some time you will stop seeing fresh traces. The query appears to be cached, possibly in two or three versions: after repeatedly clicking "Run query" you start seeing two different cached lists of traces, neither of which contains the new ones. (To check this outside of Grafana, see the sketch for querying Tempo's search API directly after these steps.)
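
To distinguish Grafana-side caching from Tempo's own search behavior, it can help to hit Tempo's search API directly with a narrow, recent time range. Below is a minimal sketch, assuming the default Tempo HTTP port 3200 from the linked docker-compose example and the documented /api/search parameters (start/end in Unix seconds, limit as a result count); response field names may differ between Tempo versions.

```python
# Minimal sketch: query Tempo's search API directly to see which traces it
# returns for the last few minutes, bypassing Grafana entirely.
# Assumptions: Tempo reachable at localhost:3200 (as in the linked
# docker-compose example) and the standard /api/search response shape.
import json
import time
import urllib.request

TEMPO_URL = "http://localhost:3200"  # assumed default local port


def search_recent(minutes: int = 5, limit: int = 50):
    """Return the traces Tempo finds in the last `minutes`, up to `limit`."""
    end = int(time.time())
    start = end - minutes * 60
    url = f"{TEMPO_URL}/api/search?start={start}&end={end}&limit={limit}"
    with urllib.request.urlopen(url) as resp:
        body = json.load(resp)
    return body.get("traces", [])


if __name__ == "__main__":
    for trace in search_recent():
        # startTimeUnixNano is returned as a string of nanoseconds
        started = int(trace.get("startTimeUnixNano", "0")) / 1e9
        print(trace.get("traceID"), time.strftime("%H:%M:%S", time.localtime(started)))
```

If new trace IDs show up here but not in Grafana's Explore results, the problem is more likely on the Grafana/datasource side; if they are missing here too, it matches the search behavior described further down in this thread.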

Expected behavior
The Search result should always include the most recent traces.

Environment:

  • Infrastructure: local
  • Deployment tool: Docker Compose

Additional Context

@mapno
Member

mapno commented May 16, 2023

Hi! Tempo does not necessarily return the most recent traces in the requested time range. It simply returns the first matching traces it finds. You’ve searched with no parameters so it will return 20 (default) random traces in the time period.

You can try increasing the number of results with query_frontend.search.default_result_limit (which defaults to 20), or reducing the search window to the last X minutes.
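
For reference, here is a sketch of where that knob lives in the Tempo configuration file; the option names follow the query_frontend.search block in the Tempo configuration docs and should be verified against the version you run:

```yaml
# Sketch of the query-frontend search settings (verify option names and
# defaults against your Tempo version).
query_frontend:
  search:
    default_result_limit: 50   # results returned when the query sets no limit (default 20)
    max_result_limit: 100      # upper bound on the limit a query may request
```

Combined with a shorter search window in Grafana (for example the last 5–15 minutes), this makes it more likely that recently ingested traces appear among the returned results.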

@MurzNN
Contributor Author

MurzNN commented May 16, 2023

Thank you for this surprising information!
So, is there any way to build a search query that returns all of the most recent traces, ordered by timestamp descending, with a limit?

@github-actions
Contributor

This issue has been automatically marked as stale because it has not had any activity in the past 60 days.
The next time this stale check runs, the stale label will be removed if there is new activity. The issue will be closed after 15 days if there is no new activity.
Please apply the keepalive label to exempt this issue.

github-actions bot added the stale label on Jul 16, 2023
github-actions bot closed this as not planned on Aug 1, 2023
@anarcher

I can't find the latest traces either, and am seeing random results. Is there any room for improvement here?

@joe-elliott
Member

> I can't find the latest traces either, and am seeing random results. Is there any room for improvement here?

Yes. The original decision to return the first N results that matched was due to the poor efficiency of searching the old v2 backend. I do believe with the newer backend we can start moving toward deterministic results, but this is not on our roadmap at the moment.
