compactor without GCS permissions fails silently #379

Closed
jahrlin opened this issue Nov 27, 2020 · 1 comment · Fixed by #397

jahrlin commented Nov 27, 2020

Describe the bug
If the compactor does not have access to the GCS bucket to read blocks, it fails silently, with no log lines indicating that it failed to read from GCS. It is completely silent after:

level=info ts=2020-11-27T14:45:38.226122896Z caller=compactor.go:97 msg="enabling compaction"
level=info ts=2020-11-27T14:45:38.226173262Z caller=tempodb.go:349 msg="compaction and retention enabled."

To Reproduce
Steps to reproduce the behavior:

  1. Use GCS backend with a private bucket
  2. Deploy Tempo in a GKE cluster, set serviceAccountName for ingester and querier, but NOT for compactor
  3. Send traces and wait for compactor to begin compacting

Expected behavior
Compactor should log an error stating that it cannot read from GCS due to lack of permissions.
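
Purely for illustration, here is a minimal sketch of one way this could be surfaced early: a startup probe that attempts a read against the bucket and fails loudly if the service account lacks permissions. The checkBucketAccess helper and its wiring are hypothetical, not Tempo's actual code; only the GCS client calls are real API.

package gcs

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

// checkBucketAccess is a hypothetical startup probe: it attempts a single
// list call against the bucket and returns any error to the caller instead of
// letting the compaction loop fail silently later. A 403 caused by a missing
// serviceAccountName would surface here, at startup, with a clear message.
func checkBucketAccess(ctx context.Context, bucketName string) error {
	client, err := storage.NewClient(ctx)
	if err != nil {
		return fmt.Errorf("creating GCS client: %w", err)
	}
	defer client.Close()

	it := client.Bucket(bucketName).Objects(ctx, nil)
	if _, err := it.Next(); err != nil && err != iterator.Done {
		return fmt.Errorf("listing objects in bucket %q: %w", bucketName, err)
	}
	return nil
}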

Environment:

  • Kubernetes on GKE
  • jsonnet (we use Kustomize to patch)
  • Tempo v0.3.0

Additional Context
The clue came from port-forwarding to 3100 on the compactor and scrolling through /metrics, where I eventually saw this:

tempodb_gcs_request_duration_seconds_bucket{operation="GET",status_code="403",le="5.12"} 1

which led me to realize we had not set serviceAccountName for the compactor deployment.

@joe-elliott (Member) commented

This is odd. I suspect we need to look for an error return in the GCS client that we're not logging or returning correctly. Thanks for the issue. Will look into it.
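
Purely as illustration, a minimal sketch of the kind of handling on the read path that would avoid the silent failure, assuming a go-kit style logger like the one behind the log lines quoted above. The readBlockObject helper and its signature are hypothetical and are not the actual tempodb GCS backend; the storage and go-kit calls are real API.

package gcs

import (
	"context"
	"fmt"
	"io/ioutil"

	"cloud.google.com/go/storage"
	"github.com/go-kit/kit/log"
	"github.com/go-kit/kit/log/level"
)

// readBlockObject is a hypothetical read helper: the important part is that
// the error from NewReader (e.g. a 403 caused by missing permissions) is both
// logged and returned to the caller, so the compactor cannot fail silently.
func readBlockObject(ctx context.Context, logger log.Logger, bucket *storage.BucketHandle, name string) ([]byte, error) {
	r, err := bucket.Object(name).NewReader(ctx)
	if err != nil {
		level.Error(logger).Log("msg", "failed to read object from GCS", "object", name, "err", err)
		return nil, fmt.Errorf("reading %s: %w", name, err)
	}
	defer r.Close()

	return ioutil.ReadAll(r)
}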
