
Retry TaskRef resolution on etcd errors #4392

Merged: 1 commit merged into tektoncd:main on Dec 3, 2021
Conversation

@lbernick (Member)

Changes

Prior to this change, querying etcd for a Task referenced by a TaskRun
could fail due to an error "etcdserver: leader changed", as reported in #4116.
I believe this error is transient.

This error doesn't appear to be wrapped in any error type exported by
the Kubernetes Go client or Knative, so I've reported this upstream to
the Go client in kubernetes/kubernetes#106491.
To address the user issue until the upstream issue can be resolved,
this commit uses string comparison to detect and retry the error.
A similar strategy is used in Knative for retrying this error
during their integration tests (see
https://github.com/knative/pkg/blob/517ef0292b5362ec316d227a866821bc31ec99a8/reconciler/retry.go#L50-L66).

We don't currently have a strategy for reproducing this error, so it's
not known whether this commit will solve the problem.
We will have to see if this error continues to occur after this commit is released.

Co-authored-by: Dibyo Mukherjee <dibyajyoti@google.com> (@dibyom)
Co-authored-by: Scott Seaward <sbws@google.com> (@sbwsg)

/kind bug
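
To make the approach concrete, here is a minimal sketch of the string-comparison retry described above. The names `isTransientEtcdError` and `withRetries`, and the bounded retry loop, are assumptions for illustration only, not the exact code merged in this PR:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// isTransientEtcdError reports whether err matches a known transient etcd
// failure. String comparison is the workaround because the error is not
// wrapped in any exported error type (see kubernetes/kubernetes#106491);
// more transient error strings can be added here as they surface.
func isTransientEtcdError(err error) bool {
	return err != nil && strings.Contains(err.Error(), "etcdserver: leader changed")
}

// withRetries invokes get, retrying only on transient etcd errors, up to
// maxAttempts total attempts.
func withRetries(get func() error, maxAttempts int) error {
	var err error
	for i := 0; i < maxAttempts; i++ {
		err = get()
		if err == nil || !isTransientEtcdError(err) {
			return err
		}
	}
	return fmt.Errorf("still failing after %d attempts: %w", maxAttempts, err)
}

func main() {
	attempts := 0
	err := withRetries(func() error {
		attempts++
		if attempts == 1 {
			return errors.New("etcdserver: leader changed") // transient on first try
		}
		return nil // lookup succeeds on the retry
	}, 3)
	fmt.Printf("err=%v after %d attempts\n", err, attempts)
}
```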

Submitter Checklist

As the author of this PR, please check off the items in this checklist:

  • Docs included if any changes are user facing
  • Tests included if any functionality added or changed
  • Follows the commit message standard
  • Meets the Tekton contributor standards (including functionality, content, code)
  • Release notes block below has been filled in or deleted (only if no user facing changes)

Release Notes

Retry Task resolution on transient etcd errors

@tekton-robot tekton-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/bug Categorizes issue or PR as related to a bug. labels Nov 23, 2021
@tekton-robot tekton-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Nov 23, 2021
@tekton-robot (Collaborator)

The following is the coverage report on the affected files.
Say /test pull-tekton-pipeline-go-coverage to re-run this coverage report

File | Old Coverage | New Coverage | Delta
---- | ------------ | ------------ | -----
pkg/reconciler/pipelinerun/pipelinerun.go | 82.6% | 82.7% | 0.1
pkg/reconciler/taskrun/resources/taskref.go | 77.3% | 75.6% | -1.7
pkg/reconciler/taskrun/taskrun.go | 79.8% | 79.9% | 0.1

@ghost commented Dec 1, 2021

Something I'm not sure about: Does the taskrun or pipelinerun automatically get requeued for processing again if the returned error isn't permanent? If not, is there a risk that the tr or pr ends up just "hanging around" waiting for further updates to trigger re-reconcile?

@afrittoli (Member) left a comment


Thanks for this change @lbernick.
I like the approach of adding a new function, so we can extend it when we find more transient errors to handle.

I agree with @sbwsg that it would be nice to name it something that starts with Is.

/approve

@tekton-robot (Collaborator)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: afrittoli

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@tekton-robot tekton-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Dec 2, 2021

@ghost commented Dec 3, 2021

/lgtm

@tekton-robot tekton-robot assigned ghost Dec 3, 2021
@tekton-robot tekton-robot added the lgtm Indicates that a PR is ready to be merged. label Dec 3, 2021
@tekton-robot tekton-robot removed the lgtm Indicates that a PR is ready to be merged. label Dec 3, 2021

@lbernick (Member, Author) commented Dec 3, 2021

> Something I'm not sure about: Does the taskrun or pipelinerun automatically get requeued for processing again if the returned error isn't permanent? If not, is there a risk that the tr or pr ends up just "hanging around" waiting for further updates to trigger re-reconcile?

My understanding is that the taskrun/pipelinerun will be requeued if the error returned is not permanent (based on the sentence "If event is an error ... the error will be returned from Reconciler and key will requeue back into the reconciler key queue" in the knative generated reconciler docs and also this commit).
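
As a rough sketch of that distinction (assuming knative.dev/pkg/controller, which Tekton vendors; reconcileError is a hypothetical helper, not code from this PR): returning a plain error causes the generated reconciler to requeue the key, while wrapping it in controller.NewPermanentError does not.

```go
package main

import (
	"errors"
	"fmt"
	"strings"

	"knative.dev/pkg/controller"
)

// reconcileError decides how a TaskRef lookup error should surface from
// ReconcileKind. A plain error makes the generated Knative reconciler
// requeue the key for another attempt; a permanent error does not.
func reconcileError(err error) error {
	if err == nil {
		return nil
	}
	if strings.Contains(err.Error(), "etcdserver: leader changed") {
		return err // transient: requeue and retry the reconcile
	}
	return controller.NewPermanentError(err) // permanent: do not requeue
}

func main() {
	transient := reconcileError(errors.New("etcdserver: leader changed"))
	missing := reconcileError(errors.New("task not found"))
	fmt.Println(controller.IsPermanentError(transient)) // false: key is requeued
	fmt.Println(controller.IsPermanentError(missing))   // true: no requeue
}
```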

@ghost commented Dec 3, 2021

Awesome, thanks a lot!

/lgtm

@tekton-robot tekton-robot added the lgtm Indicates that a PR is ready to be merged. label Dec 3, 2021
@tekton-robot tekton-robot merged commit 02ab879 into tektoncd:main Dec 3, 2021
@lbernick lbernick deleted the etcd branch January 24, 2022 15:25