
Bubble up the image related error reason to taskrun status #4846

Merged

Conversation

yuzp1996
Contributor

@yuzp1996 yuzp1996 commented May 8, 2022

Changes

Bubble up the image related error reason to taskrun status

When a pod is pending because it failed to pull an image, users
cannot get a clear reason or message from the TaskRun status.

Now I check the pod status and determine whether the error is
image related; if so, the error reason and message bubble up to
the TaskRun status, so users can tell what happened from the
TaskRun status.

Related issue: #4802

/kind bug
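
To sketch the approach in Go (illustrative only: bubbleUpImageError is a hypothetical name, while markStatusRunning and isImageErrorReason are the pkg/pod/status.go helpers that appear in the review below; the merged code may differ):

import (
	corev1 "k8s.io/api/core/v1"

	"github.com/tektoncd/pipeline/pkg/apis/pipeline/v1beta1"
)

// bubbleUpImageError (hypothetical name) copies the waiting reason and
// message of the first container stuck on an image-related error into the
// TaskRun status, so the user sees e.g. ImagePullBackOff instead of a
// generic Pending reason.
func bubbleUpImageError(trs *v1beta1.TaskRunStatus, pod *corev1.Pod) {
	for _, s := range pod.Status.ContainerStatuses {
		if w := s.State.Waiting; w != nil && isImageErrorReason(w.Reason) {
			markStatusRunning(trs, w.Reason, w.Message)
			return
		}
	}
}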

Submitter Checklist

As the author of this PR, please check off the items in this checklist:

  • Docs included if any changes are user facing
  • Tests included if any functionality added or changed
  • Follows the commit message standard
  • Meets the Tekton contributor standards (including
    functionality, content, code)
  • Release notes block below has been filled in
    (if there are no user facing changes, use release note "NONE")

Release Notes

NONE

@tekton-robot tekton-robot added the release-note Denotes a PR that will be considered when it comes time to generate release notes. label May 8, 2022
@tekton-robot tekton-robot added size/S Denotes a PR that changes 10-29 lines, ignoring generated files. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels May 8, 2022
@tekton-robot
Collaborator

Hi @yuzp1996. Thanks for your PR.

I'm waiting for a tektoncd member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@yuzp1996 yuzp1996 changed the title from WIP to [WIP]: fix https://github.com/tektoncd/pipeline/issues/4802 on May 8, 2022
@tekton-robot tekton-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 8, 2022
@abayer
Contributor

abayer commented May 10, 2022

/ok-to-test

@tekton-robot tekton-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels May 10, 2022
@tekton-robot
Collaborator

The following is the coverage report on the affected files.
Say /test pull-tekton-pipeline-go-coverage to re-run this coverage report

File Old Coverage New Coverage Delta
pkg/pod/status.go 90.8% 89.9% -0.8

@yuzp1996 yuzp1996 force-pushed the feat/bubble-pod-status-to-taskrun branch from 91f2482 to 325fb2a on May 12, 2022 01:00
@tekton-robot tekton-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels May 12, 2022
@yuzp1996 yuzp1996 changed the title from [WIP]: fix https://github.com/tektoncd/pipeline/issues/4802 to Bubble up the image related error reason to taskrun status on May 12, 2022
@tekton-robot tekton-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 12, 2022
@tekton-robot
Collaborator

The following is the coverage report on the affected files.
Say /test pull-tekton-pipeline-go-coverage to re-run this coverage report

File Old Coverage New Coverage Delta
pkg/pod/status.go 90.8% 89.7% -1.1

@yuzp1996 yuzp1996 force-pushed the feat/bubble-pod-status-to-taskrun branch from 325fb2a to f5dd728 on May 12, 2022 01:06
@tekton-robot
Collaborator

The following is the coverage report on the affected files.
Say /test pull-tekton-pipeline-go-coverage to re-run this coverage report

File Old Coverage New Coverage Delta
pkg/pod/status.go 90.8% 89.7% -1.1

@tekton-robot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: vdemeester

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@tekton-robot tekton-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. release-note-none Denotes a PR that doesn't merit a release note. kind/bug Categorizes issue or PR as related to a bug. and removed release-note Denotes a PR that will be considered when it comes time to generate release notes. labels May 12, 2022
@afrittoli
Member

@yuzp1996 Thanks for your PR!
I updated the PR description to match the template; please use the template for your next PR, it helps a lot when reviewing PRs and producing new releases.

@yuzp1996
Contributor Author

@yuzp1996 Thanks for your PR! I updated the PR description to match the template; please use the template for your next PR, it helps a lot when reviewing PRs and producing new releases.

Thanks for your reminder and correction. I will use the template for my next PR.

@yuzp1996 yuzp1996 force-pushed the feat/bubble-pod-status-to-taskrun branch from f5dd728 to 46ed31e on May 12, 2022 14:22
@tekton-robot
Collaborator

The following is the coverage report on the affected files.
Say /test pull-tekton-pipeline-go-coverage to re-run this coverage report

File Old Coverage New Coverage Delta
pkg/pod/status.go 90.8% 91.2% 0.5

@yuzp1996
Contributor Author

/retest

4 similar comments
@yuzp1996
Contributor Author

/retest

@yuzp1996
Contributor Author

/retest

@yuzp1996
Contributor Author

/retest

@yuzp1996
Contributor Author

/retest

@yuzp1996
Contributor Author

Hi @vdemeester @afrittoli @imjasonh, sorry to bother you. Is there anything I need to do to get this PR merged?

I have read the contributor documentation, and maybe it still needs the lgtm label?

If there is anything I need to do, please let me know. Thanks!

}

func isImageErrorReason(reason string) bool {
// Reference from https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/images/types.go#L26
Member

would you mind using a permalink here?

Contributor Author

Sorry, what do you mean by permalink? I guess you mean it is not a good choice to put the link here?

Maybe putting the link in the commit message is a better way? The comment and the commit message are the only two places I can think of.

If you have a better choice, I'd love to use it. 😁

Member

https://docs.github.com/en/repositories/working-with-files/using-files/getting-permanent-links-to-files

If the code you're linking to changes, line 26 might not point to the same function anymore. If you replace the link in this comment with a permanent link, it'll always point to what you want :) Just press y when you're on the page you want to link to.
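
As a reference for readers, the function under discussion could be completed along these lines, using the kubelet error-reason strings quoted later in this thread (a sketch only; the exact merged code in pkg/pod/status.go may differ):

// isImageErrorReason reports whether a container waiting reason is one of
// the image-related errors defined by the kubelet.
// Reference from https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/images/types.go#L26
func isImageErrorReason(reason string) bool {
	// These strings match the kubelet image error values quoted below,
	// e.g. ErrImagePullBackOff carries the string "ImagePullBackOff".
	imageErrorReasons := []string{
		"ImagePullBackOff",
		"ImageInspectError",
		"ErrImagePull",
		"ErrImageNeverPull",
		"RegistryUnavailable",
		"InvalidImageName",
	}
	for _, imageReason := range imageErrorReasons {
		if imageReason == reason {
			return true
		}
	}
	return false
}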

@@ -314,6 +317,8 @@ func updateIncompleteTaskRunStatus(trs *v1beta1.TaskRunStatus, pod *corev1.Pod)
markStatusRunning(trs, ReasonExceededNodeResources, "TaskRun Pod exceeded available resources")
case isPodHitConfigError(pod):
markStatusFailure(trs, ReasonCreateContainerConfigError, "Failed to create pod due to config error")
case isPullImageError(pod):
Member

Wondering if it would make sense to mark the taskrun status failed for some of these errors? e.g. "invalidImageName" is unlikely to succeed or be retryable

Contributor Author

Yes, I had thought about this. I tried generating an InvalidImageName error for the pod and checking the pod's phase, and found that the phase is Pending, not Failed.

I think this is because of the following:
"This happens because it is a fixable error from the Pod standpoint (you can mutate the image of a container in a Pod), and thus doesn't seem to be considered a 'terminal' error."

So I thought maybe we could borrow the status of the pod and not mark the taskrun failed?

Member

That could definitely work, and I'm happy to approve the PR as is since that's a reasonable route. However, the user isn't really meant to interact with the TaskRun pod, i.e. the TaskRun controller would be responsible for updating the pod, and it wouldn't know how to do so here, so I think it's most reasonable to mark it as failed.
Curious what others think about this.

Contributor Author

Hi @lbernick, for which kinds of error do you think we should mark the taskrun as failed?

I think ErrInvalidImageName and ErrImageInspect are not fixable errors, so maybe we should mark the taskrun as failed when the pod has one of those two errors?

var (
	// ErrImagePullBackOff - Container image pull failed, kubelet is backing off image pull
	ErrImagePullBackOff = errors.New("ImagePullBackOff")

	// ErrImageInspect - Unable to inspect image
	ErrImageInspect = errors.New("ImageInspectError")

	// ErrImagePull - General image pull error
	ErrImagePull = errors.New("ErrImagePull")

	// ErrImageNeverPull - Required Image is absent on host and PullPolicy is NeverPullImage
	ErrImageNeverPull = errors.New("ErrImageNeverPull")

	// ErrRegistryUnavailable - Get http error when pulling image from registry
	ErrRegistryUnavailable = errors.New("RegistryUnavailable")

	// ErrInvalidImageName - Unable to parse the image name.
	ErrInvalidImageName = errors.New("InvalidImageName")
)

Member

I thought about this a bit more:

  • If we mark the TaskRun as pending still, it's not a great user experience because the user isn't intended to interact with the pod and modify the image name, and it'll eventually fail due to timeout (as I mentioned previously)
  • However, if we mark the TaskRun as failed, the pod will still be running, and if in theory someone edited the image, the pod could run to completion. If that happened, we wouldn't want the TaskRun to display as failed.

I think the best approach for right now is to mark the TaskRun as pending, preserving existing behavior. Then, we should add functionality that cancels the TaskRun when the pod encounters an error such as InvalidImageName (not as part of this PR; we can just create an issue to track it). I think someone is implementing functionality to cancel pods when TaskRuns time out (#4618), which could probably be reused here.

What do you think? If you agree, happy to LGTM.

Member
@chmouel chmouel Feb 3, 2023

I have started to see this issue on clusters, where an InvalidImageName error ends up leaving the run going forever (or until the default timeout of 60 minutes). I think it would be a better user experience if the taskruns were canceled when encountering those.

Member

@chmouel would you mind creating an issue to track this?

Member

yep here: #6105

Member

awesome thank you!

Member

Uhm, I don't think I would like Tekton to support a case where users edit the underlying Pod to fix an ErrInvalidImageName error. I would consider that a permanent error from Tekton's perspective.
However, I do agree that we should not leave a Pod running when the TaskRun is marked as failed.
If the k8s behaviour is to wait even in case of an invalid image name, we would then need to stop the pod somehow when we fail the TaskRun.
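
For context, the isPullImageError check referenced in the hunk above might look like the following sketch (hypothetical body; the merged helper in pkg/pod/status.go may differ):

// isPullImageError reports whether any container in the Pod is waiting
// because of an image-related error, e.g. ImagePullBackOff or
// InvalidImageName, using the isImageErrorReason helper discussed above.
func isPullImageError(pod *corev1.Pod) bool {
	for _, containerStatus := range pod.Status.ContainerStatuses {
		w := containerStatus.State.Waiting
		if w != nil && isImageErrorReason(w.Reason) {
			return true
		}
	}
	return false
}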

When a pod is pending because it failed to pull an image, users
cannot get a clear reason or message from the taskrun status.

Now I check the pod status and determine whether the error is
image related; if so, the error reason and message bubble up to
the taskrun status, so users can tell what happened from the
taskrun status.

Related issue: tektoncd#4802

Signed-off-by: yuzhipeng <yuzp1996@qq.com>
@yuzp1996 yuzp1996 force-pushed the feat/bubble-pod-status-to-taskrun branch from 46ed31e to a0df00c on May 18, 2022 15:10
@tekton-robot
Collaborator

The following is the coverage report on the affected files.
Say /test pull-tekton-pipeline-go-coverage to re-run this coverage report

File Old Coverage New Coverage Delta
pkg/pod/status.go 90.8% 91.2% 0.5

@lbernick
Member

/lgtm

@abayer
Contributor

abayer commented May 19, 2022

/retest

1 similar comment
@yuzp1996
Contributor Author

/retest

Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files.
kind/bug Categorizes issue or PR as related to a bug.
lgtm Indicates that a PR is ready to be merged.
ok-to-test Indicates a non-member PR verified by an org member that is safe to test.
release-note-none Denotes a PR that doesn't merit a release note.
size/M Denotes a PR that changes 30-99 lines, ignoring generated files.

7 participants