
Conversation

@ashish-amarnath

Implement executionhook controller

Signed-off-by: Ashish Amarnath <ashisham@vmware.com>

@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jan 6, 2020
@k8s-ci-robot

Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA.

It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. label Jan 6, 2020
@ashish-amarnath

/check cla

@k8s-ci-robot k8s-ci-robot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Jan 6, 2020
@k8s-ci-robot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ashish-amarnath
To complete the pull request process, please assign kow3ns
You can assign the PR to them by writing /assign @kow3ns in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ashish-amarnath

/check cla

@ashish-amarnath ashish-amarnath force-pushed the exec-hook-controller branch 3 times, most recently from 32ec242 to 470307f on Jan 9, 2020
Commit messages shown for this push:

  • Signed-off-by: Adnan Abdulhussein <prydonius@gmail.com>
  • QoL improvements in makefile
  • untested change
  • (the remaining commits carry only "Signed-off-by: Ashish Amarnath <ashisham@vmware.com>")
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Jan 9, 2020
A follow-up commit was pushed, signed off by Ashish Amarnath <ashisham@vmware.com>.

@kow3ns kow3ns left a comment


Looks good overall. One comment on the API, and the testing is a bit sparse, but it's a great start!

// The last error encountered when executing the action. The hook controller might
// update this field each time it retries the execution.
// +optional
Error *HookError `json:"error,omitempty" protobuf:"bytes,5,opt,name=error"`
@kow3ns

There can be only one error? That is not an unreasonable assumption. However, this pattern seems similar to the Conditions pattern implemented by most built-in types. Is there a reason to go with a singular error over Status.Conditions?

@ashish-amarnath (Author)

The expectation is that the error represents the error condition encountered while executing the hook action on a specific container in a pod. So there is a 1:1 mapping between where the hook action was executed (Pod, Container) and an error condition, if any. For that reason, a singular error was thought a better fit. However, using Status.Conditions would be more appropriate if/when we aggregate the errors into the status of the ExecutionHook object.


I think we were given recommendations not to use Conditions during the API review, although I can't remember the reason behind it. @thockin @jingxu97 do you want to comment on this?
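
For context on this thread, the Conditions-based shape the reviewer is pointing at would replace the singular error with a list of observed conditions. A minimal sketch follows; the package, type, and field names below are hypothetical and are not part of this PR:

// A sketch only: HookConditionType, HookCondition, and the field layout
// are assumptions for illustration, not code from this PR.
package v1alpha1

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// HookConditionType is an open-ended string, so later revisions can add
// new condition types without a schema change.
type HookConditionType string

// HookCondition mirrors the Conditions shape used by most built-in types.
type HookCondition struct {
    // Type of the condition, e.g. "Succeeded" (hypothetical).
    Type HookConditionType `json:"type" protobuf:"bytes,1,opt,name=type"`
    // Status is True, False, or Unknown.
    Status corev1.ConditionStatus `json:"status" protobuf:"bytes,2,opt,name=status"`
    // LastTransitionTime is the last time this condition changed.
    // +optional
    LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty" protobuf:"bytes,3,opt,name=lastTransitionTime"`
    // Reason is a machine-readable, CamelCase reason such as "Timeout".
    // +optional
    Reason string `json:"reason,omitempty" protobuf:"bytes,4,opt,name=reason"`
    // Message is a human-readable explanation of the condition.
    // +optional
    Message string `json:"message,omitempty" protobuf:"bytes,5,opt,name=message"`
}

The per-container 1:1 mapping the author describes still holds under this shape; the list simply leaves room for more than one observation per (Pod, Container) pair.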

// More error types could be added, e.g., Forbidden, Unauthorized, AlreadyInProgress, etc.
const (
// The execution hook times out
Timeout ErrorType = "Timeout"
@kow3ns

Another reason to consider using the Conditions pattern. This proposal might encourage end users to build a state machine off the error types, which (as you point out in the comment) are likely to be extended and to change. Across the rest of the API we generally avoid this, as we've learned that it's brittle.

@ashish-amarnath (Author) commented Jan 17, 2020

Thanks for the suggestion. I will revisit this as I iterate on the implementation. WDYT?


@kow3ns If the Conditions pattern is used, can end users still add more condition types?
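
On that extensibility question: condition types are plain strings, so a new type or reason can be introduced without a schema change. A minimal usage sketch, assuming the metav1.Condition type and meta.SetStatusCondition helper from a later apimachinery release than this PR used (the "HookSucceeded" and "HookThrottled" names are made up for illustration):

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/api/meta"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    var conditions []metav1.Condition

    // Record a failed execution. "HookSucceeded" and the "Timeout" reason
    // are hypothetical names used only for this sketch.
    meta.SetStatusCondition(&conditions, metav1.Condition{
        Type:    "HookSucceeded",
        Status:  metav1.ConditionFalse,
        Reason:  "Timeout",
        Message: "hook command did not complete within the deadline",
    })

    // Adding a brand-new condition type later requires no schema change,
    // because Type is just a string.
    meta.SetStatusCondition(&conditions, metav1.Condition{
        Type:   "HookThrottled",
        Status: metav1.ConditionTrue,
        Reason: "TooManyRetries",
    })

    for _, c := range conditions {
        fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
    }
}

SetStatusCondition fills in LastTransitionTime and replaces an existing condition of the same Type, which steers consumers toward reacting to Status and Reason rather than hard-coding a state machine over an enumerated error type, the brittleness concern raised above.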

@xing-yang

Hi @ashish-amarnath, should this PR be dependent on #5?

@ashish-amarnath

ashish-amarnath commented Jan 17, 2020

@xing-yang PR #4 (this PR) is a superset of the changes in PR #5.
PR #5 was created to break up the API changes from the controller implementation and unit tests, as suggested during last week's meeting.

I would prefer getting PR #4 (this PR) merged and closing PR #5.

@ashish-amarnath ashish-amarnath changed the title from "[WIP] Implement executionhook controller" to "mplement executionhook controller" Jan 17, 2020
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jan 17, 2020
@ashish-amarnath ashish-amarnath changed the title from "mplement executionhook controller" to "Implement executionhook controller" Jan 17, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 16, 2020
@xing-yang

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 17, 2020
@k8s-triage-robot

The lifecycle/frozen label cannot be applied to PRs.

This bot removes lifecycle/frozen from PRs because:

  • Commenting /lifecycle frozen on a PR has not worked since March 2021
  • PRs that remain open for >150 days are unlikely to be easily rebased

You can:

  • Rebase this PR and attempt to get it merged
  • Close this PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/remove-lifecycle frozen

@k8s-ci-robot k8s-ci-robot removed the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Aug 17, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 15, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 15, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot

@k8s-triage-robot: Closed this PR.

In response to this: (the triage robot's /close comment, quoted in full above)

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
