
[OCPCLOUD-802] Add Spot termination notice handler #308

Merged
merged 9 commits on Mar 26, 2020

Conversation


@JoelSpeed commented Mar 20, 2020

This adds a spot termination handler to the AWS actuator image.

This binary will run on nodes and poll the AWS metadata service to check whether the node has been marked for termination.

Based on the spot-instances proposal
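
For illustration, a minimal sketch of the polling loop described above, assuming the conventional EC2 spot termination metadata endpoint and an arbitrary poll interval (the actual handler's endpoint, interval, and error handling may differ):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// terminationEndpoint is an assumption for illustration; the handler may use a
// different metadata path or a configurable URL.
const terminationEndpoint = "http://169.254.169.254/latest/meta-data/spot/termination-time"

// markedForTermination returns true once the metadata service reports a
// termination notice. A 404 means the instance has not been marked yet.
func markedForTermination(client *http.Client) (bool, error) {
	resp, err := client.Get(terminationEndpoint)
	if err != nil {
		return false, fmt.Errorf("could not reach metadata service: %v", err)
	}
	defer resp.Body.Close()

	switch resp.StatusCode {
	case http.StatusNotFound:
		// Instance not marked for termination yet.
		return false, nil
	case http.StatusOK:
		// The endpoint returns a termination time once the notice is issued.
		return true, nil
	default:
		return false, fmt.Errorf("unexpected status code %d", resp.StatusCode)
	}
}

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	for range time.Tick(5 * time.Second) {
		terminated, err := markedForTermination(client)
		if err != nil {
			fmt.Println(err)
			continue
		}
		if terminated {
			fmt.Println("instance marked for termination")
			return
		}
	}
}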

This has a similar purpose to the project https://github.com/aws/aws-node-termination-handler, but differs in a number of ways:

  • This version only concerns itself with spot termination events, not scheduled maintenance
  • This version does not include reporting to Slack/email etc.
  • This version deletes Machines rather than draining them (see the sketch after this list)
    • This leverages the Machine API to handle the draining, since this is already well implemented there
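
As a rough sketch of the deletion step referenced in the list above, assuming a controller-runtime client and a Machine already resolved for the local node (the machinev1 import path below is a guess; the PR's actual wiring may differ):

package termination

import (
	"context"
	"fmt"

	// The machinev1 path is an assumption standing in for the project's
	// Machine API types package; it is not shown in this excerpt.
	machinev1 "github.com/openshift/machine-api-operator/pkg/apis/machine/v1beta1"
	"k8s.io/klog"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// deleteMachine asks the Machine API to remove the Machine backing this node.
// The machine-api controllers then handle cordon and drain before the cloud
// instance goes away, so no drain logic is needed in this handler.
func deleteMachine(ctx context.Context, c client.Client, machine *machinev1.Machine) error {
	klog.V(1).Infof("Instance marked for termination, deleting Machine \"%s/%s\"", machine.Namespace, machine.Name)
	if err := c.Delete(ctx, machine); err != nil {
		return fmt.Errorf("error deleting machine %s/%s: %v", machine.Namespace, machine.Name, err)
	}
	return nil
}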

@openshift-ci-robot added the do-not-merge/hold label (indicates that a PR should not merge because someone has issued a /hold command) on Mar 20, 2020
@JoelSpeed force-pushed the spot-termination branch 2 times, most recently from ae271ed to 866178c on March 20, 2020 17:04
switch resp.StatusCode {
case http.StatusNotFound:
// Instance not terminated yet
klog.V(2).Infof("Instance not marked for termination")
Member

should we prepend the machine/node here?

Author

Semi tempted to switch this over to the logr style structured logging that CAPI uses, WDYT?


💯
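
A small sketch of the logr-style structured logging discussed in this thread, using klogr as the logr implementation; the message text and the machine/node keys here are illustrative, not the PR's final wording:

package main

import (
	"github.com/go-logr/logr"
	"k8s.io/klog/klogr"
)

// logNotTerminated records the "not marked for termination" case with
// structured key/value pairs instead of a formatted string.
func logNotTerminated(logger logr.Logger, namespace, name, node string) {
	logger.V(2).Info("instance not marked for termination",
		"machine", namespace+"/"+name,
		"node", node,
	)
}

func main() {
	logger := klogr.New()
	logNotTerminated(logger, "openshift-machine-api", "example-machine", "example-node")
}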

}

// Will only get here if the termination endpoint returned 200
klog.V(1).Infof("Instance marked for termination, deleting Machine \"%s/%s\"", machine.Namespace, machine.Name)
Member

nit: maybe prepend with the machineName everywhere? see comment above

@@ -90,5 +92,17 @@ func (h *handler) run(ctx context.Context, wg *sync.WaitGroup) error {

// getMachineForNode finds the Machine associated with the Node name given
func (h *handler) getMachineForNode(ctx context.Context) (*machinev1.Machine, error) {
machineList := &machinev1.MachineList{}
err := h.client.List(ctx, machineList)
Member
@enxebre Mar 23, 2020

Author

I don't know if we need a full blown manager, but a DelegatingClient with the cache as you've suggested could definitely be good.

I do wonder whether that's overkill though, since this program only calls the API twice, once to list and once to delete, so the only benefit of adding the cache is on that list.

That said, I am now also thinking about whether we should Get the machine before Deleting; I'm not sure if the resource version needs to be up to date for a Delete, will check.
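
For context, a sketch of the list-and-match pattern that getMachineForNode implements, matching Machines to the local node via status.nodeRef; it reuses the imports from the deletion sketch above and illustrates the pattern rather than quoting the PR's exact code:

// getMachineForNode lists all Machines and returns the one whose
// status.nodeRef matches the given node name.
func getMachineForNode(ctx context.Context, c client.Client, nodeName string) (*machinev1.Machine, error) {
	machineList := &machinev1.MachineList{}
	if err := c.List(ctx, machineList); err != nil {
		return nil, fmt.Errorf("error listing machines: %v", err)
	}

	for i := range machineList.Items {
		machine := &machineList.Items[i]
		if machine.Status.NodeRef != nil && machine.Status.NodeRef.Name == nodeName {
			return machine, nil
		}
	}

	return nil, fmt.Errorf("no machine found for node %q", nodeName)
}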

@enxebre
Member

enxebre commented Mar 23, 2020

Had a first pass, this looks great.
Can you elaborate in the description on how this differs from https://github.com/aws/aws-node-termination-handler and why we chose this implementation?

@JoelSpeed
Author

/retest

@JoelSpeed
Author

@enxebre I believe I have addressed all of your feedback; can I get another review please?

@JoelSpeed
Author

/unhold

@openshift-ci-robot removed the do-not-merge/hold label (indicates that a PR should not merge because someone has issued a /hold command) on Mar 24, 2020
@JoelSpeed
Author

/retest

@enxebre
Member

enxebre commented Mar 26, 2020

/approve

@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: enxebre

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Mar 26, 2020
@alexander-demicev left a comment

/lgtm

@openshift-ci-robot added the lgtm label (indicates that a PR is ready to be merged) on Mar 26, 2020
@openshift-bot

/retest

Please review the full test history for this PR and help us cut down flakes.

@elmiko left a comment

/lgtm

@openshift-bot

/retest

Please review the full test history for this PR and help us cut down flakes.

1 similar comment

Labels
approved: Indicates a PR has been approved by an approver from all required OWNERS files.
lgtm: Indicates that a PR is ready to be merged.

7 participants