
refactor operator to match siblings #64

Merged — 3 commits from bparees:rebase into openshift:master, Jan 28, 2019

Conversation

@bparees (Contributor) commented Jan 22, 2019

No description provided.

@openshift-ci-robot added the do-not-merge/work-in-progress label (indicates that a PR should not merge because it is a work in progress) on Jan 22, 2019
@openshift-ci-robot added the size/XXL label (denotes a PR that changes 1000+ lines, ignoring generated files) on Jan 22, 2019
@openshift-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Jan 22, 2019
@bparees changed the title from "[WIP] refactor operator to match siblings" to "refactor operator to match siblings" on Jan 23, 2019
@openshift-ci-robot removed the do-not-merge/work-in-progress label on Jan 23, 2019
@bparees (Contributor, Author) commented Jan 23, 2019

/assign @adambkaplan

@adambkaplan I think you have the most experience working in this operator, PTAL.

Note that there are definitely potential follow-ups around emitting events when observing config changes, and adding object refs to aid must-gather, but this at least gets us looking like the other operators and keeps us from falling further behind if more config observation code gets added.

@adambkaplan (Contributor) left a comment

A few changes needed, and one question/clarification.

}

func (l Listers) ResourceSyncer() resourcesynccontroller.ResourceSyncer {
	return nil
}
adambkaplan (Contributor):

Looks like openshift/cluster-openshift-apiserver-operator has non-nil returns, which drives the cache sync. See apiserver interfaces.go and observe_config_controller.go.

bparees (Contributor, Author):

This isn't about cache sync. The resourcesyncer copies resources between namespaces, and this operator does not use it anywhere in the code path.
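
For context, a minimal sketch of what library-go's ResourceSyncer is for: you declare a destination/source pair once and the sync controller keeps the destination copy up to date across namespaces. The namespaces and secret name below are hypothetical; this operator has no such pair, hence the nil return above.

```go
package operator

import (
	"github.com/openshift/library-go/pkg/operator/resourcesynccontroller"
)

// mirrorServingCert illustrates the cross-namespace copy that ResourceSyncer
// exists for. The locations here are hypothetical examples, not anything this
// operator actually syncs.
func mirrorServingCert(syncer resourcesynccontroller.ResourceSyncer) error {
	return syncer.SyncSecret(
		resourcesynccontroller.ResourceLocation{Namespace: "openshift-controller-manager", Name: "serving-cert"},          // destination
		resourcesynccontroller.ResourceLocation{Namespace: "openshift-controller-manager-operator", Name: "serving-cert"}, // source
	)
}
```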

BuildConfigLister: configInformers.Config().V1().Builds().Lister(),
BuildConfigSynced: configInformers.Config().V1().Builds().Informer().HasSynced,
ConfigMapLister:   kubeInformersForOperatorNamespace.Core().V1().ConfigMaps().Lister(),
ConfigMapSynced:   kubeInformersForOperatorNamespace.Core().V1().ConfigMaps().Informer().HasSynced,
adambkaplan (Contributor):

Two items:

  1. Pass in a ResourceSyncer?
  2. Pre-run cache sync for the operator config object?

See cluster-openshift-apiserver-operator.
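
For reference, a minimal sketch of the pre-run cache sync suggested in item 2, assuming client-go's cache.WaitForCacheSync and the HasSynced funcs wired into the Listers in the diff above; the Listers shape here is trimmed to just what the sketch needs.

```go
package operator

import (
	"k8s.io/client-go/tools/cache"
)

// Listers mirrors the fields shown in the diff above; only the HasSynced
// funcs matter for this sketch.
type Listers struct {
	BuildConfigSynced cache.InformerSynced
	ConfigMapSynced   cache.InformerSynced
}

// waitForCaches blocks until the informer caches behind the listers have
// synced (or stopCh closes), so the controller's sync loop never reads from
// a cold cache.
func waitForCaches(l Listers, stopCh <-chan struct{}) bool {
	return cache.WaitForCacheSync(stopCh, l.BuildConfigSynced, l.ConfigMapSynced)
}
```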

bparees (Contributor, Author):

  1. We don't have a resourcesyncer to pass because we aren't syncing any resources across namespaces.

  2. I didn't see an obvious use or value in the pre-run cache sync, given that we need to check it in the observers anyway (see the sketch below).
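
A hedged sketch of the in-observer check referred to in item 2; the name and signature are only loosely modeled on library-go config observers, not taken from this PR's diff.

```go
package operator

// observeBuilds sketches the guard described above: until the informer cache
// has synced, return the previously observed config unchanged rather than
// observing from a cold cache.
func observeBuilds(hasSynced func() bool, existing map[string]interface{}) (map[string]interface{}, []error) {
	if !hasSynced() {
		return existing, nil // cache still cold; report nothing new
	}
	// ...read the Build cluster config from its lister and project the
	// relevant fields into the observed config here...
	return existing, nil
}
```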

}

kubeInformersForOperatorNamespace.Core().V1().ConfigMaps().Informer().AddEventHandler(c.EventHandler())
configInformers.Config().V1().Images().Informer().AddEventHandler(c.EventHandler())
adambkaplan (Contributor):

Need an event handler for Builds, too.
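
Following the pattern of the Images registration in the snippet above, the missing wiring would presumably be a one-liner like this (illustrative sketch, not the PR's actual diff):

```go
configInformers.Config().V1().Builds().Informer().AddEventHandler(c.EventHandler())
```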

bparees (Contributor, Author):

Well, that is interesting, because the existing code didn't have one either, so I'm not sure how this was working...

adambkaplan (Contributor):

Sounds like we need an integration test. Given that we'll need to change cluster config to drive the test, I'm not sure we have a clear path to build these tests at present.

bparees (Contributor, Author):

It can be done in our e2e suite.

adambkaplan (Contributor):

openshift/origin#21864 starts us down that path. Each test adds 1-2 minutes to the total suite.

bparees (Contributor, Author):

I actually meant that it can be added to the e2e suite in this repo, which is what I ended up doing (a test that sets the cluster config and ensures it propagates to the operand config).

adambkaplan (Contributor):

Yes, but I think this gets us halfway there. IMO a full e2e should:

  1. Modify the cluster config
  2. Verify the operator processed the config change
  3. Verify the config change rolls out to the operand
  4. Verify the expected behavior changed (e.g. with BuildDefaults)

I'd love to test steps 3 and 4 in this repo, but at the moment origin has the better test frameworks on this front.

bparees (Contributor, Author):

Yes, that would be a full product e2e; what is here is effectively an e2e for the operator behavior.

I don't want to start loading up the controller-manager operator e2e suite with tests that depend on builds working or on build-controller behavior. Origin is the right place for that level of testing.

We should restrict operator e2es to tests that exercise or depend only on the operator behavior. Otherwise we're just setting ourselves up for:

  1. flakes
  2. troublesome multi-step merges, because changing behavior in one place means changing a test somewhere else, etc.

@bparees (Contributor, Author) commented Jan 24, 2019

@adambkaplan added pre-run cache syncing, though I'm not really sure what purpose it serves.

Also added e2e tests to confirm we are observing the Build and Image cluster config objects properly (see the sketch below).

I still don't think we have any use for the resourcesyncer, so I have left that out.
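
A minimal sketch of the shape such an e2e can take, assuming a recent openshift/client-go config client; the checkObservedConfig condition and the polling details are hypothetical and not this PR's actual test.

```go
package e2e

import (
	"context"
	"testing"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"

	configclient "github.com/openshift/client-go/config/clientset/versioned"
)

// verifyBuildObservation mutates the cluster-scoped Build config and polls
// until checkObservedConfig reports that the operator has picked the change
// up in its observed config. How the observed config is read back is left
// to the caller.
func verifyBuildObservation(t *testing.T, client configclient.Interface, checkObservedConfig wait.ConditionFunc) {
	ctx := context.Background()

	build, err := client.ConfigV1().Builds().Get(ctx, "cluster", metav1.GetOptions{})
	if err != nil {
		t.Fatal(err)
	}

	// Flip a field the operator is expected to observe.
	build.Spec.BuildDefaults.Env = []corev1.EnvVar{{Name: "E2E_TEST", Value: "observed"}}
	if _, err := client.ConfigV1().Builds().Update(ctx, build, metav1.UpdateOptions{}); err != nil {
		t.Fatal(err)
	}

	// Wait for the change to show up in the operator's observed config.
	if err := wait.PollImmediate(time.Second, time.Minute, checkObservedConfig); err != nil {
		t.Fatalf("config change was never observed: %v", err)
	}
}
```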

@adambkaplan (Contributor) left a comment

/lgtm

@openshift-ci-robot added the lgtm label (indicates that a PR is ready to be merged) on Jan 28, 2019
@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: adambkaplan, bparees

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-bot

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-merge-robot openshift-merge-robot merged commit 1273b58 into openshift:master Jan 28, 2019
@bparees bparees deleted the rebase branch January 28, 2019 19:33
vrutkovs pushed a commit to vrutkovs/cluster-openshift-controller-manager-operator that referenced this pull request Oct 12, 2021
Bug 1928141: kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown