
Bug 1965283: Static Resources Controller for Sync #216

Merged
8 changes: 4 additions & 4 deletions go.mod
Original file line number Diff line number Diff line change
Expand Up @@ -3,22 +3,22 @@ module github.com/openshift/cluster-openshift-controller-manager-operator
go 1.13

require (
github.com/fsnotify/fsnotify v1.4.9 // indirect
github.com/ghodss/yaml v1.0.0
github.com/go-bindata/go-bindata v3.1.2+incompatible
github.com/google/gofuzz v1.2.0 // indirect
github.com/grpc-ecosystem/go-grpc-middleware v1.1.0 // indirect
github.com/grpc-ecosystem/grpc-gateway v1.11.3 // indirect
github.com/openshift/api v0.0.0-20210416130433-86964261530c
github.com/openshift/build-machinery-go v0.0.0-20210209125900-0da259a2c359
github.com/openshift/client-go v0.0.0-20201020074620-f8fd44879f7c
github.com/openshift/library-go v0.0.0-20201102091359-c4fa0f5b3a08
github.com/openshift/client-go v0.0.0-20210331195552-cf6c2669e01f
github.com/openshift/library-go v0.0.0-20210511143654-b9c317a319e0
github.com/prometheus/client_golang v1.7.1
github.com/spf13/cobra v1.0.0
github.com/spf13/cobra v1.1.1
github.com/spf13/pflag v1.0.5
go.uber.org/zap v1.11.0 // indirect
k8s.io/api v0.21.0
k8s.io/apimachinery v0.21.0
k8s.io/apiserver v0.21.0-rc.0 // indirect
k8s.io/client-go v0.21.0
k8s.io/component-base v0.21.0
k8s.io/klog/v2 v2.8.0
Expand Down
119 changes: 64 additions & 55 deletions go.sum

Large diffs are not rendered by default.

52 changes: 13 additions & 39 deletions pkg/operator/operator.go
Expand Up @@ -6,18 +6,16 @@ import (
"time"

corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/equality"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
coreclientv1 "k8s.io/client-go/kubernetes/typed/core/v1"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/flowcontrol"
"k8s.io/client-go/util/workqueue"
"k8s.io/klog/v2"

operatorapiv1 "github.com/openshift/api/operator/v1"
configinformerv1 "github.com/openshift/client-go/config/informers/externalversions/config/v1"
proxyvclient1 "github.com/openshift/client-go/config/listers/config/v1"
operatorclientv1 "github.com/openshift/client-go/operator/clientset/versioned/typed/operator/v1"
Expand All @@ -38,7 +36,8 @@ type OpenShiftControllerManagerOperator struct {
operatorConfigClient operatorclientv1.OperatorV1Interface
proxyLister proxyvclient1.ProxyLister

kubeClient kubernetes.Interface
kubeClient kubernetes.Interface
configMapsGetter coreclientv1.ConfigMapsGetter

// queue only ever has one item, but it has nice error handling backoff/retry semantics
queue workqueue.RateLimitingInterface
Expand All @@ -51,7 +50,7 @@ func NewOpenShiftControllerManagerOperator(
targetImagePullSpec string,
operatorConfigInformer operatorinformersv1.OpenShiftControllerManagerInformer,
proxyInformer configinformerv1.ProxyInformer,
kubeInformersForOpenshiftControllerManager informers.SharedInformerFactory,
kubeInformers v1helpers.KubeInformersForNamespaces,
Contributor Author: We use this struct with the static resource sync controller and the resource sync controller, too.

Contributor: @adambkaplan - would it be possible to specify the repo(s) / package(s) where the static resource sync controller and resource sync controller are located? I'm guessing they are in library-go, but some quick scans are not proving fruitful to me. I'd like to do some cross referencing as part of the review. Thanks.

Contributor Author: yep, that is it

operatorConfigClient operatorclientv1.OperatorV1Interface,
kubeClient kubernetes.Interface,
recorder events.Recorder,
Expand All @@ -61,20 +60,24 @@ func NewOpenShiftControllerManagerOperator(
operatorConfigClient: operatorConfigClient,
proxyLister: proxyInformer.Lister(),
kubeClient: kubeClient,
configMapsGetter: v1helpers.CachedConfigMapGetter(kubeClient.CoreV1(), kubeInformers),
queue: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "KubeApiserverOperator"),
rateLimiter: flowcontrol.NewTokenBucketRateLimiter(0.05 /*3 per minute*/, 4),
recorder: recorder,
}
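
The `v1helpers.CachedConfigMapGetter` wired in above serves ConfigMap reads from the shared informer caches instead of round-tripping to the API server on every `Get`. A rough, self-contained sketch of the caching idea (the types below are hypothetical stand-ins for illustration, not library-go's actual implementation):

```go
package main

import "fmt"

// ConfigMap is a stand-in for corev1.ConfigMap (hypothetical, for illustration).
type ConfigMap struct {
	Namespace, Name string
	Data            map[string]string
}

// cachedGetter answers Get calls from an in-memory store (the informer's
// cache) rather than issuing a request to the API server each time.
type cachedGetter struct {
	cache map[string]*ConfigMap // keyed by "namespace/name"
}

func (g *cachedGetter) Get(namespace, name string) (*ConfigMap, error) {
	if cm, ok := g.cache[namespace+"/"+name]; ok {
		return cm, nil
	}
	return nil, fmt.Errorf("configmap %s/%s not found in cache", namespace, name)
}

func main() {
	getter := &cachedGetter{cache: map[string]*ConfigMap{
		"openshift-controller-manager/config": {
			Namespace: "openshift-controller-manager",
			Name:      "config",
			Data:      map[string]string{"config.yaml": "{}"},
		},
	}}
	cm, err := getter.Get("openshift-controller-manager", "config")
	fmt.Println(cm.Name, err == nil) // config true
}
```

Because the cache is fed by the same informers that drive the event handlers below, reads stay cheap even when sync runs frequently.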

operatorConfigInformer.Informer().AddEventHandler(c.eventHandler())
proxyInformer.Informer().AddEventHandler(c.eventHandler())
kubeInformersForOpenshiftControllerManager.Core().V1().ConfigMaps().Informer().AddEventHandler(c.eventHandler())
kubeInformersForOpenshiftControllerManager.Core().V1().ServiceAccounts().Informer().AddEventHandler(c.eventHandler())
kubeInformersForOpenshiftControllerManager.Core().V1().Services().Informer().AddEventHandler(c.eventHandler())
kubeInformersForOpenshiftControllerManager.Apps().V1().Deployments().Informer().AddEventHandler(c.eventHandler())

targetInformers := kubeInformers.InformersFor(util.TargetNamespace)

targetInformers.Core().V1().ConfigMaps().Informer().AddEventHandler(c.eventHandler())
targetInformers.Core().V1().ServiceAccounts().Informer().AddEventHandler(c.eventHandler())
targetInformers.Core().V1().Services().Informer().AddEventHandler(c.eventHandler())
targetInformers.Apps().V1().Deployments().Informer().AddEventHandler(c.eventHandler())

// we only watch some namespaces
kubeInformersForOpenshiftControllerManager.Core().V1().Namespaces().Informer().AddEventHandler(c.namespaceEventHandler())
targetInformers.Core().V1().Namespaces().Informer().AddEventHandler(c.namespaceEventHandler())

// set this bit so the library-go code knows we opt-out from supporting the "unmanaged" state.
management.SetOperatorAlwaysManaged()
Expand All @@ -89,35 +92,6 @@ func (c OpenShiftControllerManagerOperator) sync() error {
if err != nil {
return err
}
// manage status
originalOperatorConfig := operatorConfig.DeepCopy()
reasonString := ""
messageString := ""
switch operatorConfig.Spec.ManagementState {
case operatorapiv1.Removed:
fallthrough
// we equally do not allow removed/unmanaged
case operatorapiv1.Unmanaged:
reasonString = fmt.Sprintf("%sUnsupported", string(operatorConfig.Spec.ManagementState))
messageString = fmt.Sprintf("the controller manager spec was set to %s state, but that is unsupported, and has no effect on this condition", string(operatorConfig.Spec.ManagementState))

// as we are ignoring / not supporting unmanaged/removed, we still process any other inputs to the sync/reconciliation
// unlike what we used to do
case operatorapiv1.Managed:
// we want to empty out the reason/message string if transitioning from the other phases so the default setting above is good
}

for _, condition := range operatorConfig.Status.Conditions {
// do not change the current status as part of noting that unmanaged/removed are not supported
condition.Reason = reasonString
condition.Message = messageString
v1helpers.SetOperatorCondition(&operatorConfig.Status.Conditions, condition)
}
if !equality.Semantic.DeepEqual(operatorConfig.Status, originalOperatorConfig.Status) {
if _, err := c.operatorConfigClient.OpenShiftControllerManagers().UpdateStatus(context.TODO(), operatorConfig, metav1.UpdateOptions{}); err != nil {
return err
}
}
Contributor Author: The status sync controller takes care of the CVO status reporting for us if the operator moves to Unmanaged or Removed.

forceRequeue, err := syncOpenShiftControllerManager_v311_00_to_latest(c, operatorConfig)
if forceRequeue && err != nil {
Expand Down
69 changes: 66 additions & 3 deletions pkg/operator/starter.go
Expand Up @@ -8,6 +8,7 @@ import (

metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"

configv1 "github.com/openshift/api/config/v1"
configclient "github.com/openshift/client-go/config/clientset/versioned"
Expand All @@ -16,20 +17,33 @@ import (
operatorclientv1 "github.com/openshift/client-go/operator/clientset/versioned/typed/operator/v1"
operatorinformers "github.com/openshift/client-go/operator/informers/externalversions"
"github.com/openshift/library-go/pkg/controller/controllercmd"
"github.com/openshift/library-go/pkg/operator/resource/resourceapply"
"github.com/openshift/library-go/pkg/operator/resourcesynccontroller"
"github.com/openshift/library-go/pkg/operator/staticresourcecontroller"
"github.com/openshift/library-go/pkg/operator/status"
"github.com/openshift/library-go/pkg/operator/v1helpers"

configobservationcontroller "github.com/openshift/cluster-openshift-controller-manager-operator/pkg/operator/configobservation/configobservercontroller"
"github.com/openshift/cluster-openshift-controller-manager-operator/pkg/operator/usercaobservation"
"github.com/openshift/cluster-openshift-controller-manager-operator/pkg/operator/v311_00_assets"
"github.com/openshift/cluster-openshift-controller-manager-operator/pkg/util"
)

func RunOperator(ctx context.Context, controllerConfig *controllercmd.ControllerContext) error {
kubeClient, err := kubernetes.NewForConfig(controllerConfig.ProtoKubeConfig)
// Increase QPS and burst to avoid client-side rate limits when reconciling RBAC API objects.
// See TODO below for the StaticResourceController
highRateLimitProtoKubeConfig := rest.CopyConfig(controllerConfig.ProtoKubeConfig)
if highRateLimitProtoKubeConfig.QPS < 50 {
highRateLimitProtoKubeConfig.QPS = 50
}
if highRateLimitProtoKubeConfig.Burst < 100 {
highRateLimitProtoKubeConfig.Burst = 100
}
kubeClient, err := kubernetes.NewForConfig(highRateLimitProtoKubeConfig)
Contributor Author: The static resource sync controller unfortunately doesn't use listers when reconciling RBAC objects. This is problematic for ocm because we reconcile a lot of roles/clusterroles and their role bindings. Long term we should be the ones to improve this in library-go, but for the interim we can increase the QPS and reduce the instances where the sync controller is throttled.

Contributor: If so, a commit-specific link to the file in question might help us to quickly remember where to go if we ever get to this TODO.

Contributor Author: I think if I update the comment to use StaticResourceController then we should be ok (it is initialized later in this method).

Contributor: Sorry, I'm not following how updating the comment to use StaticResourceController pertains to my question around confirming where the shortcoming in library-go is, and the ask to somehow point to the location in that code for future reference.

Contributor Author: Updated the comments - hopefully things are a little bit clearer.

if err != nil {
return err
}
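
For context on the QPS/Burst bump: client-go throttles requests with a token-bucket limiter, where QPS is the refill rate and Burst is the bucket size, so a reconcile pass that touches many RBAC objects at once drains the bucket and then blocks. A self-contained illustration of those semantics (a simplification, not client-go's actual implementation; client-go's defaults are QPS 5 / Burst 10):

```go
package main

import "fmt"

// tokenBucket models client-go's QPS/Burst throttling: the bucket holds up to
// burst tokens and refills at qps tokens per second; each request takes one.
type tokenBucket struct {
	qps    float64
	burst  float64
	tokens float64
}

// advance refills the bucket as if `seconds` of wall time elapsed.
func (b *tokenBucket) advance(seconds float64) {
	b.tokens += b.qps * seconds
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
}

// tryAcquire consumes a token if one is available; otherwise the real client
// would block (this is where "client-side throttling" delays come from).
func (b *tokenBucket) tryAcquire() bool {
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	// Default-like limits throttle quickly when applying many RBAC objects...
	def := &tokenBucket{qps: 5, burst: 10, tokens: 10}
	served := 0
	for def.tryAcquire() {
		served++
	}
	fmt.Println("default-sized burst serves:", served) // 10

	// ...while the raised limits (QPS 50 / Burst 100) absorb the same spike.
	raised := &tokenBucket{qps: 50, burst: 100, tokens: 100}
	served = 0
	for raised.tryAcquire() {
		served++
	}
	fmt.Println("raised burst serves:", served) // 100
}
```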

operatorClient, err := operatorclient.NewForConfig(controllerConfig.KubeConfig)
if err != nil {
return err
Expand All @@ -39,15 +53,26 @@ func RunOperator(ctx context.Context, controllerConfig *controllercmd.Controller
return err
}

kubeInformers := v1helpers.NewKubeInformersForNamespaces(kubeClient, util.TargetNamespace, util.OperatorNamespace, util.UserSpecifiedGlobalConfigNamespace)
// Create kube informers for namespaces that the operator reconciles content from or to.
// The empty string "" adds informers for cluster-scoped resources.
kubeInformers := v1helpers.NewKubeInformersForNamespaces(kubeClient,
"",
Contributor: let's put a comment in code to indicate this means cluster scoped, as you clarified in the PR comment 09eb355#r644180187

Contributor: Or go with #216 (comment) instead

Contributor Author: I'll add a comment to say "empty means cluster-scoped"

Contributor: sounds good, thanks

Contributor Author: updated

util.TargetNamespace,
util.OperatorNamespace,
util.UserSpecifiedGlobalConfigNamespace,
util.InfraNamespace,
metav1.NamespaceSystem,
Comment on lines +59 to +64

Contributor Author: New additions to this list:

  • Cluster scoped ("")
  • openshift-infra
  • kube-system

)
operatorConfigInformers := operatorinformers.NewSharedInformerFactory(operatorClient, 10*time.Minute)
configInformers := configinformers.NewSharedInformerFactory(configClient, 10*time.Minute)

// OpenShiftControllerManagerOperator reconciles the state of the openshift-controller-manager
// DaemonSet and associated ConfigMaps.
operator := NewOpenShiftControllerManagerOperator(
os.Getenv("IMAGE"),
operatorConfigInformers.Operator().V1().OpenShiftControllerManagers(),
configInformers.Config().V1().Proxies(),
kubeInformers.InformersFor(util.TargetNamespace),
kubeInformers,
operatorClient.OperatorV1(),
kubeClient,
controllerConfig.EventRecorder,
Expand All @@ -69,6 +94,8 @@ func RunOperator(ctx context.Context, controllerConfig *controllercmd.Controller
controllerConfig.EventRecorder,
)

// ConfigObserver observes the configuration state from cluster config objects and transforms
// them into configuration used by openshift-controller-manager
configObserver := configobservationcontroller.NewConfigObserver(
opClient,
operatorConfigInformers,
Expand All @@ -89,6 +116,9 @@ func RunOperator(ctx context.Context, controllerConfig *controllercmd.Controller
openshiftControllerManagers: operatorClient.OperatorV1().OpenShiftControllerManagers(),
version: os.Getenv("RELEASE_VERSION"),
}

// ClusterOperatorStatusController aggregates the conditions in our openshiftcontrollermanager
// object to the corresponding ClusterOperator object.
clusterOperatorStatus := status.NewClusterOperatorStatusController(
util.ClusterOperatorName,
[]configv1.ObjectReference{
Expand All @@ -105,10 +135,43 @@ func RunOperator(ctx context.Context, controllerConfig *controllercmd.Controller
controllerConfig.EventRecorder,
)

// StaticResourceController uses library-go's resourceapply package to reconcile a set of YAML
// manifests against a cluster.
// TODO: enhance resourceapply to use listers for RBAC APIs.
staticResourceController := staticresourcecontroller.NewStaticResourceController(
Contributor Author: This is a simple controller that takes a bag of YAMLs and applies them, and reports status to the operator config object's status. That gets rolled up to the CVO object via the status sync controller.

Contributor Author: added the TODO here to make it clear where in library-go we need to add a future enhancement.

"OpenshiftControllerManagerStaticResources",
v311_00_assets.Asset,
[]string{
"v3.11.0/openshift-controller-manager/informer-clusterrole.yaml",
"v3.11.0/openshift-controller-manager/informer-clusterrolebinding.yaml",
"v3.11.0/openshift-controller-manager/ingress-to-route-controller-clusterrole.yaml",
"v3.11.0/openshift-controller-manager/ingress-to-route-controller-clusterrolebinding.yaml",
"v3.11.0/openshift-controller-manager/tokenreview-clusterrole.yaml",
"v3.11.0/openshift-controller-manager/tokenreview-clusterrolebinding.yaml",
"v3.11.0/openshift-controller-manager/leader-role.yaml",
"v3.11.0/openshift-controller-manager/leader-rolebinding.yaml",
"v3.11.0/openshift-controller-manager/ns.yaml",
"v3.11.0/openshift-controller-manager/old-leader-role.yaml",
"v3.11.0/openshift-controller-manager/old-leader-rolebinding.yaml",
"v3.11.0/openshift-controller-manager/separate-sa-role.yaml",
"v3.11.0/openshift-controller-manager/separate-sa-rolebinding.yaml",
"v3.11.0/openshift-controller-manager/sa.yaml",
"v3.11.0/openshift-controller-manager/svc.yaml",
"v3.11.0/openshift-controller-manager/servicemonitor-role.yaml",
"v3.11.0/openshift-controller-manager/servicemonitor-rolebinding.yaml",
"v3.11.0/openshift-controller-manager/buildconfigstatus-clusterrole.yaml",
"v3.11.0/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml",
},
resourceapply.NewKubeClientHolder(kubeClient),
opClient,
controllerConfig.EventRecorder,
).AddKubeInformers(kubeInformers)

operatorConfigInformers.Start(ctx.Done())
kubeInformers.Start(ctx.Done())
configInformers.Start(ctx.Done())

go staticResourceController.Run(ctx, 1)
go operator.Run(ctx, 1)
go resourceSyncer.Run(ctx, 1)
go configObserver.Run(ctx, 1)
Expand Down
43 changes: 5 additions & 38 deletions pkg/operator/sync_openshiftcontrollermanager_v311_00.go
Expand Up @@ -12,7 +12,6 @@ import (
"k8s.io/apimachinery/pkg/api/equality"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/sets"
"k8s.io/client-go/kubernetes"
appsclientv1 "k8s.io/client-go/kubernetes/typed/apps/v1"
coreclientv1 "k8s.io/client-go/kubernetes/typed/core/v1"
Expand All @@ -38,43 +37,11 @@ func syncOpenShiftControllerManager_v311_00_to_latest(c OpenShiftControllerManag
errors := []error{}
var err error
operatorConfig := originalOperatorConfig.DeepCopy()
clientHolder := resourceapply.NewKubeClientHolder(c.kubeClient)
directResourceResults := resourceapply.ApplyDirectly(clientHolder, c.recorder, v311_00_assets.Asset,
"v3.11.0/openshift-controller-manager/informer-clusterrole.yaml",
"v3.11.0/openshift-controller-manager/informer-clusterrolebinding.yaml",
"v3.11.0/openshift-controller-manager/ingress-to-route-controller-clusterrole.yaml",
"v3.11.0/openshift-controller-manager/ingress-to-route-controller-clusterrolebinding.yaml",
"v3.11.0/openshift-controller-manager/tokenreview-clusterrole.yaml",
"v3.11.0/openshift-controller-manager/tokenreview-clusterrolebinding.yaml",
"v3.11.0/openshift-controller-manager/leader-role.yaml",
"v3.11.0/openshift-controller-manager/leader-rolebinding.yaml",
"v3.11.0/openshift-controller-manager/ns.yaml",
"v3.11.0/openshift-controller-manager/old-leader-role.yaml",
"v3.11.0/openshift-controller-manager/old-leader-rolebinding.yaml",
"v3.11.0/openshift-controller-manager/separate-sa-role.yaml",
"v3.11.0/openshift-controller-manager/separate-sa-rolebinding.yaml",
"v3.11.0/openshift-controller-manager/sa.yaml",
"v3.11.0/openshift-controller-manager/svc.yaml",
"v3.11.0/openshift-controller-manager/servicemonitor-role.yaml",
"v3.11.0/openshift-controller-manager/servicemonitor-rolebinding.yaml",
"v3.11.0/openshift-controller-manager/buildconfigstatus-clusterrole.yaml",
"v3.11.0/openshift-controller-manager/buildconfigstatus-clusterrolebinding.yaml",
)
resourcesThatForceRedeployment := sets.NewString("v3.11.0/openshift-controller-manager/sa.yaml")
forceRollout := false

for _, currResult := range directResourceResults {
if currResult.Error != nil {
errors = append(errors, fmt.Errorf("%q (%T): %v", currResult.File, currResult.Type, currResult.Error))
continue
}

if currResult.Changed && resourcesThatForceRedeployment.Has(currResult.File) {
forceRollout = true
}
}
// TODO - use labels/annotations to force a daemonset rollout
Contributor: IIRC this is a different TODO comment from what you had in your last PR in the same spot, and this is a TODO comment you want to keep, correct @adambkaplan ?

Contributor Author: Yes - we are passing forceRollout to a method that has been deprecated. We should use labels or annotations to drive rollouts.

Contributor: thanks for the confirmation

forceRollout := false

_, configMapModified, err := manageOpenShiftControllerManagerConfigMap_v311_00_to_latest(c.kubeClient, c.kubeClient.CoreV1(), c.recorder, operatorConfig)
_, configMapModified, err := manageOpenShiftControllerManagerConfigMap_v311_00_to_latest(c.kubeClient, c.configMapsGetter, c.recorder, operatorConfig)
if err != nil {
errors = append(errors, fmt.Errorf("%q: %v", "configmap", err))
}
Expand All @@ -84,12 +51,12 @@ func syncOpenShiftControllerManager_v311_00_to_latest(c OpenShiftControllerManag
errors = append(errors, fmt.Errorf("%q: %v", "client-ca", err))
}

_, serviceCAModified, err := manageOpenShiftServiceCAConfigMap_v311_00_to_latest(c.kubeClient, c.kubeClient.CoreV1(), c.recorder)
_, serviceCAModified, err := manageOpenShiftServiceCAConfigMap_v311_00_to_latest(c.kubeClient, c.configMapsGetter, c.recorder)
if err != nil {
errors = append(errors, fmt.Errorf("%q: %v", "openshift-service-ca", err))
}

_, globalCAModified, err := manageOpenShiftGlobalCAConfigMap_v311_00_to_latest(c.kubeClient, c.kubeClient.CoreV1(), c.recorder)
_, globalCAModified, err := manageOpenShiftGlobalCAConfigMap_v311_00_to_latest(c.kubeClient, c.configMapsGetter, c.recorder)
if err != nil {
errors = append(errors, fmt.Errorf("%q: %v", "openshift-global-ca", err))
}
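
The labels/annotations approach named in the TODO above usually means hashing the dependency's content into the pod template's annotations: any content change alters the template, and Kubernetes rolls the workload out on its own, with no hand-plumbed force flag. A sketch of that pattern (the annotation key is made up for illustration):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// specHash returns a short, stable fingerprint of a dependency's content.
func specHash(content []byte) string {
	sum := sha256.Sum256(content)
	return hex.EncodeToString(sum[:8])
}

// setRolloutAnnotation records the hash on a pod template's annotations.
// A changed hash changes the template, which triggers a rollout.
func setRolloutAnnotation(annotations map[string]string, key string, content []byte) (changed bool) {
	h := specHash(content)
	if annotations[key] == h {
		return false
	}
	annotations[key] = h
	return true
}

func main() {
	ann := map[string]string{}
	key := "operator.openshift.io/dep-sa" // hypothetical annotation key

	fmt.Println(setRolloutAnnotation(ann, key, []byte("sa-v1"))) // true: first write
	fmt.Println(setRolloutAnnotation(ann, key, []byte("sa-v1"))) // false: unchanged
	fmt.Println(setRolloutAnnotation(ann, key, []byte("sa-v2"))) // true: content changed
}
```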
Expand Down
Expand Up @@ -214,6 +214,7 @@ func TestProgressingCondition(t *testing.T) {

operator := OpenShiftControllerManagerOperator{
kubeClient: kubeClient,
configMapsGetter: kubeClient.CoreV1(),
proxyLister: proxyLister,
recorder: events.NewInMemoryRecorder(""),
operatorConfigClient: controllerManagerOperatorClient.OperatorV1(),
Expand Down
1 change: 1 addition & 0 deletions pkg/util/consts.go
Expand Up @@ -6,6 +6,7 @@ const (
MachineSpecifiedGlobalConfigNamespace = "openshift-config-managed"
TargetNamespace = "openshift-controller-manager"
OperatorNamespace = "openshift-controller-manager-operator"
InfraNamespace = "openshift-infra"
Contributor: As an alternative to my comment at 09eb355#diff-0d623dfd885adb20f991bda4c2453aebd732ca6dbb4d1d4be6e79805c3b48de6R57 we could add a constant here named something like ClusterScoped that is set to the empty string ""

VersionAnnotation = "release.openshift.io/version"
ClusterOperatorName = "openshift-controller-manager"
)
3 changes: 0 additions & 3 deletions vendor/github.com/certifi/gocertifi/LICENSE

This file was deleted.