controller/operators: label ConfigMaps, don't assume they are
In the past, OLM moved to using a label selector to filter the informers
that track ConfigMaps in the cluster. However, when this was done,
pre-existing ConfigMaps on the cluster were not labelled. Therefore, old
clusters contain a mix of data: ConfigMaps that OLM created and manages
but has now forgotten, since they are missing the label, and conformant
objects that carry it.

We use ConfigMaps to determine whether a Job should be labelled: if a
Job has an OwnerReference to a ConfigMap, and that ConfigMap has an
OwnerReference to an OLM GVK, we know the Job was created and is managed
by OLM.
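The two-hop ownership check can be sketched with simplified stand-in types (the `OwnerRef`/`Object` structs, the kind list, and the helper names below are illustrative, not OLM's actual API):

```go
package main

import "fmt"

// OwnerRef is a simplified stand-in for metav1.OwnerReference.
type OwnerRef struct {
	Kind string
	Name string
}

// Object is a simplified stand-in for a cluster object with owners.
type Object struct {
	Kind   string
	Name   string
	Owners []OwnerRef
}

// olmKinds approximates the set of OLM-owned kinds (illustrative).
var olmKinds = map[string]bool{
	"ClusterServiceVersion": true,
	"CatalogSource":         true,
}

// hasOLMOwner reports whether any OwnerReference points at an OLM kind.
func hasOLMOwner(obj Object) bool {
	for _, ref := range obj.Owners {
		if olmKinds[ref.Kind] {
			return true
		}
	}
	return false
}

// jobManagedByOLM performs the two-hop lookup: Job -> ConfigMap -> OLM GVK.
// lookup resolves an owning ConfigMap by name, as the informer cache would.
func jobManagedByOLM(job Object, lookup func(name string) (Object, bool)) bool {
	for _, ref := range job.Owners {
		if ref.Kind != "ConfigMap" {
			continue
		}
		if cm, ok := lookup(ref.Name); ok && hasOLMOwner(cm) {
			return true
		}
	}
	return false
}

func main() {
	cms := map[string]Object{
		"unpack-cm": {Kind: "ConfigMap", Name: "unpack-cm",
			Owners: []OwnerRef{{Kind: "CatalogSource", Name: "operatorhubio"}}},
	}
	lookup := func(name string) (Object, bool) { cm, ok := cms[name]; return cm, ok }

	job := Object{Kind: "Job", Name: "unpack-job",
		Owners: []OwnerRef{{Kind: "ConfigMap", Name: "unpack-cm"}}}
	fmt.Println(jobManagedByOLM(job, lookup)) // prints "true"
}
```

Note that if the ConfigMap lookup misses (as it does when the informer cache is filtered and the ConfigMap is unlabelled), the check falls through to `false` even though the Job really is OLM-managed.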

At runtime, the two-hop lookup described above is served by a ConfigMap
informer, keeping the labelling phase of startup light on client calls.
Before the recent labelling work landed, though, the ConfigMap informer
was *already* filtered by label, so those lookups dead-ended for the few
old ConfigMaps that had never been labelled. Since startup uses live
clients to determine whether there are unlabelled objects to handle, we
ended up in a state where the live lookup could detect the errant Jobs
but the informer-based labellers could not see that they needed labels.

This commit is technically a performance regression, as it reverts the
unconditional ConfigMap informer filtering: we see all ConfigMaps on the
cluster during startup, but continue to filter as expected once
everything is labelled.
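The cache/live mismatch can be illustrated with a toy model (the label key and helper names below are made up for illustration, not OLM's real constants):

```go
package main

import "fmt"

const managedLabel = "olm.managed" // illustrative label key

// ConfigMap is a simplified stand-in for a labelled cluster object.
type ConfigMap struct {
	Name   string
	Labels map[string]string
}

// liveList simulates a live client LIST: it sees every object.
func liveList(all []ConfigMap) []ConfigMap { return all }

// cachedList simulates a label-selector-filtered informer cache:
// unlabelled objects are invisible to it.
func cachedList(all []ConfigMap) []ConfigMap {
	var out []ConfigMap
	for _, cm := range all {
		if _, ok := cm.Labels[managedLabel]; ok {
			out = append(out, cm)
		}
	}
	return out
}

func main() {
	all := []ConfigMap{
		{Name: "new-cm", Labels: map[string]string{managedLabel: "true"}},
		{Name: "old-cm"}, // created before the label selector existed
	}
	// The live lookup sees both objects, including the one needing a label...
	fmt.Println(len(liveList(all))) // prints "2"
	// ...but the filtered cache sees only one, so a cache-driven
	// labeller never notices old-cm.
	fmt.Println(len(cachedList(all))) // prints "1"
}
```

The fix in this commit corresponds to making the cache behave like `liveList` until the backlog of unlabelled objects is cleared.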

Ideally, we can come up with cleanup policies for objects like these
Jobs and ConfigMaps in the future; at a minimum, all OLM objects will be
labelled and visible to the OLM operators from here on out.

Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
stevekuznetsov committed Nov 30, 2023
1 parent b9638dc commit 8a68d79
Showing 2 changed files with 14 additions and 4 deletions.
15 changes: 12 additions & 3 deletions pkg/controller/operators/catalog/operator.go
```diff
@@ -581,11 +581,20 @@ func NewOperator(ctx context.Context, kubeconfigPath string, clock utilclock.Clo
 	sharedIndexInformers = append(sharedIndexInformers, buPodInformer.Informer())

 	// Wire ConfigMaps
-	configMapInformer := informers.NewSharedInformerFactoryWithOptions(op.opClient.KubernetesInterface(), resyncPeriod(), informers.WithTweakListOptions(func(options *metav1.ListOptions) {
-		options.LabelSelector = install.OLMManagedLabelKey
-	})).Core().V1().ConfigMaps()
+	configMapInformer := k8sInformerFactory.Core().V1().ConfigMaps()
 	op.lister.CoreV1().RegisterConfigMapLister(metav1.NamespaceAll, configMapInformer.Lister())
 	sharedIndexInformers = append(sharedIndexInformers, configMapInformer.Informer())
+	configmapsgvk := corev1.SchemeGroupVersion.WithResource("configmaps")
+	if err := labelObjects(configmapsgvk, configMapInformer.Informer(), labeller.ObjectLabeler[*corev1.ConfigMap, *corev1applyconfigurations.ConfigMapApplyConfiguration](
+		ctx, op.logger, labeller.Filter(configmapsgvk),
+		configMapInformer.Lister().List,
+		corev1applyconfigurations.ConfigMap,
+		func(namespace string, ctx context.Context, cfg *corev1applyconfigurations.ConfigMapApplyConfiguration, opts metav1.ApplyOptions) (*corev1.ConfigMap, error) {
+			return op.opClient.KubernetesInterface().CoreV1().ConfigMaps(namespace).Apply(ctx, cfg, opts)
+		},
+	)); err != nil {
+		return nil, err
+	}

 	// Wire Jobs
 	jobInformer := k8sInformerFactory.Batch().V1().Jobs()
```
3 changes: 2 additions & 1 deletion pkg/controller/operators/labeller/filters.go
```diff
@@ -59,7 +59,8 @@ func ServiceAccountFilter(isServiceAccountReferenced func(namespace, name string
 }

 var filters = map[schema.GroupVersionResource]func(metav1.Object) bool{
-	corev1.SchemeGroupVersion.WithResource("services"):   HasOLMOwnerRef,
+	corev1.SchemeGroupVersion.WithResource("configmaps"): HasOLMOwnerRef,
+	corev1.SchemeGroupVersion.WithResource("services"):   HasOLMOwnerRef,
 	corev1.SchemeGroupVersion.WithResource("pods"): func(object metav1.Object) bool {
 		_, ok := object.GetLabels()[reconciler.CatalogSourceLabelKey]
 		return ok
```
