Cleanup generate configmap #62
Conversation
Overall this seems reasonable; however, I think there are some tricky edge cases. What if the receive controller is not running as a Deployment but as a single Pod?
By the way, do we want to tackle the deletion of ConfigMaps for StatefulSets that have been removed (e.g. `kubectl delete sts hashring1`) in a separate PR?
Do you think we need to tackle two cases:
The reason we need case 1 is that we do not know the order of deletion for the controller and the StatefulSet. What do you think about just using case 1 for cleanup?
Right now, our README only mentions deploying the controller as a Deployment. Can we tackle that case once we actually see such usage?
Yes, the two cases look at two totally orthogonal concerns. One is concerned with the order of deletion for dependent resources and the other with watching resources correctly. We should definitely address number 2, even if in a different PR, to ensure that users/cluster admins are not confused and that we don't pollute the API. Concerning 1: I'd like input from the rest of the @observatorium/maintainers team. WDYT about this controller assuming that it's running as a Deployment? Does this make sense? I think we should model this the same way other projects do, e.g. Prometheus Operator, since this is a solved problem. The approach that PO takes is to set the owner reference to the resource that triggered the creation of the new resource. In the case of a Prometheus StatefulSet, that would be the Prometheus custom resource, not the Prometheus Operator [0]
Thanks @squat for your solid analysis. I agree that whatever triggers the creation of a new resource should be its owner; most operators follow this approach, with the custom resource owning the operand. But this case is a little more complex, since we generate only one ConfigMap for all tenants. In other words, if we define two tenants, there are two receive StatefulSets; which one should be the owner of the generated ConfigMap? Can we consider this instead: assume that the thanos-receive-controller only runs inside Observatorium (correct me if it can run without Observatorium), so we can use the parent custom resource (the Observatorium CR) as the owner of the generated ConfigMap. That is what my previous PR did. WDYT?
Decided offline: let's use the base ConfigMap as the resource owner of the generated ConfigMaps :)
@@ -547,6 +547,14 @@ func (c *controller) saveHashring(hashring []receive.HashringConfig) error {
		ObjectMeta: metav1.ObjectMeta{
			Name:      c.options.configMapGeneratedName,
			Namespace: c.options.namespace,
			OwnerReferences: []metav1.OwnerReference{
				{
					APIVersion: "v1",
Should we directly use `orgCM.APIVersion` and `orgCM.Kind`?
The current client-go has a problem where it returns empty values for `orgCM.APIVersion` and `orgCM.Kind`. We need another PR to upgrade client-go. Thanks.
Let's merge and fix accessing `.Kind` and `.APIVersion` directly in a follow-up.
👍 yes, I'll do this rn
fixed: observatorium/operator#10
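For context, the ownership model agreed above means each generated ConfigMap carries an `ownerReferences` entry pointing at the base ConfigMap, so deleting the base ConfigMap lets Kubernetes garbage-collect the generated one. A sketch of what the generated object could look like; all names, the namespace, and the UID below are illustrative placeholders, not the controller's actual defaults:

```yaml
# Hypothetical generated ConfigMap (names/namespace/uid are placeholders).
apiVersion: v1
kind: ConfigMap
metadata:
  name: thanos-receive-generated        # c.options.configMapGeneratedName
  namespace: thanos
  ownerReferences:
    - apiVersion: v1                    # hardcoded until client-go is upgraded
      kind: ConfigMap                   # likewise hardcoded, see discussion above
      name: thanos-receive              # the base (hand-written) ConfigMap
      uid: 00000000-0000-0000-0000-000000000000  # UID of the base ConfigMap
data:
  hashrings.json: |
    [...]
```

Note that an `ownerReferences` entry requires the owner's `uid` as well as its name; the controller reads it from the base ConfigMap object it already watches.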