
Cleanup generate configmap #62

Merged
merged 3 commits into observatorium:master from cleanup on Feb 8, 2021
Conversation

clyang82
Contributor

@clyang82 clyang82 commented Feb 3, 2021

@squat
Member

squat commented Feb 3, 2021

Overall this seems reasonable; however, I think there are some tricky edge cases. What if the receive controller is not running as a deployment but as a single pod?

@squat
Member

squat commented Feb 3, 2021

Btw, do we want to tackle the deletion of config maps for statefulsets that have been removed (e.g. kubectl delete sts hashring1) in a separate PR?

@clyang82
Contributor Author

clyang82 commented Feb 3, 2021

Btw, do we want to tackle the deletion of config maps for statefulsets that have been removed (e.g. kubectl delete sts hashring1) in a separate PR?

Do you think we need to tackle two cases?

  1. Delete the generated configmap if the controller has been removed.
  2. Delete the generated configmap if the statefulset has been removed.

The reason we need case 1 is that we do not know the order of deletion for the controller and the statefulset. What do you think about using only case 1 for cleanup?

@clyang82
Contributor Author

clyang82 commented Feb 3, 2021

Overall this seems reasonable; however, I think there are some tricky edge cases. What if the receive controller is not running as a deployment but as a single pod?

Right now, our README only mentions deploying the controller as a Deployment. Can we tackle that case once we actually have such a usage?

@squat
Member

squat commented Feb 4, 2021

Yes, these are two different cases, looking at two totally orthogonal concepts. One is concerned with the order of deletion for dependent resources, the other with watching resources correctly.

We should definitely address number 2, even if in a different PR, to ensure that users/cluster admins are not confused and so that we don't pollute the API.

Concerning 1: I'd like input from the rest of the @observatorium/maintainers team. WDYT about this controller's assumption that it's running as a deployment? Does this make sense? I think we should model this the same way other projects do, e.g. Prometheus Operator, since this is a solved problem. The approach that PO takes is to set the owner reference to the resource that triggered the creation of the new resource. In the case of a Prometheus statefulset, that would be the Prometheus custom resource, not the Prometheus Operator [0].
I think this makes sense, though it is opposed to what I mentioned before, that this controller itself should be the owner.
I'm sorry for the back and forth @clyang82. I think we just need to decide: is the owner the statefulset or is it the base configmap?

[0]: https://github.com/prometheus-operator/prometheus-operator/blob/5e2eae9595e31367d3cebd15f21100d7d5c81e52/pkg/prometheus/statefulset.go#L165-L182
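For context, here is a minimal sketch of the owner-reference pattern being described, i.e. the generated object carries a controller owner reference to the resource that triggered its creation, so Kubernetes garbage collection removes it when that owner is deleted. This is not code from this repository; the function name and the choice of a StatefulSet owner are illustrative only.

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// markOwnedBy attaches a controller owner reference to a generated object so
// that deleting the owning StatefulSet also garbage-collects the object.
func markOwnedBy(generated metav1.Object, owner *appsv1.StatefulSet) {
	ref := metav1.NewControllerRef(owner, appsv1.SchemeGroupVersion.WithKind("StatefulSet"))
	generated.SetOwnerReferences(append(generated.GetOwnerReferences(), *ref))
}
```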

@clyang82
Contributor Author

clyang82 commented Feb 5, 2021

Thanks @squat for the thorough consideration. I agree with your idea that whatever triggers the creation of a new resource should be the owner of that resource. Most operators follow this approach: the custom resource is the owner of the operand.

But this case is a little more complex, since we generate only one configmap for all tenants. In other words, if we define 2 tenants, there are 2 receive statefulsets. Which one should be the owner of the generated configmap?

Can we consider this approach instead: assume that the thanos-receive-controller only runs inside Observatorium (correct me if it can run without Observatorium), so we can use the parent custom resource (the observatorium CR) as the owner of the generated configmap. That is what my previous PR did. WDYT?

@squat
Member

squat commented Feb 8, 2021

Decided offline: let's use the base configmap as the resource owner of the generated configmaps :)

@@ -547,6 +547,14 @@ func (c *controller) saveHashring(hashring []receive.HashringConfig) error {
		ObjectMeta: metav1.ObjectMeta{
			Name:      c.options.configMapGeneratedName,
			Namespace: c.options.namespace,
			OwnerReferences: []metav1.OwnerReference{
				{
					APIVersion: "v1",
Member

Should we directly use orgCM.APIVersion and orgCM.Kind?

Contributor Author

The current client-go version has a problem: orgCM.APIVersion and orgCM.Kind come back empty on the returned object. We need another PR to upgrade client-go. Thanks.
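As a possible follow-up sketch (an assumption, not this repository's code): typed client-go clients commonly return objects with an empty TypeMeta, which is why orgCM.APIVersion and orgCM.Kind come back blank. Instead of hard-coding string literals, the group/version/kind could be derived from the core/v1 scheme; the function name below is hypothetical.

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownerRefFromBaseConfigMap derives the APIVersion/Kind for the owner reference
// from the core/v1 scheme instead of reading orgCM.TypeMeta, which typed
// client-go clients leave empty on returned objects.
func ownerRefFromBaseConfigMap(orgCM *corev1.ConfigMap) metav1.OwnerReference {
	gvk := corev1.SchemeGroupVersion.WithKind("ConfigMap")
	return metav1.OwnerReference{
		APIVersion: gvk.GroupVersion().String(), // resolves to "v1"
		Kind:       gvk.Kind,                    // "ConfigMap"
		Name:       orgCM.Name,
		UID:        orgCM.UID,
	}
}
```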

Member

@squat squat left a comment

Let's merge and fix accessing .Kind and .APIVersion directly in a follow-up

@squat squat merged commit 36d6090 into observatorium:master Feb 8, 2021
@clyang82 clyang82 deleted the cleanup branch February 8, 2021 14:53
@kakkoyun
Member

kakkoyun commented Feb 9, 2021

Let's merge and fix accessing .Kind and .APIVersion directly in a follow-up

Do we need to create an issue for it? @squat @clyang82

@squat
Member

squat commented Feb 9, 2021

👍 yes I'll do this rn

Development

Successfully merging this pull request may close these issues.

BUG: artifacts left after deleting the observatorium CR
3 participants