
Reject doesn't work for scaling in StatefulSet #120

Open · ttsrg opened this issue Nov 28, 2023 · 11 comments

Comments

@ttsrg commented Nov 28, 2023

Hello. Thanks a million for your operator.
But I can't get Reject to work for a StatefulSet.
I use the following ModRule:

apiVersion: api.kubemod.io/v1beta1
kind: ModRule
metadata:
  name: reject-sts-scale-control-clients
  namespace: redis 
spec:
  type: Reject
  rejectMessage: 'More than 3 pods are not allowed, but ordered are - {{ .Target.spec.replicas }}'

  admissionOperations:
    - CREATE
    - UPDATE

  match:

    - select: '$.kind'
      matchValues: 
        - 'StatefulSet'
    
    - select: '$.spec.replicas > 3'

And it doesn't work (unlike for a Deployment, where rejection via its ReplicaSet does work).
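
For context, a minimal reproduction sketch (the StatefulSet name redis-client is an assumption, not from this report):

# Hypothetical - scaling the StatefulSet past the limit goes through, no rejection:
kubectl -n redis scale statefulset redis-client --replicas=5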

@vassilvk (Member)

Hi @ttsrg,
Do you see anything odd in KubeMod's operator logs?
Did you make sure to deploy the ModRule to the correct namespace?

Also, are you deploying the ModRule and the StatefulSet at the same time to the same namespace? If so, you might want to make sure the ModRule is deployed ahead of time. You can also deploy the ModRule as a cluster-wide rule (by deploying it to namespace kubemod-system) that targets your StatefulSet namespace by setting targetNamespaceRegex. Either way, the ModRule should be there before one attempts to create/update the StatefulSet.
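
A minimal sketch of that cluster-wide variant, assuming KubeMod is installed in its default kubemod-system namespace (field placement per KubeMod's ModRule spec):

apiVersion: api.kubemod.io/v1beta1
kind: ModRule
metadata:
  name: reject-sts-scale-control-clients
  namespace: kubemod-system
spec:
  type: Reject
  # Apply this cluster-wide rule only to the target namespace.
  targetNamespaceRegex: '^redis$'
  rejectMessage: 'More than 3 pods are not allowed, but ordered are - {{ .Target.spec.replicas }}'
  admissionOperations:
    - CREATE
    - UPDATE
  match:
    - select: '$.kind'
      matchValues:
        - 'StatefulSet'
    - select: '$.spec.replicas > 3'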

@ttsrg (Author) commented Dec 1, 2023

Hi @vassilvk

Do you see anything odd in KubeMod's operator logs?

No, I don't.

Did you make sure to deploy the ModRule to the correct namespace?

Yes, I did. The "namespace: redis" line confirms it.

Also, are you deploying the ModRule and the StatefulSet at the same time to the same namespace?

No. I've only tested the UPDATE operation, by changing the scale.

If so, you might want to make sure the ModRule is deployed ahead of time. You can also deploy the ModRule as a cluster-wide rule (by deploying it to namespace kubemod-system) that targets your StatefulSet namespace by setting targetNamespaceRegex. Either way, the ModRule should be there before one attempts to create/update the StatefulSet.

Sure, I'll do that.
Yes, the reject works when the StatefulSet is created:

  • admission webhook "dragnet.kubemod.io" denied the request: operation rejected by the following ModRule(s): redis/reject-sts: "More than 1 pods are not allowed, but ordered are - 2"

But it does not work when the StatefulSet's scale changes; I would guess that should trigger the UPDATE admission.

I've also hit a problem with the "Reject" rule: the operator log writes redundant data. Should I open another issue?

16 identical messages for one event:

{"level":"info","ts":"2023-12-01 12:38:56.899Z","logger":"dragnet-webhook","msg":"Rejected","request uid":"9b0c735f-9ca6-4a36-a4a1-b9c196ac9bfc","namespace":"elasticsearch","resource":"replicasets/elasticsearch-client-6fbcbbfc66","operation":"UPDATE","rejections":"elasticsearch/reject-deploy-rs-scale-clients: \"More than 6 pods are not allowed, but ordered are - 7\""}
{"level":"info","ts":"2023-12-01 12:38:57.213Z","logger":"dragnet-webhook","msg":"Rejected","request uid":"1feae43f-939e-44b1-9df8-ba8bf09c68ad","namespace":"elasticsearch","resource":"replicasets/elasticsearch-client-6fbcbbfc66","operation":"UPDATE","rejections":"elasticsearch/reject-deploy-rs-scale-clients: \"More than 6 pods are not allowed, but ordered are - 7\""}

@vassilvk (Member) commented Dec 2, 2023

But it does not work when the StatefulSet's scale changes; I would guess that should trigger the UPDATE admission.

Just so I understand: the rule does reject a StatefulSet CREATE operation when the StatefulSet includes more than 3 replicas, but does not reject an UPDATE to a StatefulSet whose replicas exceed 3? You said "I've only tested the UPDATE operation, by changing the scale" - can you please elaborate? Did you use kubectl scale, kubectl apply, kubectl edit, or kubectl patch? Or is the change made by an in-cluster operator performing an update against the StatefulSet?

Also I've faced problem with "Reject" rule: operator log writes redundant data. Should I open another issue?

If this is about a rejection ModRule you placed on a ReplicaSet controlled by a Deployment, the many log messages you see show the Deployment controller trying to do its job by changing the replicas of the ReplicaSet. When KubeMod rejects the request, the Deployment controller attempts it again in a loop, causing KubeMod to reject the request again. It may make more sense to reject the number of replicas at the Deployment itself; otherwise the Deployment controller will correctly keep trying to reconcile its desired state with the cluster state by attempting to update its ReplicaSet, only to get rejected again and again.
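
For example, the same guard expressed at the Deployment level could look like this (a sketch based on the rule in your log above; the threshold of 6 is taken from it, the name is assumed):

apiVersion: api.kubemod.io/v1beta1
kind: ModRule
metadata:
  name: reject-deploy-scale-clients
  namespace: elasticsearch
spec:
  type: Reject
  rejectMessage: 'More than 6 pods are not allowed, but ordered are - {{ .Target.spec.replicas }}'
  admissionOperations:
    - CREATE
    - UPDATE
  match:
    # Match the Deployment itself, not its ReplicaSet.
    - select: '$.kind'
      matchValues:
        - 'Deployment'
    - select: '$.spec.replicas > 6'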

@ttsrg (Author) commented Dec 4, 2023

can you please elaborate? Did you use kubectl scale, kubectl apply, kubectl edit, or kubectl patch?

Yes, I tried kubectl scale, apply, edit, and patch on the StatefulSet. The reject fails only for kubectl scale.
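
Roughly (hypothetical StatefulSet name):

kubectl -n redis scale statefulset redis-client --replicas=5                  # NOT rejected
kubectl -n redis edit statefulset redis-client                                # rejected
kubectl -n redis patch statefulset redis-client -p '{"spec":{"replicas":5}}'  # rejected
kubectl -n redis apply -f statefulset.yaml                                    # rejected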

@vassilvk (Member) commented Dec 5, 2023

Ah, I see. This might be because scaling of Kubernetes scalable resources is performed through update operations against the /scale sub-resource of the target resource.
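
You can inspect that sub-resource directly; it is served as a separate object under the StatefulSet's URL path (resource names assumed):

kubectl get --raw /apis/apps/v1/namespaces/redis/statefulsets/redis-client/scale
# Returns an object with kind "Scale" (apiVersion autoscaling/v1) and its own spec.replicas.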

You might be able to reject updates to the scale of StatefulSet by targeting the Scale sub-resource.

Since the Scale manifest's Spec has Replicas, the logic of your original ModRule should work, but we need to change the kind to target the Scale subresource. The only tricky part is to figure out how Kubernetes encodes the Scale kind in the body of the webhook request.

First, you need to add statefulsets/scale to the resources targeted by KubeMod.
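
How exactly depends on how you installed KubeMod, but the idea is that the admission webhook rules must list the sub-resource explicitly. A generic Kubernetes webhook rule for that would look like this (a sketch, not KubeMod's exact manifest):

rules:
  - apiGroups: ["apps"]
    apiVersions: ["v1"]
    operations: ["UPDATE"]
    # The /scale sub-resource must be listed separately from the parent resource.
    resources: ["statefulsets", "statefulsets/scale"]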

Then I would try this (same as your original ModRule, but targeting kind StatefulSet/Scale):

apiVersion: api.kubemod.io/v1beta1
kind: ModRule
metadata:
  name: reject-sts-scale-control-clients
  namespace: redis 
spec:
  type: Reject
  rejectMessage: 'More than 3 pods are not allowed, but ordered are - {{ .Target.spec.replicas }}'

  admissionOperations:
    - UPDATE

  match:

    - select: '$.kind'
      matchValues: 
        - 'StatefulSet/Scale'
    
    - select: '$.spec.replicas > 3'

If kind StatefulSet/Scale doesn't work, I would try to fish for the name of the kind using matchRegex for select: '$.kind' with a case-insensitive regex matching scale - for example, something like this:

...
match:
  - select: '$.kind'
    matchRegex: '(?i)scale'
...

Then I would modify the reject message to include the {{ .Target.kind }} so I can see the exact kind. Once I've found the kind, I would change the ModRule to use the correct Kind.
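
For example (sketch):

rejectMessage: 'kind is {{ .Target.kind }} - more than 3 pods are not allowed, but ordered are - {{ .Target.spec.replicas }}'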

Note that scaling works like this for all scalable objects, including Deployments, so you might want to rethink your Deployment scale-out rejection: instead of blocking it at the ReplicaSet, block it at kind Deployment and the Scale sub-resource of Deployment, using the same ModRule, but for kind Deployment and Deployment/Scale (or whatever the name of the Scale kind turns out to be).

If you decide to control Deployment scaling the same way, don't forget to add deployments/scale to KubeMod's resource target list.

@ttsrg (Author) commented Dec 7, 2023

:( None of the variants worked.

@ttsrg (Author) commented Dec 7, 2023

Also, I've run into problems with non-working probes and /metrics - might it be necessary to rebuild the container with extra options?

@vassilvk (Member) commented Dec 7, 2023

Also, I've run into problems with non-working probes and /metrics - might it be necessary to rebuild the container with extra options?

Not sure what you mean by this. Are you having trouble installing/running the kubemod operator?

@vassilvk (Member) commented Dec 7, 2023

Regarding:

:( None of the variants worked.

Hopefully I'll have some time soon to take a closer look and try to reproduce.

@ttsrg (Author) commented Dec 8, 2023

Regarding:

:( None of the variants worked.

Hopefully I'll have some time soon to take a closer look and try to reproduce.

That's great.

@ttsrg (Author) commented Jan 3, 2024

Happy New Year!
