Bug Report
What did you do?
I created a Helm operator to manage deployments of custom objects. To reduce the number of operator Docker images required, I bundled multiple CRDs into a single Helm operator and added an entry for each to watches.yaml, as sketched below.
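For reference, each watches.yaml entry maps a CRD to the chart the operator reconciles for it. A minimal sketch of the shape of our file, with placeholder group/kind names and chart paths:

```yaml
# One entry per CRD; the group/version/kind values and chart paths are placeholders
- group: app.example.com
  version: v1alpha1
  kind: ServiceA
  chart: /opt/helm/helm-charts/service-a
- group: app.example.com
  version: v1alpha1
  kind: ServiceB
  chart: /opt/helm/helm-charts/service-b
```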
What did you expect to see?
A controller pod that sits idle and uses minimal resources when no changes are being applied to the system.
What did you see instead? Under which circumstances?
It has been 48+ hours since the last object modification on the system (over the weekend), but when I run kubectl top (pods|nodes) I see the Helm operator pod using around 10% CPU. While not extremely high, this is during a period of complete inactivity, which makes me suspect the operator is excessively polling the Kubernetes API for object changes. I would expect resource usage to be near zero during inactivity. RAM usage is quite low.
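Concretely, the numbers above came from commands along these lines (the namespace is a placeholder):

```sh
# Steady-state resource usage of the operator pod (placeholder namespace)
kubectl top pods -n my-operator-ns
# Node-level view for comparison
kubectl top nodes
```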
Environment
- operator-sdk version:
Git SHA: b1d5e62bcf750515d5b34fe9705844217373a5cf
- Kubernetes version information:
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.2", GitCommit:"cff46ab41ff0bb44d8584413b598ad8360ec1def", GitTreeState:"clean", BuildDate:"2019-01-13T23:15:13Z", GoVersion:"go1.11.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.5-eks-6bad6d", GitCommit:"6bad6d9c768dc0864dab48a11653aa53b5a47043", GitTreeState:"clean", BuildDate:"2018-12-06T23:13:14Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
- Kubernetes cluster kind:
EKS
- Are you writing your operator in ansible, helm, or go?
Helm
Additional context
We have four different CRDs managed by this operator: three have only one object present, and the fourth has only two. This makes me worry about how resource usage will scale as we add more CRDs in the future.