VPA does not work when using DaemonSet mode #2605

Closed
mcanevet opened this issue Feb 6, 2024 · 8 comments · Fixed by #2665
Labels
bug (Something isn't working), good first issue (Good for newcomers)

Comments

@mcanevet

mcanevet commented Feb 6, 2024

Component(s)

No response

What happened?

I created an OpenTelemetryCollector in DaemonSet mode to collect my logs:

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: logs
spec:
  config: ...
  env:
    - name: KUBE_NODE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
  mode: daemonset
  priorityClassName: system-cluster-critical
  resources:
    limits:
      memory: 128Mi
    requests:
      cpu: 10m
      memory: 128Mi
  tolerations:
    - operator: Exists
  volumeMounts:
    - mountPath: /var/log
      name: varlog
      readOnly: true
  volumes:
    - hostPath:
        path: /var/log
      name: varlog

And a VerticalPodAutoscaler in order to automatically adjust the resources based on the actual usage:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: logs-collector
spec:
  resourcePolicy:
    containerPolicies:
      - containerName: '*'
        controlledResources:
          - cpu
          - memory
  targetRef:
    apiVersion: opentelemetry.io/v1alpha1
    kind: OpenTelemetryCollector
    name: logs
  updatePolicy:
    updateMode: Auto

But I get this error message in the status of my VerticalPodAutoscaler:

Cannot read targetRef. Reason: Unhandled targetRef
opentelemetry.io/v1alpha1 / OpenTelemetryCollector / logs, last error
Resource monitoring/logs has an empty selector for scale sub-resource

I have another OpenTelemetryCollector in Deployment mode and another in StatefulSet mode, and both work fine with VerticalPodAutoscaler. The issue appears to be limited to DaemonSet mode.
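
For context, VPA resolves the pods behind a targetRef through the target's scale subresource and requires the selector reported there to be non-empty. A minimal sketch of what a populated scale subresource looks like for one of the working collectors (the collector name traces and the selector string are illustrative assumptions, not taken from my cluster):

# Illustrative only, e.g. via:
#   kubectl get --raw /apis/opentelemetry.io/v1alpha1/namespaces/monitoring/opentelemetrycollectors/traces/scale
apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: traces
  namespace: monitoring
spec:
  replicas: 1
status:
  replicas: 1
  # VPA uses this selector to find the pods it should autoscale; for the
  # DaemonSet-mode collector it comes back empty, hence the error above.
  selector: app.kubernetes.io/component=opentelemetry-collector,app.kubernetes.io/instance=monitoring.traces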

Kubernetes Version

1.29.0

Operator version

0.92.0

Collector version

0.92.0

Environment information

No response

Log output

No response

Additional context

No response

mcanevet added the bug (Something isn't working) and needs triage labels on Feb 6, 2024
@yuriolisa
Contributor

@mcanevet, thank you for raising this issue. Did you check whether the VPA was deployed in the same namespace as the OpenTelemetryCollector? I'm asking because of the check that the VPA does.

@mcanevet
Author

mcanevet commented Feb 6, 2024

@yuriolisa yes, I actually have three OpenTelemetryCollectors (one for logs with DaemonSet, one for metrics with StatefulSet, and one for traces with Deployment), each with its respective VPA in the same namespace, and the only one that does not work is the one using DaemonSet.

@jaronoff97
Contributor

This may be because we aren't setting the selector for DaemonSet mode; we can probably just go ahead and do that.

@rivToadd
Contributor

rivToadd commented Feb 7, 2024

I got it

@mcanevet
Author

mcanevet commented Feb 8, 2024

Probably related. For Deployment and StatefulSet I have the field status.scale.selector properly set, but this field does not exist for DaemonSet.
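
For comparison, a hedged sketch of what the operator writes on the collectors where VPA works (field values are illustrative, assuming the Deployment-mode collector is named traces):

# Reported on the Deployment/StatefulSet-mode collectors:
status:
  scale:
    replicas: 1
    selector: app.kubernetes.io/component=opentelemetry-collector,app.kubernetes.io/instance=monitoring.traces
# The DaemonSet-mode collector has no such selector in its status, so the
# scale subresource that VPA queries exposes an empty selector.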

@yuriolisa
Contributor

Just a heads-up: #1779

@rivToadd
Contributor

rivToadd commented Feb 24, 2024 via email

@rivToadd
Contributor

I've opened this MR for the issue above: #2659
