
Conversation

rlieberman-splunk
Collaborator

Description

This PR allows users to set labels on the Splunk Operator controller manager service account and service created by Helm deployments. Previously, these two resources only carried the default labels generated by the Helm chart; they now also include any labels the user adds through the splunkOperator.labels field, matching the behavior already in place for the deployment.

Key Changes

  • Update the Helm chart templates to propagate splunkOperator.labels to the Splunk Operator service and service account (a sketch of the approach follows below)
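
A minimal sketch of the idea, assuming a chart-defaults helper named "splunk-operator.labels" (the actual helper and file names in the chart may differ); the same labels block would apply to both the service and the service account templates:

# templates/service-account.yaml (illustrative sketch only, not the chart's actual file)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: splunk-operator-controller-manager
  namespace: {{ .Release.Namespace }}
  labels:
    # chart default labels (helper name is an assumption for this sketch)
    {{- include "splunk-operator.labels" . | nindent 4 }}
    # merge in any user-supplied labels from splunkOperator.labels
    {{- with .Values.splunkOperator.labels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}

Using with/toYaml keeps the rendered metadata unchanged when splunkOperator.labels is not set.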

Testing and Verification

Manual testing was performed before and after the change using the following values.yaml file:

splunkOperator:
  labels:
    custom-label: raizel
  persistentVolumeClaim:
    storageClassName: "gp2"
  splunkGeneralTerms: "--accept-sgt-current-at-splunk-com"
  • 3.0.0 release (custom label appears only on the deployment): helm install splunk-operator splunk/splunk-operator -f values.yaml
% kubectl describe service splunk-operator-controller-manager-service 
Name:                     splunk-operator-controller-manager-service
Namespace:                default
Labels:                   app.kubernetes.io/instance=splunk-operator
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=splunk-operator
                          app.kubernetes.io/version=3.0.0
                          control-plane=controller-manager
                          helm.sh/chart=splunk-operator-3.0.0
Annotations:              meta.helm.sh/release-name: splunk-operator
                          meta.helm.sh/release-namespace: default
...
% kubectl describe serviceaccount splunk-operator-controller-manager 
Name:                splunk-operator-controller-manager
Namespace:           default
Labels:              app.kubernetes.io/instance=splunk-operator
                     app.kubernetes.io/managed-by=Helm
                     app.kubernetes.io/name=splunk-operator
                     app.kubernetes.io/version=3.0.0
                     control-plane=controller-manager
                     helm.sh/chart=splunk-operator-3.0.0
Annotations:         meta.helm.sh/release-name: splunk-operator
                     meta.helm.sh/release-namespace: default
...
% kubectl describe deployment splunk-operator-controller-manager 
Name:               splunk-operator-controller-manager
Namespace:          default
CreationTimestamp:  Fri, 03 Oct 2025 14:17:08 -0500
Labels:             app.kubernetes.io/instance=splunk-operator
                    app.kubernetes.io/managed-by=Helm
                    app.kubernetes.io/name=splunk-operator
                    app.kubernetes.io/version=3.0.0
                    control-plane=controller-manager
                    custom-label=raizel
                    helm.sh/chart=splunk-operator-3.0.0
Annotations:        deployment.kubernetes.io/revision: 1
                    meta.helm.sh/release-name: splunk-operator
                    meta.helm.sh/release-namespace: default
...
  • Branch release (custom label added to all resources): helm install splunk-operator ./helm-chart/splunk-operator -f values.yaml
% kubectl describe service splunk-operator-controller-manager-service 
Name:                     splunk-operator-controller-manager-service
Namespace:                default
Labels:                   app.kubernetes.io/instance=splunk-operator
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=splunk-operator
                          app.kubernetes.io/version=3.0.0
                          control-plane=controller-manager
                          custom-label=raizel
                          helm.sh/chart=splunk-operator-3.0.0
Annotations:              meta.helm.sh/release-name: splunk-operator
                          meta.helm.sh/release-namespace: default
...
% kubectl describe serviceaccount splunk-operator-controller-manager 
Name:                splunk-operator-controller-manager
Namespace:           default
Labels:              app.kubernetes.io/instance=splunk-operator
                     app.kubernetes.io/managed-by=Helm
                     app.kubernetes.io/name=splunk-operator
                     app.kubernetes.io/version=3.0.0
                     control-plane=controller-manager
                     custom-label=raizel
                     helm.sh/chart=splunk-operator-3.0.0
Annotations:         meta.helm.sh/release-name: splunk-operator
                     meta.helm.sh/release-namespace: default
...

Related Issues

PR Checklist

  • Code changes adhere to the project's coding standards.
  • Relevant unit and integration tests are included.
  • Documentation has been updated accordingly.
  • All tests pass locally.
  • The PR description follows the project's guidelines.

@coveralls
Collaborator

coveralls commented Oct 3, 2025

Pull Request Test Coverage Report for Build 18231834956

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • 1 unchanged line in 1 file lost coverage.
  • Overall coverage decreased (-0.008%) to 86.565%

Files with Coverage Reduction           New Missed Lines    %
pkg/splunk/enterprise/afwscheduler.go   1                   92.9%

Totals Coverage Status
Change from base Build 18040656546:  -0.008%
Covered Lines:   10709
Relevant Lines:  12371

💛 - Coveralls
