
Conversation

@MaoMaoCake MaoMaoCake commented Apr 23, 2025

What this PR does / why we need it:
Makes the labels for Loki pods more specific so they do not conflict with Mimir when both are deployed in the same cluster.

Which issue(s) this PR fixes:
Fixes #17404

Special notes for your reviewer:

Checklist

  • Reviewed the CONTRIBUTING.md guide (required)
  • Documentation added
  • Tests updated
  • Title matches the required conventional commits format, see here
    • Note that Promtail is considered to be feature complete, and future development for logs collection will be in Grafana Alloy. As such, feat PRs are unlikely to be accepted unless a case can be made for the feature actually being a bug fix to existing behavior.
  • Changes that require user attention or interaction to upgrade are documented in docs/sources/setup/upgrade/_index.md
  • If the change is deprecating or removing a configuration option, update the deprecated-config.yaml and deleted-config.yaml files respectively in the tools/deprecated-config-checker directory. Example PR

Signed-off-by: Jirapong Pansak <55589246+MaoMaoCake@users.noreply.github.com>
updates values to use the new labels

Signed-off-by: Jirapong Pansak <55589246+MaoMaoCake@users.noreply.github.com>
@MaoMaoCake MaoMaoCake requested a review from a team as a code owner April 23, 2025 08:15
@CLAassistant

CLAassistant commented Apr 23, 2025

CLA assistant check
All committers have signed the CLA.

Contributor

@jkroepke jkroepke left a comment

Please check my comments.

Comment on lines +1078 to +1079
app.kubernetes.io/instance: loki
app.kubernetes.io/name: loki
Contributor

Suggested change:
- app.kubernetes.io/instance: loki
- app.kubernetes.io/name: loki
+ app.kubernetes.io/name: '{{ include "loki.name" . }}'
+ app.kubernetes.io/instance: '{{ .Release.Name }}'

this would match

{{/*
Selector labels
*/}}
{{- define "loki.selectorLabels" -}}
app.kubernetes.io/name: {{ include "loki.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

But the affinity values then need to be passed through the Helm tpl function everywhere, which is fine; it solves this case in an idiomatic way.
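A minimal sketch of what that looks like on both sides, assuming the chart stores affinity as a templated string (the key names here are illustrative, not necessarily the chart's actual schema):

```yaml
# values.yaml -- affinity kept as a string so it can contain template expressions
affinity: |
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: '{{ include "loki.name" . }}'
            app.kubernetes.io/instance: '{{ .Release.Name }}'
        topologyKey: kubernetes.io/hostname
```

```yaml
# templates/deployment.yaml -- render the string through tpl so the
# expressions above are evaluated against the release context
{{- with .Values.affinity }}
      affinity:
        {{- tpl . $ | nindent 8 }}
{{- end }}
```

Keeping the value as a string (rather than a structured map) is what allows users to embed `{{ ... }}` expressions that only resolve at render time.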

Author

I'll make this change. Would you prefer I open a new pull request without the _helpers-xxx changes?

Contributor

@jkroepke jkroepke Jul 9, 2025

You can modify the existing PR here if you wish; both options are fine.

Contributor

@jkroepke - I've made the change I believe you asked for here:
#19061

TomHellier added a commit to TomHellier/loki that referenced this pull request Aug 29, 2025
This commit allows templating in the affinity and topology spread constraints, and updates the values to be more specific about the app.kubernetes.io/instance and app.kubernetes.io/name Kubernetes recommended labels.

When deploying into a cluster with a low number of nodes, or a cluster where the app.kubernetes.io/component label is already used extensively, these pods cannot be placed. If we want them to be scheduled apart, we should be more specific with the label selector.

This follows on from the draft PR grafana#17405
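To illustrate the same idea for topology spread constraints, a hypothetical values snippet (key name assumed, and assuming the chart renders it through tpl) that relies on the more specific selector labels might look like:

```yaml
# values.yaml -- illustrative; spreads pods of this release across zones
# using the release-specific labels instead of the shared component label
topologySpreadConstraints: |
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app.kubernetes.io/name: '{{ include "loki.name" . }}'
        app.kubernetes.io/instance: '{{ .Release.Name }}'
```

With ScheduleAnyway the constraint is a soft preference, so pods still schedule on small clusters instead of staying Pending.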
TomHellier added further commits to TomHellier/loki referencing this pull request on Aug 29 and Sep 2, 2025 with the same message.

Development

Successfully merging this pull request may close these issues.

Helm Chart affinity is not specific enough

4 participants