jsonnet: unlock dependencies for 4.9 development cycle #1214
Conversation
@paulfantom My changes are in.
Signed-off-by: paulfantom <pawel@krupa.net.pl>
b62a181
to
d35eb8d
Compare
CI was failing due to
@@ -26,7 +26,7 @@
       "subdir": "jsonnet/prometheus-operator"
     }
   },
-  "version": "release-0.47"
+  "version": "master"
given that the jsonnet code generates the operator's CRDs, should we pin to a release branch?
CRDs and prometheus-operator alerts come from the same place and are locked with this version flag. Having this set to `master` allows us to test new alerts and ensure CRDs really are backward compatible, as they should be.
I agree that this should be set to a particular released version of prometheus-operator when we lock all dependencies before releasing OpenShift.
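For context, the `"version"` field being discussed lives in `jsonnetfile.json`, which jsonnet-bundler (`jb`) reads to resolve dependencies; `jb` then records the exact commit in `jsonnetfile.lock.json`. A minimal sketch of the dependency entry, assuming the upstream prometheus-operator remote URL (only the `subdir` and `version` values appear in the diff above):

```json
{
  "version": 1,
  "dependencies": [
    {
      "source": {
        "git": {
          "remote": "https://github.com/prometheus-operator/prometheus-operator.git",
          "subdir": "jsonnet/prometheus-operator"
        }
      },
      "version": "master"
    }
  ],
  "legacyImports": true
}
```

Switching `"version"` back to a release branch such as `release-0.47` re-pins the CRDs and alerts, which is what happens when all dependencies are locked again before a release.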
ok thanks for the explanation. I'm fine with this approach :)
/retest
nodeSelector:
  beta.kubernetes.io/os: linux
we most likely don't want that since we are platform agnostic
We are not platform agnostic. Thanos is built only for linux, same for prometheus, alertmanager, ksm, and all other components:
$ grep nodeSelector -A1 -R .
./alertmanager/alertmanager.yaml: nodeSelector:
./alertmanager/alertmanager.yaml- kubernetes.io/os: linux
--
./grafana/deployment.yaml: nodeSelector:
./grafana/deployment.yaml- beta.kubernetes.io/os: linux
--
./kube-state-metrics/deployment.yaml: nodeSelector:
./kube-state-metrics/deployment.yaml- kubernetes.io/os: linux
--
./node-exporter/daemonset.yaml: nodeSelector:
./node-exporter/daemonset.yaml- kubernetes.io/os: linux
--
./openshift-state-metrics/deployment.yaml: nodeSelector:
./openshift-state-metrics/deployment.yaml- kubernetes.io/os: linux
--
./prometheus-adapter/deployment.yaml: nodeSelector:
./prometheus-adapter/deployment.yaml- kubernetes.io/os: linux
--
./prometheus-k8s/prometheus.yaml: nodeSelector:
./prometheus-k8s/prometheus.yaml- kubernetes.io/os: linux
--
./prometheus-operator-user-workload/deployment.yaml: nodeSelector:
./prometheus-operator-user-workload/deployment.yaml- kubernetes.io/os: linux
--
./prometheus-operator/deployment.yaml: nodeSelector:
./prometheus-operator/deployment.yaml- kubernetes.io/os: linux
--
./prometheus-user-workload/prometheus.yaml: nodeSelector:
./prometheus-user-workload/prometheus.yaml- kubernetes.io/os: linux
--
./telemeter-client/deployment.yaml: nodeSelector:
./telemeter-client/deployment.yaml- beta.kubernetes.io/os: linux
--
./thanos-querier/deployment.yaml: nodeSelector:
./thanos-querier/deployment.yaml- beta.kubernetes.io/os: linux
Oh ok, I thought they could be scheduled on any nodes and the respective runtimes would handle the rest.
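One note on the label itself: `beta.kubernetes.io/os` is the older, deprecated form of the GA node label `kubernetes.io/os`, which most of the manifests in the grep output above already use. A minimal sketch of the selector as it would sit in a pod spec (surrounding structure elided):

```yaml
# Minimal sketch: pin a workload to Linux nodes.
# kubernetes.io/os is the GA node label; beta.kubernetes.io/os is its
# deprecated predecessor, still visible in a few manifests above.
spec:
  nodeSelector:
    kubernetes.io/os: linux
```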
@@ -46,13 +46,19 @@ spec:
   - --query.replica-label=prometheus_replica
   - --query.replica-label=thanos_ruler_replica
   - --store=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local
+  - --query.auto-downsampling
any concerns regarding enabling auto-downsampling?
I don't have any, it should reduce the load in some situations.
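For reference, `--query.auto-downsampling` is a Thanos querier flag: when a query does not set an explicit `max_source_resolution`, the querier automatically selects downsampled blocks where available, reducing the amount of raw data read. A sketch of the resulting container args, reusing the flags from the diff in this thread (only the last flag is new):

```yaml
# Sketch of the thanos-querier container arguments after this change;
# the first three flags come from the diff shown earlier in the thread.
args:
  - --query.replica-label=prometheus_replica
  - --query.replica-label=thanos_ruler_replica
  - --store=dnssrv+_grpc._tcp.prometheus-operated.openshift-monitoring.svc.cluster.local
  - --query.auto-downsampling
```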
/lgtm
/retest Please review the full test history for this PR and help us cut down flakes.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dgrisonnet, paulfantom, simonpasquier. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/retest Please review the full test history for this PR and help us cut down flakes.
@paulfantom: The following test failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest Please review the full test history for this PR and help us cut down flakes.
Unlocking all jsonnet deps. All relevant work was done in the `jsonnet/` directory; `assets/` and `manifests/` are generated.

Additional work to allow using latest libs:
- `targetGroups` during mixin inclusion
- removed `securityContext` from thanos querier pods. It wasn't included earlier, and with the one provided by the upstream community the querier cannot start. This is due to OpenShift constraints.

Some jsonnet repositories changed place or default branch: etcd moved to `etcd-io/etcd`, and three dependencies switched their default branch to `main`.
@prashbnair please verify if this is also bringing in relevant changes from kubernetes-mixin
/cc @simonpasquier