Fast forward the 'master' branch to be in sync with 'openshift-master' #10

Merged
merged 169 commits into openshift:master on Nov 9, 2018

Conversation


@pgier pgier commented Nov 8, 2018

We're switching the branch names to be in sync with other openshift projects for build automation.
This also adds an OWNERS file.

vboginskey and others added 30 commits July 5, 2018 20:51
…abelname

bugfix: fix alertmanager port number label name
As I work with kube-state-metrics in a large cluster, I found I needed to make some adjustments.

- Expose the collectors, allowing one to configure exclusions. 

- Expose the addon_resizer parameters, facilitating resource adjustments

- Allow adjusting scrapeTimeout and scrapeInterval
* helm: Use CRDs for rules for operator 0.20.0+

Changed rules configmaps to be PrometheusRule resources instead
Deprecated `additionalRulesConfigMapLabels` in favor of `additionalRulesLabels`

Fixes prometheus-operator#1523, prometheus-operator#1576, prometheus-operator#1595

* helm: Rename configmap files to prometheusrule

* helm: Remove alert-rules labels from rules

Since rules are now sourced from CRDs and rules can also be recording rules, the alert-rules label no longer applies.

* helm: Bump chart versions
* Make the Prometheus Operator Docker image run as `nobody` by default.
* Disallow privilege escalation via K8s
* Enforce read only root filesystem
As requested, this updates the resource specification to live directly in config.kubeStateMetrics

It also clarifies the config variables. These names are what Google uses in some of their tooling.

(And a slight tweak to the way collectors are specified)
We default to a 30s scrapeInterval, so we may as well set scrapeTimeout to the same value.
…update-dead-links

Update dead links to latest version of Kubernetes docs
…able-kube-state-metrics

Configure kube-state-metrics
In certain Prometheus Operator deployment scenarios it is desirable to
manage CRD creation outside of the operator. Likewise, it can be
desirable to scope the permissions of the Prometheus Operator so that it
does not have cluster-level access. This commit enables operation in
these situations by adding a flag to configure whether or not the
Prometheus Operator should try to create CRDs itself.
If the Prometheus Operator has been configured to watch only a specific
namespace, then we should not run the namespace informer.
pkg: add flag to toggle CRD creation in operator
* Use prometheus.fullname and append extra-rules to additional rules
Merge release 0.22 back into master
…-prometheus-pod-annotations

Add ability to specify pod metadata
brancz and others added 27 commits August 7, 2018 09:27
…_jsonnet_add_SM_NS_Selector

add serviceMonitorNamespaceSelector to the prometheus jsonnet library
This commit bumps the version of the Prometheus Operator jsonnet
dependency in kube-prometheus. With this change, kube-prometheus now
supports Prometheus Operator v0.23.0.
contrib/kube-prometheus: bump prometheus-operator
…e-monitor-sel

prometheus: Fix error handling when not generating config
…metheus-operator#1729)

* added v1 to RBAC rbac.authorization.k8s.io capabilities

* added v1 API support to all Helm templates

* bumped helm charts versions

* added some other missing capabilities
The Prometheus Operator manifests all use double dashes for long flags
except for the `logtostderr` flag, which uses one dash. This commit
adjusts the Prometheus Operator libsonnet to fix this. The
kube-prometheus project will need to be bumped later to pick up this
change.
Users have reported high CPU usage of the Prometheus Operator when
adding an annotation to a Prometheus object. The Operator would update
the respective StatefulSet in an infinite loop.

Whether a given StatefulSet needs updating is determined by the hash of
the inputs needed to generate the StatefulSet, which is calculated and
then attached to the StatefulSet as an annotation. On subsequent
reconciliations this hash is compared to the hash of the new inputs.

The function to build the StatefulSet definition is passed the
Prometheus object. This is done by value, not by reference. This does
not enforce a deep copy but merely a shallow copy. In the build function
the new StatefulSet would inherit the annotation map of the Prometheus
object. Next the input hash would be added to this map, resulting in
both the StatefulSet having the hash annotation, as intended, as well as
the Prometheus object (same map, shared as a reference).

On subsequent reconciliations the same Prometheus object is used to
calculate the input hash, this time accidentally containing the hash
annotation from the previous run. Even though the actual inputs never
changed, this results in a new hash, thereby updating the StatefulSet,
...

The solution is to deep copy the Prometheus object before using it in
the StatefulSet build function, thereby never mutating the annotations
of the Prometheus object. The same measure is taken for the Alertmanager
StatefulSet build function.
Fix serviceMonitorSelector all selector
pkg/*/statefulset.go: Do not mutate shared object
Change operator target to build only from available files
@openshift-ci-robot openshift-ci-robot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Nov 8, 2018
@stevekuznetsov stevekuznetsov merged commit 0dddb38 into openshift:master Nov 9, 2018