Sync with upstream 1.9.2 release #22
Conversation
Add support for Darwin OS in e2e test
*: cut v1.8.0-rc.0
pin go version to go mod artifact file
Signed-off-by: YaoZengzeng <yaozengzeng@huawei.com>
remove unneeded blank line
compile kube-state-metrics with go1.13
Deployments, like Nodes, have status conditions observing the current state. While the state of the Available and Progressing conditions can likely be inferred from other metrics, the state of ReplicaFailure cannot. This changelist adds a new metric, `kube_deployment_status_condition`, that observes all the conditions on a deployment, with one sample per possible condition status. This is analogous to the status conditions observed for nodes and horizontal pod autoscalers, and allows kube-state-metrics to observe status conditions added by third parties.

As an example, for a deployment that has stalled, the following new metrics would allow an operator to detect the condition:

kube_deployment_status_condition{deployment="example", namespace="default", condition="ReplicaFailure", status="true"} 1
kube_deployment_status_condition{deployment="example", namespace="default", condition="ReplicaFailure", status="false"} 0
kube_deployment_status_condition{deployment="example", namespace="default", condition="ReplicaFailure", status="unknown"} 0

Bug: #886
Signed-off-by: Terin Stock <terin@cloudflare.com>
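The one-hot encoding described above can be sketched in Go. This is an illustrative sketch only: `conditionMetrics` is a hypothetical helper that renders exposition-format sample lines, not actual kube-state-metrics code.

```go
package main

import "fmt"

// conditionMetrics renders kube_deployment_status_condition samples in the
// Prometheus text exposition format: one sample per possible status, with the
// observed status reported as 1 and the other statuses as 0.
// Illustrative sketch of the encoding, not kube-state-metrics code.
func conditionMetrics(deployment, namespace, condition, observed string) []string {
	statuses := []string{"true", "false", "unknown"}
	out := make([]string, 0, len(statuses))
	for _, s := range statuses {
		v := 0
		if s == observed {
			v = 1
		}
		out = append(out, fmt.Sprintf(
			`kube_deployment_status_condition{deployment=%q, namespace=%q, condition=%q, status=%q} %d`,
			deployment, namespace, condition, s, v))
	}
	return out
}

func main() {
	// A stalled deployment: ReplicaFailure observed as "true".
	for _, line := range conditionMetrics("example", "default", "ReplicaFailure", "true") {
		fmt.Println(line)
	}
}
```

Because every status is always exported, an alert can match on `status="true"` without worrying about missing series when the condition is healthy.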
The "condition" and "status" labels for the HPA status conditions were mapped to the incorrect values: the status ended up in the condition label, and the condition in the status label. This changelist corrects the mapping so that condition and status map to their respective values:

kube_hpa_status_condition{condition="AbleToScale",hpa="hpa1",namespace="ns1",status="false"} 0
kube_hpa_status_condition{condition="AbleToScale",hpa="hpa1",namespace="ns1",status="true"} 1
kube_hpa_status_condition{condition="AbleToScale",hpa="hpa1",namespace="ns1",status="unknown"} 0

Fixes: f9658ca ("Add hpa conditions")
Signed-off-by: Terin Stock <terin@cloudflare.com>
deployment status conditions
correct hpa condition status
bump golangci-lint to latest stable release
fix shell script bug in licensecheck logic
Signed-off-by: YaoZengzeng <yaozengzeng@huawei.com>
Signed-off-by: YaoZengzeng <yaozengzeng@huawei.com>
fix hyperlink issue
framework for e2e test implemented in golang
Signed-off-by: YaoZengzeng <yaozengzeng@huawei.com>
[e2e] lint the metrics of ksm
VERSION: Update version to v1.8.0-rc.1
# Conflicts:
#	CHANGELOG.md
#	README.md
#	VERSION
#	go.mod
Merge release-1.8 into master
Update README
This commit makes explicit that `build.WithGenerateStoreFunc()` needs to be used to properly configure the `Builder` (like any other `With...` method).

* rename `WithCustomGenerateStoreFunc` to `WithGenerateStoreFunc`
* remove the `buildStoreFunc` defaulting in the `NewBuilder()` function
* add a `DefaultGenerateStoreFunc()` method to `BuilderInterface`
* update the `Builder` initialisation in `main.go`

Signed-off-by: cedric lamoriniere <cedric.lamoriniere@datadoghq.com>
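The configuration pattern described in that commit can be sketched with a minimal toy builder. This is an illustrative sketch only: the type names and method signatures below are modeled on the names in the commit message (`WithGenerateStoreFunc`, `DefaultGenerateStoreFunc`), and the real kube-state-metrics `Builder` API differs.

```go
package main

import "fmt"

// store is a stand-in for a metrics store; the real type is more involved.
type store struct{ name string }

// generateStoreFunc constructs a store for a given resource kind.
type generateStoreFunc func(kind string) *store

type builder struct {
	generateStore generateStoreFunc
}

func newBuilder() *builder { return &builder{} }

// WithGenerateStoreFunc configures the store constructor, like any other
// With... method. Since newBuilder no longer defaults it, callers must
// set it explicitly before building.
func (b *builder) WithGenerateStoreFunc(f generateStoreFunc) {
	b.generateStore = f
}

// DefaultGenerateStoreFunc returns the stock constructor, which callers can
// pass to WithGenerateStoreFunc when no custom store is needed.
func (b *builder) DefaultGenerateStoreFunc() generateStoreFunc {
	return func(kind string) *store {
		return &store{name: "default-" + kind}
	}
}

func (b *builder) Build(kind string) *store {
	return b.generateStore(kind)
}

func main() {
	b := newBuilder()
	// Explicit configuration, even for the default behaviour.
	b.WithGenerateStoreFunc(b.DefaultGenerateStoreFunc())
	fmt.Println(b.Build("deployments").name)
}
```

Exposing the default as a method rather than a hidden fallback keeps the configuration explicit: a caller either supplies a custom store or visibly opts into the default.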
Add the possibility to provide “custom” metric-store to the builder
*: Sync master branch with release-1.9 branch
*: Sync master with release-1.9 branch
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: brancz, LiliC

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/test e2e-aws

/retest Please review the full test history for this PR and help us cut down flakes.
9 similar comments
cc @openshift/openshift-team-monitoring