
Start exporting the in-cluster network programming latency metric. #71999

Merged 1 commit into kubernetes:master on Feb 12, 2019

Conversation

@mm4tt (Contributor) commented Dec 12, 2018

What type of PR is this?
/kind feature

What this PR does / why we need it:
This is the final step of implementing the first version of in-cluster network programming latency, which was proposed here: https://github.com/kubernetes/community/blob/master/sig-scalability/slos/network_programming_latency.md
The latency computation is based on the EndpointsLastChangeTriggerTime annotation, whose implementation can be found in #71998.

Does this PR introduce a user-facing change?:
NONE
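
For orientation, here is a minimal end-to-end sketch of the mechanism under assumed, simplified names (not the PR's exact code): the endpoints controller stamps the trigger time as an annotation (#71998), and kube-proxy measures the difference once it has programmed the change.

package sketch

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
)

// The endpoints controller side (#71998): stamp the time of the event that
// triggered this Endpoints change.
func stampTriggerTime(ep *v1.Endpoints, triggerTime time.Time) {
	if ep.Annotations == nil {
		ep.Annotations = map[string]string{}
	}
	ep.Annotations[v1.EndpointsLastChangeTriggerTime] = triggerTime.Format(time.RFC3339Nano)
}

// The kube-proxy side (this PR): after the change has been programmed
// (iptables/IPVS rules written), compute the in-cluster network programming
// latency from the annotation.
func networkProgrammingLatency(ep *v1.Endpoints, syncTime time.Time) (time.Duration, error) {
	triggerTime, err := time.Parse(time.RFC3339Nano, ep.Annotations[v1.EndpointsLastChangeTriggerTime])
	if err != nil {
		return 0, fmt.Errorf("parsing %s annotation: %v", v1.EndpointsLastChangeTriggerTime, err)
	}
	return syncTime.Sub(triggerTime), nil
}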

@k8s-ci-robot (Contributor):

@MateuszMatejczyk: Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Dec 12, 2018
@k8s-ci-robot (Contributor):

Hi @MateuszMatejczyk. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mm4tt (Contributor, Author) commented Dec 12, 2018

/assign @wojtek-t

@k8s-ci-robot k8s-ci-robot added sig/network Categorizes an issue or PR as relevant to SIG Network. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Dec 12, 2018

@mm4tt (Contributor, Author) commented Dec 12, 2018

/uncc @lavalamp
/uncc @freehan

Unassigning @lavalamp and @freehan until @wojtek-t takes a look.

@wojtek-t (Member):

I will do the first pass later this week and then will add more reviewers from networking team.

@wojtek-t (Member) left a comment:

This looks reasonable to me. However, this doesn't make sense to submit until #71998 is at least ready for merging.
So holding for now, but assigning someone else to also take a look.

/assign @freehan

/hold
/ok-to-test

	}
	return len(ect.items) > 0
}

func getLastChangeTriggerTime(endpoints *v1.Endpoints) time.Time {
	val, _ := time.Parse(time.RFC3339Nano, endpoints.Annotations[v1.EndpointsLastChangeTriggerTime])
@wojtek-t (Member):

Don't silently ignore errors - if not more, at least log an error.

@mm4tt (Contributor, Author):

Done. Added log statement and a comment explaining why we can ignore the error.
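
The fixed helper plausibly looks like this; a sketch reconstructed from the exchange above, assuming klog for logging (not necessarily the merged code verbatim):

package sketch

import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/klog"
)

// getLastChangeTriggerTime returns the time stored in the
// EndpointsLastChangeTriggerTime annotation, or the zero time.Time if the
// annotation is absent or malformed.
func getLastChangeTriggerTime(endpoints *v1.Endpoints) time.Time {
	raw, ok := endpoints.Annotations[v1.EndpointsLastChangeTriggerTime]
	if !ok {
		return time.Time{} // annotation not set; nothing to measure
	}
	val, err := time.Parse(time.RFC3339Nano, raw)
	if err != nil {
		// We can ignore the error: a malformed annotation only means this
		// latency sample is dropped, and it must never break endpoints
		// processing. Log it so broken producers stay visible; val stays
		// the zero time on failure.
		klog.Warningf("error parsing %s annotation on %s/%s: %v",
			v1.EndpointsLastChangeTriggerTime, endpoints.Namespace, endpoints.Name, err)
	}
	return val
}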

@k8s-ci-robot k8s-ci-robot added do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Dec 13, 2018
@wojtek-t wojtek-t added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. release-note-none Denotes a PR that doesn't merit a release note. and removed needs-priority Indicates a PR lacks a `priority/foo` label and requires one. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Dec 13, 2018
@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Dec 13, 2018
@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jan 26, 2019
@wojtek-t (Member) commented Feb 6, 2019

/hold cancel

With the endpoint controller changes already merged, we are ready to resurrect this change.
@mm4tt - can you please rebase?

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Feb 6, 2019
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 6, 2019
@mm4tt (Contributor, Author) left a comment:

Thanks, PTAL

@@ -30,6 +30,7 @@ import (
	"k8s.io/client-go/tools/record"
	utilproxy "k8s.io/kubernetes/pkg/proxy/util"
	utilnet "k8s.io/kubernetes/pkg/util/net"
+	"time"
@mm4tt (Contributor, Author):

Done.

pkg/proxy/endpoints.go (resolved)
@@ -26,6 +26,8 @@ import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/sets"
+	"time"
+	"sort"
@mm4tt (Contributor, Author):

Done.

@@ -92,6 +93,9 @@ type EndpointChangeTracker struct {
	// isIPv6Mode indicates if change tracker is under IPv6/IPv4 mode. Nil means not applicable.
	isIPv6Mode *bool
	recorder   record.EventRecorder
+	// Map from the Endpoints namespaced-name to the time of the trigger that caused the endpoints
+	// object to change. Used to calculate the network-programming-latency.
+	lastChangeTriggerTimes map[types.NamespacedName]time.Time
@mm4tt (Contributor, Author):

Good point.
Added this to the documentation of the metric that exports the network programming latency.

	}
	change.current = ect.endpointsToEndpointsMap(current)
	// if change.previous equal to change.current, it means no change
	if reflect.DeepEqual(change.previous, change.current) {
		delete(ect.items, namespacedName)
		delete(ect.lastChangeTriggerTimes, namespacedName)
@mm4tt (Contributor, Author):

IIUC, the situation you described is something like this:

T0: proxier.Sync()
T1: proxier observes Endpoints E1 change, E1.EndpointsLastChangeTriggerTime = t0
T2: proxier observes Endpoints E1 change, E1.EndpointsLastChangeTriggerTime = t1 (t1 >= t0)
T3: proxier.Sync()

In such a case the implementation ignores the second timestamp and uses t0 (which is guaranteed to be <= t1) to measure the latency.

Let me know if this makes sense.
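
A minimal sketch of that first-timestamp-wins bookkeeping against the lastChangeTriggerTimes map from the diff above (the helper name is hypothetical):

package sketch

import (
	"time"

	"k8s.io/apimachinery/pkg/types"
)

// recordTriggerTime keeps only the earliest pending trigger time per
// Endpoints object: in the scenario above t0 is retained and t1 dropped, so
// the next Sync() measures latency from the oldest not-yet-programmed event.
func recordTriggerTime(lastChangeTriggerTimes map[types.NamespacedName]time.Time,
	name types.NamespacedName, triggerTime time.Time) {
	if _, pending := lastChangeTriggerTimes[name]; !pending {
		lastChangeTriggerTimes[name] = triggerTime
	}
}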

		Name: "network_programming_latency_seconds",
		Help: "In Cluster Network Programming Latency in seconds",
		// The last finite bucket bound is 0.001s*2^19 ≈ 8.7min; slower samples land in [8.7min, +inf)
		Buckets: prometheus.ExponentialBuckets(0.001, 2, 20),
@wojtek-t (Member):

I'm not convinced that the buckets are correct.
I'm fine with leaving it as is for now, but please leave a TODO to reevaluate it before the 1.14 release.

@mm4tt (Contributor, Author):

Done.
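
Putting the pieces of this thread together, a hedged sketch of how the histogram above might be registered and fed after a successful sync (the Subsystem value and all helper names are assumptions, not necessarily the merged code):

package sketch

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// networkProgrammingLatency mirrors the histogram from the diff above.
// TODO: reevaluate the buckets before the 1.14 release.
var networkProgrammingLatency = prometheus.NewHistogram(
	prometheus.HistogramOpts{
		Subsystem: "kubeproxy", // assumed metric prefix
		Name:      "network_programming_latency_seconds",
		Help:      "In Cluster Network Programming Latency in seconds",
		// 20 buckets: 1ms, 2ms, ..., 0.001s*2^19 ≈ 524s; anything slower
		// falls into the implicit [524s, +Inf) bucket.
		Buckets: prometheus.ExponentialBuckets(0.001, 2, 20),
	},
)

func init() {
	prometheus.MustRegister(networkProgrammingLatency)
}

// Called once per pending trigger time after the proxier has successfully
// programmed the corresponding change.
func observeNetworkProgramming(triggerTime, syncTime time.Time) {
	networkProgrammingLatency.Observe(syncTime.Sub(triggerTime).Seconds())
}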

@wojtek-t (Member) left a comment:

Two more comments - other than that lgtm.

@wojtek-t (Member):

That LGTM.

@freehan - can you please take another look?

@freehan (Contributor) left a comment:

just a nit, LGTM overall

// Reset the lastChangeTriggerTimes for the Endpoints object. Given that the network programming
// SLI is defined as the duration between a time of an event and a time when the network was
// programmed to incorporate that event, if there are events that happened between two
// consecutive syncs syncs and that canceled each other out, e.g. pod A added -> pod A deleted,
@freehan (Contributor):

There are 2 syncs

@mm4tt (Contributor, Author):

Good catch, done.

	}
	change.current = ect.endpointsToEndpointsMap(current)
	// if change.previous equal to change.current, it means no change
	if reflect.DeepEqual(change.previous, change.current) {
		delete(ect.items, namespacedName)
		delete(ect.lastChangeTriggerTimes, namespacedName)
@freehan (Contributor):

Okay. Noop sounds good.

@wojtek-t (Member):

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 12, 2019
@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mm4tt, wojtek-t

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 12, 2019
@mm4tt (Contributor, Author) commented Feb 12, 2019

/retest

@k8s-ci-robot (Contributor) commented Feb 12, 2019

@mm4tt: The following test failed, say /retest to rerun them all:

Test name:     pull-kubernetes-e2e-kops-aws
Commit:        c116d4f77daa048f89bf8d97a3b6e0e9cea63b58
Rerun command: /test pull-kubernetes-e2e-kops-aws

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@krzysied (Contributor):

/retest

@k8s-ci-robot k8s-ci-robot merged commit 41d2445 into kubernetes:master Feb 12, 2019
@mm4tt mm4tt deleted the kube-proxy branch February 12, 2019 14:01
mm4tt added a commit to mm4tt/coredns that referenced this pull request Mar 13, 2019
The DNS Programming Latency definition can be found [here](https://github.com/kubernetes/community/blob/master/sig-scalability/slos/dns_programming_latency.md).

This PR covers only "headless with selector" services; other service
kinds are blocked on the impossibility of determining the LastUpdateTime of a
service object.

The PR bears some similarities to
kubernetes/kubernetes#71999, which introduced
In-Cluster Network Programming Latency. The main difference is that
there is no actual "programming" happening in CoreDNS (for comparison,
in kube-proxy the network programming consists of writing IPTables/IPVS
rules). CoreDNS serves the content directly from the endpoints/service/pod cache,
creating DNS records on the fly. Thus, we assume that the programming of DNS ends at the
moment when the endpoints/service/pod change reaches CoreDNS via the
Watch mechanism.
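
A rough sketch of where such an observation could hook in on the CoreDNS side, under the assumption stated above; every name and the bucket choice here are illustrative, not CoreDNS's actual metric:

package sketch

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	api "k8s.io/api/core/v1"
)

// dnsProgrammingLatency is a hypothetical histogram, analogous to the
// kube-proxy metric from this PR.
var dnsProgrammingLatency = prometheus.NewHistogram(prometheus.HistogramOpts{
	Subsystem: "coredns", // assumed
	Name:      "dns_programming_latency_seconds",
	Help:      "DNS programming latency in seconds",
	Buckets:   prometheus.ExponentialBuckets(0.001, 2, 20), // assumed buckets
})

// Since CoreDNS answers straight from its watch-fed cache, "DNS programming"
// is considered done the moment the Endpoints informer's update handler fires.
func onEndpointsUpdate(ep *api.Endpoints, now time.Time) {
	raw := ep.Annotations[api.EndpointsLastChangeTriggerTime]
	if raw == "" {
		return // annotation absent, e.g. not a headless-with-selector service
	}
	if t, err := time.Parse(time.RFC3339Nano, raw); err == nil {
		dnsProgrammingLatency.Observe(now.Sub(t).Seconds())
	}
}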
mm4tt added three more commits to mm4tt/coredns that referenced this pull request on Mar 13 and Mar 14, 2019, each with the same commit message as above.