
Kube-proxy: ICMP reject via LBs when no endpoints #74394

Merged
merged 3 commits into kubernetes:master from thockin:proxy-reject-lb-no-endpoints on Mar 12, 2019

Conversation

@thockin (Member) commented Feb 22, 2019

/kind bug

What this PR does / why we need it:

ICMP reject services with no endpoints through LBs. We already reject all other cases.

Fixes #48719

#72879 (by @marwinski) was almost identical, but I didn't see that until I had already written this. Full credit to @marwinski.

Does this PR introduce a user-facing change?:

Services of type=LoadBalancer which have no endpoints will now immediately ICMP reject connections, rather than time out.
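
For illustration only (placeholder address, output paraphrased), this is the difference a client observes when connecting to a LoadBalancer IP whose Service has no endpoints:

# Before: the TCP SYN is silently dropped, so the client hangs until it times out.
> curl --connect-timeout 10 http://203.0.113.10
curl: (28) Connection timed out after 10001 milliseconds

# After: kube-proxy answers with an ICMP reject, so the client fails immediately.
> curl http://203.0.113.10
curl: (7) Failed to connect to 203.0.113.10 port 80: Connection refused
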
@k8s-ci-robot (Contributor) commented Feb 22, 2019

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: thockin

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot requested review from danwinship and freehan Feb 22, 2019

@k8s-ci-robot k8s-ci-robot removed the needs-sig label Feb 22, 2019

@thockin thockin force-pushed the thockin:proxy-reject-lb-no-endpoints branch 2 times, most recently from 02a0bd9 to ef913cf Feb 22, 2019

jig.Scale(ns1, 0)
jig.Scale(ns2, 0)

//FIXME: prove ICMP-reject

@danwinship (Contributor) commented Feb 22, 2019

For the internal case (#72561) we ended up just assuming that we'd get "connection refused" rather than "timed out".

You could just check the counters on the iptables rules; run iptables-save -c before and after, and verify that the packet count on the expected REJECT rule actually went up, and assume that if the right rule got hit, then the right end-user behavior happened too. (Though in fact, in this case the test would still pass even if the network plugin ate the ICMP reply...)

You could also check the global ICMP counters in netstat -s, which would let you confirm that the node had received an ICMP reject for some reason. That would have some unknowable number of false positives, but shouldn't have false negatives.
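
A minimal sketch of that counter-based check (the grep patterns and the LB IP are illustrative; the real rule would be read from iptables-save on the node under test):

# Snapshot REJECT rule counters before sending test traffic.
> iptables-save -c -t filter | grep REJECT > /tmp/reject-before

# Attempt a connection to the no-endpoint LoadBalancer IP; this is expected to fail.
> curl --connect-timeout 5 http://104.40.197.150 || true

# Snapshot again and compare; the packet count on the expected REJECT rule should increase.
> iptables-save -c -t filter | grep REJECT > /tmp/reject-after
> diff /tmp/reject-before /tmp/reject-after

# Cruder cross-check via the node-wide ICMP counters (may include unrelated traffic).
> netstat -s | grep -i 'destination unreachable'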

@marwinski commented Feb 27, 2019

As this PR is virtually identical to #72879, I would appreciate it if you named me as co-author and closed the other one.

> #72879 is similar, but I didn't see it until I had already written this. The testing is the tricky part.

Testing is indeed tricky, but here is how I did it manually (with a proposal for automating it). Our problem was not the missing "connection refused" but the stale conntrack entries caused by not rejecting the packets. The downside is that this only works for providers whose load balancers do not terminate the TCP connections.

Prepare service

> kubectl run nginx --image nginx --replicas=1
deployment.apps/nginx created
> kubectl expose deployment nginx --type=LoadBalancer --port=80
service/nginx exposed
> kubectl describe service  nginx
...
LoadBalancer Ingress:     104.40.197.150
> curl 104.40.197.150
<!DOCTYPE html>
<html>
...

Log on to the host or a pod with host networking (in this case, calico-node).

Make sure there are no SYN_SENT entries:

> kubectl exec -it -n kube-system calico-node-bsn4f -c calico-node sh
/ # conntrack -L | grep SYN_SENT
conntrack v1.4.4 (conntrack-tools): 489 flow entries have been shown.
/ #

-> no stale SYN_SENT entries, all clean

Start an endless curl loop

> while true; do curl 104.40.197.150 ; done
<!DOCTYPE html>
<html>
...

Scale replicas to 0

kubectl scale deployment nginx --replicas=0
deployment.extensions/nginx scaled

Go to the host and look for stale entries:

/ # conntrack -L | grep SYN_SENT
tcp      6 119 SYN_SENT src=109.192.86.132 dst=104.40.197.150 sport=60659 dport=80 [UNREPLIED] src=104.40.197.150 dst=109.192.86.132 sport=80 dport=60659 mark=0 secctx=system_u:object_r:unlabeled_t:s0 use=1
conntrack v1.4.4 (conntrack-tools): 434 flow entries have been shown.
/ #

With the fix we don't see any stale SYN_SENT entries.
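
To automate that check, a test could simply fail when any half-open entry for the LB IP lingers (a sketch, reusing the IP from the example above):

# Count stale half-open conntrack entries destined for the LB IP; expect zero with the fix.
> STALE=$(conntrack -L 2>/dev/null | grep SYN_SENT | grep -c 'dst=104.40.197.150')
> [ "$STALE" -eq 0 ] && echo "PASS: no stale SYN_SENT entries" || echo "FAIL: $STALE stale entries"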

This can be simplified in the following way; however, it only works if the load balancer is non-terminating (see the sketch after this list):

  • Create the service as shown above.
  • Craft a connection request to the external load balancer IP, but with the MAC address of a cluster node.
  • Without the rule, you will see a hang once the deployment is scaled to 0 replicas, and you will also see stale SYN_SENT entries.
  • With the rule, you should see a REJECT and no stale conntrack entries.

This approach is straightforward to automate, but it will not work when a NodePort is created for the LB.
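
One way to craft such a request, assuming the client sits on the same L2 segment as a cluster node (the interface name and MAC below are placeholders):

# Pin the LB IP to a cluster node's MAC so the frame is delivered to the node directly,
# bypassing the cloud load balancer.
> ip neigh replace 104.40.197.150 lladdr 02:42:ac:11:00:02 dev eth0

# With the deployment scaled to 0, this should now fail fast with a REJECT instead of
# hanging, and no SYN_SENT conntrack entry should linger on the node.
> curl --connect-timeout 5 http://104.40.197.150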

@nikopen (Member) commented Mar 4, 2019

/milestone v1.14

@k8s-ci-robot k8s-ci-robot added this to the v1.14 milestone Mar 4, 2019

@thockin thockin force-pushed the thockin:proxy-reject-lb-no-endpoints branch from 26d3382 to 67cb489 Mar 5, 2019

@thockin (Member, Author) commented Mar 5, 2019

Happy to name you as co-author (inasmuch as that's a thing).

We need automated testing for this. I have been jumping between tasks and have not had much time for this one. I think I finally figured out why my earlier approach was not working. If this run fails the way I expect, then we can proceed.

@thockin thockin force-pushed the thockin:proxy-reject-lb-no-endpoints branch 2 times, most recently from e6eca1f to 2bf827b Mar 6, 2019

Fix small race in e2e
Occasionally we get spurious errors about "no route to host" when we
race with kube-proxy.  This should reduce that.  It's mostly just log
noise.

@thockin thockin force-pushed the thockin:proxy-reject-lb-no-endpoints branch 3 times, most recently from a15bf3c to 19db0e3 Mar 11, 2019

thockin added some commits Mar 8, 2019

Retool HTTP and UDP e2e utils
This is a prefactoring for followup changes that need to use very
similar but subtly different tests.  Now it is more generic, though it
pushes a little logic up the stack.  That makes sense to me.
Kube-proxy: REJECT LB IPs with no endpoints
We REJECT every other case.  Close this FIXME.

To get this to work in all cases, we have to process services in
filter.INPUT, since LB IPs might be managed as local addresses.
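
For context, the end result is a filter-table REJECT along these lines (the chain name, match, and comment are illustrative; the authoritative rules are whatever kube-proxy actually programs, visible via iptables-save on a node):

# Illustrative only: reject new connections to a LoadBalancer IP whose Service has no
# endpoints. Placing it in the filter table means it also covers locally delivered
# (INPUT) traffic when the LB IP is assigned to the node itself.
> iptables -t filter -A KUBE-SERVICES -d 104.40.197.150/32 -p tcp -m tcp --dport 80 \
      -m comment --comment "default/nginx has no endpoints" \
      -j REJECT --reject-with icmp-port-unreachable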

@thockin thockin force-pushed the thockin:proxy-reject-lb-no-endpoints branch from 19db0e3 to a2fb074 Mar 12, 2019

@thockin (Member, Author) commented Mar 12, 2019

Tests pass. Will revert the test-enable changes.

@thockin thockin force-pushed the thockin:proxy-reject-lb-no-endpoints branch from a2fb074 to de25d6c Mar 12, 2019

@thockin (Member, Author) commented Mar 12, 2019

I have re-pushed. PTAL.

@danwinship (Contributor) commented Mar 12, 2019

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm label Mar 12, 2019

@thockin (Member, Author) commented Mar 12, 2019

/retest

@k8s-ci-robot k8s-ci-robot merged commit f33e5e8 into kubernetes:master Mar 12, 2019

16 of 17 checks passed

pull-kubernetes-kubemark-e2e-gce-big: Job triggered.
cla/linuxfoundation: thockin authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-conformance-image-test: Skipped.
pull-kubernetes-cross: Skipped.
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-100-performance: Job succeeded.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-godeps: Skipped.
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-local-e2e: Skipped.
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.
pull-publishing-bot-validate: Skipped.
tide: In merge pool.
@thockin (Member, Author) commented Mar 12, 2019

/retest

@thockin (Member, Author) commented Mar 13, 2019

@m1093782566 Can you take a look at IPVS and see if it suffers similarly? (#48719 is the symptom.)

@thockin (Member, Author) commented Mar 29, 2019

@m1093782566 please?

@m1093782566 (Member) commented Mar 30, 2019

@thockin

I don't think IPVS has the same issue.

k8s-ci-robot added a commit that referenced this pull request Apr 18, 2019

Merge pull request #76529 from spencerhance/automated-cherry-pick-of-…
…#72534-#74394-upstream-release-1.12

Automated cherry pick of #72534: kube-proxy: rename internal field for clarity #74394: Fix small race in e2e

feiskyer added a commit to feiskyer/kubernetes that referenced this pull request Apr 22, 2019

Revert "Merge pull request kubernetes#76529 from spencerhance/automat…
…ed-cherry-pick-of-#72534-kubernetes#74394-upstream-release-1.12"

This reverts commit 535e3ad, reversing
changes made to 336d787.

rjaini added a commit to msazurestackworkloads/kubernetes that referenced this pull request May 6, 2019

Getting 1.12.8 Upstream Code (#20)
* Fix kubernetes#73479 AWS NLB target groups missing tags

`elbv2.AddTags` doesn't seem to support assigning the same set of
tags to multiple resources at once leading to the following error:
  Error adding tags after modifying load balancer targets:
  "ValidationError: Only one resource can be tagged at a time"

This can happen when using AWS NLB with multiple listeners pointing
to different node ports.

When k8s creates a NLB it creates a target group per listener along
with installing security group ingress rules allowing the traffic to
reach the k8s nodes.

Unfortunately if those target groups are not tagged, k8s will not
manage them, thinking it is not the owner.

This small changes assigns tags one resource at a time instead of
batching them as before.

Signed-off-by: Brice Figureau <brice@daysofwonder.com>

* remove get azure accounts in the init process; set timeout for the get azure account operation

use const for timeout value

remove get azure accounts in the init process

add lock for account init

* add timeout in GetVolumeLimits operation

add timeout for getAllStorageAccounts

* add mixed protocol support for azure load balancer

* record event on endpoint update failure

* fix parse devicePath issue on Azure Disk

* Kubernetes version v1.12.7-beta.0 openapi-spec file updates

* add retry for detach azure disk

add more logging info in detach disk

add more logging for azure disk attach/detach

* Add/Update CHANGELOG-1.12.md for v1.12.6.

* Reduce cardinality of admission webhook metrics

* fix negative slice index error in keymutex

* Remove reflector metrics as they currently cause a memory leak

* Explicitly set GVK when sending objects to webhooks

* add Azure Container Registry anonymous repo support

apply fix for msi and fix test failure

* DaemonSet e2e: Update image and rolling upgrade test timeout

Use Nginx as the DaemonSet image instead of the ServeHostname image.
This was changed because the ServeHostname has a sleep after terminating
which makes it incompatible with the DaemonSet Rolling Upgrade e2e test.

In addition, make the DaemonSet Rolling Upgrade e2e test timeout a
function of the number of nodes that make up the cluster. This is
required because the more nodes there are, the longer the time it will
take to complete a rolling upgrade.

Signed-off-by: Alexander Brand <alexbrand09@gmail.com>

* Revert kubelet to default to ttl cache secret/configmap behavior

* cri_stats_provider: overload nil as 0 for exited containers stats

Always report 0 cpu/memory usage for exited containers to make
metrics-server work as expected.

Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>

* flush iptable chains first and then remove them

While cleaning up ipvs mode, flush the iptables chains first and then
remove them. This avoids trying to remove chains that are still
referenced by rules in other chains.

fixes kubernetes#70615

* Checks whether we have cached runtime state before starting a container that requests any device plugin resource. If not, re-issue Allocate grpc calls. This allows us to handle the edge case that a pod got assigned to a node even before the node populates its extended resource capacity.

* Fix panic in kubectl cp command

* Augmenting API call retry in nodeinfomanager

* Bump debian-iptables to v11.0.1. Rebase docker image on debian-base:0.4.1

* Adding a check to make sure UseInstanceMetadata flag is true to get data from metadata.

* GetMountRefs fixed to handle corrupted mounts by treating it like an unmounted volume

* Update Cluster Autoscaler version to 1.12.3

* add module 'nf_conntrack' in ipvs prerequisite check

* Allow disable outbound snat when Azure standard load balancer is used

* Ensure Azure load balancer cleaned up on 404 or 403

* fix smb unmount issue on Windows

fix log warning

use IsCorruptedMnt in GetMountRefs on Windows

use errorno in IsCorruptedMnt check

fix comments: add more error code

add more error no checking

change year

fix comments

fix bazel error

fix bazel

fix bazel

fix bazel

revert bazel change

* kubelet: updated logic of verifying a static critical pod

- check if a pod is static by its static pod info
- meanwhile, check if a pod is critical by its corresponding mirror pod info

* Allow session affinity a period of time to setup for new services.

This is to deal with the flaky session affinity test.

* Restore username and password kubectl flags

* build/gci: bump CNI version to 0.7.5

* fix race condition issue for smb mount on windows

change var name

* allows configuring NPD release and flags on GCI and add cluster e2e test

* allows configuring NPD image version in node e2e test and fix the test

* bump repd min size in e2es

* Kubernetes version v1.12.8-beta.0 openapi-spec file updates

* Add/Update CHANGELOG-1.12.md for v1.12.7.

* stop vsphere cloud provider from spamming logs with `failed to patch IP`. Fixes: kubernetes#75236

* Do not delete existing VS and RS when starting

* Fix updating 'currentMetrics' field for HPA with 'AverageValue' target

* Populate ClientCA in delegating auth setup

kubernetes#67768 accidentally removed population of the ClientCA
in the delegating auth setup code.  This restores it.

* Update gcp images with security patches

[stackdriver addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes.
[fluentd-gcp addon] Bump fluentd-gcp-scaler to v0.5.1 to pick up security fixes.
[fluentd-gcp addon] Bump event-exporter to v0.2.4 to pick up security fixes.
[fluentd-gcp addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes.
[metadata-proxy addon] Bump prometheus-to-sd to v0.5.0 to pick up security fixes.

* Fix AWS driver fails to provision specified fsType

* Updated regional PD minimum size; changed regional PD failover test to use StorageClassTest to generate PVC template

* Bump debian-iptables to v11.0.2

* Avoid panic in cronjob sorting

This change handles the case where the ith cronjob may have its start
time set to nil.

Previously, the Less method could cause a panic in case the ith
cronjob had its start time set to nil, but the jth cronjob did not. It
would panic when calling Before on a nil StartTime.

* Add volume mode downgrade test: should not mount/map in <1.13

* disable HTTP2 ingress test

* ensuring that logic is checking for differences in listener

* Use Node-Problem-Detector v0.6.3 on GCI

* Delete only unscheduled pods if node doesn't exist anymore.

* proxy: Take into account exclude CIDRs while deleting legacy real servers

* Increase default maximumLoadBalancerRuleCount to 250

* kube-proxy: rename internal field for clarity

* kube-proxy: rename vars for clarity, fix err str

* kube-proxy: rename field for congruence

* kube-proxy: reject 0 endpoints on forward

Previously we only REJECTed on OUTPUT, which works for packets from the
node but not for packets from pods on the node.

* kube-proxy: remove old cleanup rules

* Kube-proxy: REJECT LB IPs with no endpoints

We REJECT every other case.  Close this FIXME.

To get this to work in all cases, we have to process services in
filter.INPUT, since LB IPs might be managed as local addresses.

* Retool HTTP and UDP e2e utils

This is a prefactoring for followup changes that need to use very
similar but subtly different tests.  Now it is more generic, though it
pushes a little logic up the stack.  That makes sense to me.

* Fix small race in e2e

Occasionally we get spurious errors about "no route to host" when we
race with kube-proxy.  This should reduce that.  It's mostly just log
noise.

* Fix Azure SLB support for multiple backend pools

Azure VM and vmssVM support multiple backend pools for the same SLB, but
not for different LBs.

* Revert "Merge pull request kubernetes#76529 from spencerhance/automated-cherry-pick-of-#72534-kubernetes#74394-upstream-release-1.12"

This reverts commit 535e3ad, reversing
changes made to 336d787.

k8s-ci-robot added a commit that referenced this pull request May 7, 2019

Merge pull request #76516 from spencerhance/automated-cherry-pick-of-…
…#72534-#74394-upstream-release-1.13

Automated cherry pick of #72534: kube-proxy: rename internal field for clarity #74394: Fix small race in e2e