Filter nodePortAddresses to proxiers #89998

Merged
merged 1 commit into kubernetes:master from uablrek:issue-89923 on Jun 13, 2020

Conversation

uablrek
Contributor

@uablrek uablrek commented Apr 9, 2020

What type of PR is this?

/kind bug

What this PR does / why we need it:

To avoid faulty entries in IPVS when both IPv4 and IPv6 addresses are used in "nodePortAddresses".

The bug may also cause other, yet-unseen faults.

Which issue(s) this PR fixes:

Fixes #89923

Special notes for your reviewer:

I did not see any faults for proxy-mode=iptables, but the same filtering might be appropriate there as well.

Does this PR introduce a user-facing change?:

NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. do-not-merge/invalid-commit-message Indicates that a PR should not merge because it has an invalid commit message. kind/bug Categorizes issue or PR as related to a bug. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. area/ipvs sig/network Categorizes an issue or PR as relevant to SIG Network. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Apr 9, 2020
@uablrek
Contributor Author

uablrek commented Apr 9, 2020

With:

nodePortAddresses: ["::1/128","127.0.0.1/32"]

IPVS config:

$ ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  12.0.0.1:443 rr
  -> 192.168.1.1:6443             Masq    1      0          0         
TCP  12.0.19.107:443 rr
  -> 11.0.3.5:4443                Masq    1      0          0         
TCP  12.0.208.66:5001 rr
  -> 11.0.0.2:5001                Masq    1      0          0         
  -> 11.0.1.2:5001                Masq    1      0          0         
  -> 11.0.2.2:5001                Masq    1      0          0         
  -> 11.0.3.4:5001                Masq    1      0          0         
TCP  127.0.0.1:30100 rr
  -> 11.0.0.2:5001                Masq    1      0          0         
  -> 11.0.1.2:5001                Masq    1      0          0         
  -> 11.0.2.2:5001                Masq    1      0          0         
  -> 11.0.3.4:5001                Masq    1      0          0         
TCP  [fd00:4000::7710]:5001 rr
  -> [1100::2]:5001               Masq    1      0          0         
  -> [1100:0:0:1::2]:5001         Masq    1      0          0         
  -> [1100:0:0:2::2]:5001         Masq    1      0          0         
  -> [1100:0:0:3::4]:5001         Masq    1      0          0         
TCP  [::1]:30200 rr
  -> [1100::2]:5001               Masq    1      0          0         
  -> [1100:0:0:1::2]:5001         Masq    1      0          0         
  -> [1100:0:0:2::2]:5001         Masq    1      0          0         
  -> [1100:0:0:3::4]:5001         Masq    1      0          0         

@aojea
Member

aojea commented Apr 9, 2020

I think you should add it to the single-stack one too:

- nodePortAddresses:     nodePortAddresses,
+ nodePortAddresses:     filterCIDRs(isIPv6, nodePortAddresses),
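
For context, a minimal sketch of what the filterCIDRs helper suggested here might look like, inferred from this call site (the actual implementation in the ipvs proxier may differ):

import utilnet "k8s.io/utils/net"

// filterCIDRs returns the CIDR strings in cidrs that match the requested
// IP family (a sketch inferred from the call site; the real helper may differ).
func filterCIDRs(wantIPv6 bool, cidrs []string) []string {
	var filtered []string
	for _, cidr := range cidrs {
		if utilnet.IsIPv6CIDRString(cidr) == wantIPv6 {
			filtered = append(filtered, cidr)
		}
	}
	return filtered
}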

@uablrek
Contributor Author

uablrek commented Apr 9, 2020

@aojea You mean in case of user mistakes?

@uablrek
Contributor Author

uablrek commented Apr 9, 2020

The corresponding code in the iptables proxier:

ipv4Proxier, err := NewProxier(ipt[0], sysctl,
	exec, syncPeriod, minSyncPeriod, masqueradeAll, masqueradeBit, localDetectors[0], hostname,
	nodeIP[0], recorder, healthzServer, nodePortAddresses)
if err != nil {
	return nil, fmt.Errorf("unable to create ipv4 proxier: %v", err)
}
ipv6Proxier, err := NewProxier(ipt[1], sysctl,
	exec, syncPeriod, minSyncPeriod, masqueradeAll, masqueradeBit, localDetectors[1], hostname,
	nodeIP[1], recorder, healthzServer, nodePortAddresses)
if err != nil {
	return nil, fmt.Errorf("unable to create ipv6 proxier: %v", err)
}
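
Applying the same per-family filtering here might look like this (a sketch; only the last argument of each call changes, with false/true selecting IPv4/IPv6 and filterCIDRs as sketched above):

- 	nodeIP[0], recorder, healthzServer, nodePortAddresses)
+ 	nodeIP[0], recorder, healthzServer, filterCIDRs(false, nodePortAddresses))
- 	nodeIP[1], recorder, healthzServer, nodePortAddresses)
+ 	nodeIP[1], recorder, healthzServer, filterCIDRs(true, nodePortAddresses))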

@uablrek
Contributor Author

uablrek commented Apr 9, 2020

If the "filterCIDRs()" shall be used in the iptables proxier it should be moved to some "util" packade IMO to avoid duplicated code (and duplicated tests).

If so, where shall it go?

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/invalid-commit-message Indicates that a PR should not merge because it has an invalid commit message. label Apr 9, 2020
@aojea
Member

aojea commented Apr 9, 2020

@aojea You mean in case of user mistakes?

🤔 Well, I think you are right; defining IPv6 addresses in --nodeport-addresses in an IPv4 cluster is clearly a user mistake.

If the "filterCIDRs()" shall be used in the iptables proxier it should be moved to some "util" packade IMO to avoid duplicated code (and duplicated tests).

If so, where shall it go?

My apologies, I was assuming you were already using this one:

// FilterIncorrectCIDRVersion filters out the incorrect IP version case from a slice of CIDR strings.
func FilterIncorrectCIDRVersion(ipStrings []string, isIPv6Mode bool) ([]string, []string) {
	return filterWithCondition(ipStrings, isIPv6Mode, utilnet.IsIPv6CIDRString)
}
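
A usage sketch of this helper (assumption: the first return value holds the CIDRs matching the requested family and the second the mismatched ones, per the underlying filterWithCondition):

cidrs := []string{"::1/128", "127.0.0.1/32"}
// IPv4-only cluster, so isIPv6Mode == false:
correct, incorrect := FilterIncorrectCIDRVersion(cidrs, false)
// correct   == ["127.0.0.1/32"]  (kept)
// incorrect == ["::1/128"]       (a candidate for a logged warning)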

@uablrek
Contributor Author

uablrek commented Apr 9, 2020

@aojea Thanks for the pointer 😃 The "filterCIDRs()" is just below the updated code and is already used in the call.

I will remove "filterCIDRs()" to clean up the code and move the filtering to the place you pointed out. Then it can't go wrong, since "NewProxier()" is called for IPv4/IPv6 in dual-stack. I think it will be cleaner.

I will also apply the update in the iptables proxier, to avoid any yet-unseen problems.

@uablrek
Contributor Author

uablrek commented Apr 9, 2020

Hmm, IMHO "FilterIncorrectCIDRVersion()" is not intuitive: neither the name nor how it works. You must read the code to use it. I will use it in the iptables proxier, but keep "filterCIDRs()" in the ipvs proxier and let reviewers decide.

@uablrek uablrek changed the title from "Filter nodePortAddresses to the ipvs proxiers in dual-stack mode" to "Filter nodePortAddresses to proxiers" Apr 9, 2020
@uablrek
Contributor Author

uablrek commented Apr 9, 2020

The nodePortAddresses are filtered in "NewProxier()", as proposed by @aojea in #89998 (comment).

Not only does this catch user mistakes, which are probably common in the transition to dual-stack, and make the code a little cleaner; it also allows a common setting for ipv4-only, ipv6-only and dual-stack in our use case:

Use case

When K8s is deployed in a DC (in VMs or on bare metal) with no K8s-aware load-balancers, load-balancing to NodePorts cannot be used. The external addresses must be configured in the DC-GWs in some way, likely announced with BGP. kube-proxy then load-balances the incoming traffic.

This works fine, but since NodePorts are implicit for services with type: LoadBalancer (and can't be turned off), a number of ports above 30000 are opened but not used.

These ports show up on mandatory security scans, and a request to either describe their purpose or to close them is raised.

We can't close them, but by setting:

nodePortAddresses: ["::1/128","127.0.0.1/32"]

we do not expose them externally. They will not show up on port scans any more.
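
For reference, a minimal KubeProxyConfiguration sketch for this use case (other fields omitted; the nodePortAddresses value is the one quoted above):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
# Bind NodePorts to loopback only, so they are not reachable externally.
nodePortAddresses:
- "::1/128"
- "127.0.0.1/32"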

@uablrek
Contributor Author

uablrek commented Apr 10, 2020

As the robot suggested:
/assign @bowei

Member

@SataQiu SataQiu left a comment

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Apr 11, 2020
@uablrek
Contributor Author

uablrek commented May 15, 2020

Corrections after review are pushed.

True to:

It does not seem right to crash kube-proxy in case of a misconfigured NodePortAddresses.

a warning is now logged if NodePortAddresses of the wrong family are configured in a single-stack cluster:

W0515 07:37:53.620500     254 proxier.go:439] NodePortAddresses of wrong family; [::1/128]

In dual-stack, the addresses are filtered in NewDualStackProxier(), so each proxier instance only gets addresses of its own family (no warnings).
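
A sketch of what that single-stack check might look like (the placement and the utilproxy alias are assumptions; the warning text matches the log line above):

// Warn about and drop wrong-family CIDRs instead of crashing kube-proxy.
nodePortAddresses, incorrect := utilproxy.FilterIncorrectCIDRVersion(nodePortAddresses, isIPv6)
if len(incorrect) > 0 {
	klog.Warningf("NodePortAddresses of wrong family; %s", incorrect)
}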

@uablrek
Contributor Author

uablrek commented May 15, 2020

/test pull-kubernetes-integration

@uablrek
Contributor Author

uablrek commented May 15, 2020

/retest

@uablrek
Contributor Author

uablrek commented May 15, 2020

The problem does not seem to be related to this PR.
/retest

@uablrek
Contributor Author

uablrek commented May 15, 2020

/retest

@aojea
Member

aojea commented May 15, 2020

/lgtm
/retest

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label May 15, 2020
@uablrek
Contributor Author

uablrek commented May 18, 2020

/retest

@uablrek
Contributor Author

uablrek commented May 18, 2020

@aojea Can you please help, since you have good knowledge of flaky tests?

The pull-kubernetes-e2e-gci-gce-ipvs job keeps failing, and I can't see any way this PR could cause the failures. Unless there really are NodePortAddresses of the wrong family, this PR should not make any difference.

@aojea
Member

aojea commented May 18, 2020

/test pull-kubernetes-e2e-gci-gce-ipvs

Let me check again, but it seems the common test failures on all runs are:

[sig-storage] Flexvolumes should be mountable when non-attachable
[sig-storage] Flexvolumes should be mountable when attachable 

@aojea
Member

aojea commented May 18, 2020

It seems those tests have been failing for a long time:
https://prow.k8s.io/job-history/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-e2e-gci-gce-ipvs

In the meantime, let me fix the reporting in testgrid so we can notice it sooner next time (cc @andrewsykim).

@uablrek
Contributor Author

uablrek commented May 18, 2020

@aojea Thanks a lot!

@aojea
Member

aojea commented May 18, 2020

I'm also removing the storage tests from the ipvs jobs; I don't have time to debug the current failures, and we weren't tracking those tests. It is better to have only tests we care about, and to increase the scope later if necessary: kubernetes/test-infra#17623

@andrewsykim
Member

FYI #91253

@aojea I can investigate soon if you haven't started

@aojea
Member

aojea commented May 19, 2020

FYI #91253

@aojea I can investigate soon if you haven't started

I just did an initial check, but found nothing relevant. Please go ahead; I'll be watching the issue to check if I can help, but I'm a bit tied up this week.

@uablrek
Contributor Author

uablrek commented May 27, 2020

/retest

@uablrek
Contributor Author

uablrek commented May 27, 2020

Updated after review; please see #89998 (comment).

@liggitt Please have a new look at it.

The failing test is a problem with the test, not with this PR.

@uablrek
Contributor Author

uablrek commented Jun 12, 2020

/retest

@uablrek uablrek requested a review from liggitt June 12, 2020 06:26
Member

@thockin thockin left a comment

Thanks!

/lgtm
/approve

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: thockin, uablrek

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jun 12, 2020
@fejta-bot

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@k8s-ci-robot
Contributor

k8s-ci-robot commented Jun 13, 2020

@uablrek: The following test failed, say /retest to rerun all failed tests:

Test name: pull-kubernetes-e2e-gci-gce-ipvs
Commit: f54b8f9
Rerun command: /test pull-kubernetes-e2e-gci-gce-ipvs

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot k8s-ci-robot merged commit 35fc65d into kubernetes:master Jun 13, 2020
@k8s-ci-robot k8s-ci-robot added this to the v1.19 milestone Jun 13, 2020
@uablrek uablrek deleted the issue-89923 branch April 1, 2021 11:07
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. area/ipvs cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/bug Categorizes issue or PR as related to a bug. lgtm "Looks good to me", indicates that a PR is ready to be merged. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. release-note-none Denotes a PR that doesn't merit a release note. sig/network Categorizes an issue or PR as relevant to SIG Network. size/S Denotes a PR that changes 10-29 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

DualStack: Kube-proxy/ipvs; nodePortAddresses with both ipv6 and ipv4 causes invalid ipvs entries
10 participants