Add e2e test for type SRV kube-dns probes for IPv4 and IPv6 #53806

Closed
leblancd opened this Issue Oct 12, 2017 · 3 comments

leblancd commented Oct 12, 2017

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug
/kind feature
/area ipv6
/sig network

What happened:
Recent changes were made to kubeadm and to kube-dns to modify the kube-dns sidecar's health probes so that they issue DNS queries of type SRV rather than type A:

  • PR #51378
  • kubernetes/dns PR #151

These changes make the probe config agnostic to whether kube-proxy in the cluster is serving IPv4 or IPv6 service addresses. (IPv4 and IPv6 operation of kube-proxy are mutually exclusive at the moment, pending the addition of dual-stack support.)

Given these changes, e2e test case(s) need to be added to cover kube-dns sidecar probes of type SRV, plus optionally an IPv6-specific case exercising probes of type AAAA.
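For context, the family-agnostic behavior is easy to see with a plain SRV lookup. Below is a minimal, standalone Go sketch (not the sidecar's actual code): an SRV answer carries a port and a target name rather than an address of a specific family, so an SRV-based probe succeeds on both IPv4 and IPv6 clusters where a type-A probe would not. The probe flag format mentioned in the comment is illustrative; see kubernetes/dns PR #151 for the real format.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// probeSRV issues a DNS query of type SRV for name, mimicking what an
// SRV-based health probe checks: that the resolver answers at all, not
// that it returns an address of a particular family. A type-A probe, by
// contrast, fails outright on an IPv6-only cluster.
//
// (Illustrative: the sidecar itself is configured with a flag along the
// lines of --probe=kubedns,127.0.0.1:10053,<name>,5,SRV.)
func probeSRV(name string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	// Empty service/proto makes LookupSRV query name directly as an SRV record.
	_, srvs, err := net.DefaultResolver.LookupSRV(ctx, "", "", name)
	if err != nil {
		return err
	}
	if len(srvs) == 0 {
		return fmt.Errorf("no SRV records for %s", name)
	}
	return nil
}

func main() {
	// Inside a cluster pod this name resolves via kube-dns.
	if err := probeSRV("kubernetes.default.svc.cluster.local"); err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("probe succeeded")
}
```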

What you expected to happen:
Expect e2e test coverage for kube-dns sidecar probes of type SRV.
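A rough sketch of what such a test case could assert, using only the standard library rather than the e2e framework's helpers, and assuming it runs where lookups resolve through kube-dns (e.g. exec'd inside a cluster pod); the service name is the one the sidecar probes are believed to target:

```go
package dnsprobe

import (
	"context"
	"net"
	"testing"
	"time"
)

// Assumed target name for illustration: the cluster's own API service.
const clusterService = "kubernetes.default.svc.cluster.local"

// TestSRVProbe mirrors the SRV-based sidecar probe: the lookup must
// succeed and return at least one record, regardless of address family.
func TestSRVProbe(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	_, srvs, err := net.DefaultResolver.LookupSRV(ctx, "", "", clusterService)
	if err != nil {
		t.Fatalf("SRV lookup for %s failed: %v", clusterService, err)
	}
	if len(srvs) == 0 {
		t.Fatalf("SRV lookup for %s returned no records", clusterService)
	}
}

// TestAAAAProbe is the optional IPv6-specific case: AAAA lookups should
// succeed on clusters running IPv6 service addresses, and are skipped
// elsewhere.
func TestAAAAProbe(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	ips, err := net.DefaultResolver.LookupIP(ctx, "ip6", clusterService)
	if err != nil || len(ips) == 0 {
		t.Skipf("no AAAA records for %s (cluster may be IPv4-only): %v", clusterService, err)
	}
}
```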

How to reproduce it (as minimally and precisely as possible):
Code inspection of current e2e test cases.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): Latest master
  • Cloud provider or hardware configuration: VirtualBox cluster, Ubuntu 16.04.1 host, CentOS 7 guests
  • OS (e.g. from /etc/os-release): CentOS 7
  • Kernel (e.g. uname -a): 3.10.0-514.26.2.el7.x86_64 #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

fejta-bot commented Jan 10, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale


fejta-bot commented Feb 10, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale


fejta-bot commented Mar 13, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
