
Unable to access NodePort externally using host's IP in AWS kubernetes cluster deployed with kops #50261

Closed
jwickens opened this issue Aug 7, 2017 · 6 comments
Labels
kind/bug · lifecycle/stale

Comments

@jwickens

jwickens commented Aug 7, 2017

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

I have a NodePort service and a pod set up in this AWS cluster:

> kubectl describe svc my-nodeport-service
Name:                   my-node-port-service
Namespace:              default
Labels:                 <none>
Selector:               service=my-selector
Type:                   NodePort
IP:                     100.71.211.249
Port:                   <unset> 80/TCP
NodePort:               <unset> 30176/TCP
Endpoints:              100.96.2.11:3000
Session Affinity:       None
Events:                 <none>

> kubectl describe pods my-nodeport-pod
Name:           my-nodeport-pod
Node:           <ip>.eu-west-1.compute.internal/<ip>
Labels:         service=my-selector
Status:         Running
IP:             100.96.2.11
Containers:
  update-center:
    Port:               3000/TCP
    Ready:              True
    Restart Count:      0

When I curl the externally available AWS instance IP on the given node port, the connection hangs.

What you expected to happen:

I expect to be able to curl the externally available AWS instance IP on the given node port and get the correct response.

How to reproduce it (as minimally and precisely as possible):

  • set up a default kops cluster on AWS
  • create a pod and a NodePort service for it
  • curl the instance's external IP on the allocated node port (see the sketch below)
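
A concrete sketch of these steps, assuming kubectl; the image and resource names are placeholders and are not taken from the original report:

$ kubectl run my-nodeport-pod --image=<image-listening-on-3000> --port=3000 \
    --labels="service=my-selector" --restart=Never
$ kubectl expose pod my-nodeport-pod --name=my-nodeport-service \
    --type=NodePort --port=80 --target-port=3000
$ kubectl get svc my-nodeport-service -o jsonpath='{.spec.ports[0].nodePort}'   # allocated node port
$ curl http://<node-external-ip>:<allocated-node-port>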

Anything else we need to know?:

Working locally on the node

(ssh into node)
$ sudo netstat -nap | grep 30176
tcp6       0      0 :::30176                :::*                    LISTEN      2093/kube-proxy
$ curl 172.20.62.89:30176
Ok

iptables rules on the node

Chain INPUT (policy ACCEPT 3368 packets, 1645K bytes)
 pkts bytes target     prot opt in     out     source               destination
3842K 2078M KUBE-SERVICES  all  --  any    any     anywhere             anywhere             /* kubernetes service portals */
3863K 2182M KUBE-FIREWALL  all  --  any    any     anywhere             anywhere

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DOCKER-ISOLATION  all  --  any    any     anywhere             anywhere
    0     0 DOCKER     all  --  any    docker0  anywhere             anywhere
    0     0 ACCEPT     all  --  any    docker0  anywhere             anywhere             ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  docker0 !docker0  anywhere             anywhere
    0     0 ACCEPT     all  --  docker0 docker0  anywhere             anywhere

Chain OUTPUT (policy ACCEPT 3348 packets, 2306K bytes)
 pkts bytes target     prot opt in     out     source               destination
3858K 2772M KUBE-SERVICES  all  --  any    any     anywhere             anywhere             /* kubernetes service portals */
3867K 2775M KUBE-FIREWALL  all  --  any    any     anywhere             anywhere

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination

Chain DOCKER-ISOLATION (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 RETURN     all  --  any    any     anywhere             anywhere

Chain KUBE-FIREWALL (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DROP       all  --  any    any     anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000

Chain KUBE-SERVICES (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 REJECT     tcp  --  any    any     anywhere             100.71.67.98         /* kube-system/draftd:http has no endpoints */ tcp dpt:http reject-with icmp-port-unreachable
    0     0 REJECT     tcp  --  any    any     anywhere             100.67.124.227       /* postgres/db: has no endpoints */ tcp dpt:postgresql reject-with icmp-port-unreachable
    0     0 REJECT     tcp  --  any    any     anywhere             100.64.243.75        /* kube-system/tiller-deploy:tiller has no endpoints */ tcp dpt:44134 reject-with icmp-port-unreachable

Kops issue:
kubernetes/kops#3146

ALB controller issue:
kubernetes-sigs/aws-load-balancer-controller#169

Environment:

  • Kubernetes version (use kubectl version): 1.7.3
  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): Debian GNU/Linux 8 (jessie)
  • Kernel (e.g. uname -a): Linux ip-172-20-47-68 4.4.65-k8s #1 SMP Tue May 2 15:48:24 UTC 2017 x86_64 GNU/Linux
  • Install tools: Kops 1.7.0
  • Others:
@k8s-github-robot added the needs-sig label Aug 7, 2017
@jwickens
Author

jwickens commented Aug 7, 2017

@kubernetes/sig-aws-bugs

@k8s-ci-robot added the sig/aws and kind/bug labels Aug 7, 2017
@k8s-ci-robot
Contributor

@jwickens: Reiterating the mentions to trigger a notification:
@kubernetes/sig-aws-bugs.

In response to this:

@kubernetes/sig-aws-bugs

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-github-robot removed the needs-sig label Aug 7, 2017
@jwickens
Author

jwickens commented Aug 7, 2017

I've also tried specifying

[...]
metadata:
    name: my-service
    annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
[...]

to the service, as well as specifying the node host's public IP as an externalIP on the service, but had no luck with either of these.
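
For reference, a minimal sketch of the externalIP attempt, assuming kubectl patch; the IP is a placeholder, not from the original report:

$ kubectl patch svc my-nodeport-service \
    -p '{"spec":{"externalIPs":["<node-public-ip>"]}}'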

@jwickens
Author

jwickens commented Aug 8, 2017

I found a resolution to my problem: it involved configuring the security group (firewall) rules in AWS. Fuller answer here: https://stackoverflow.com/questions/45543694/kubernetes-cluster-on-aws-with-kops-nodeport-service-unavailable/45561848#45561848

I suggest adding this hint to the documentation on NodePorts.
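
A sketch of that kind of fix using the AWS CLI, assuming the security group attached to the worker nodes; the group ID and source CIDR are placeholders:

$ aws ec2 authorize-security-group-ingress \
    --group-id <nodes-security-group-id> \
    --protocol tcp \
    --port 30000-32767 \
    --cidr <trusted-source-cidr>

Opening the whole default NodePort range (30000-32767) to a trusted CIDR is the usual approach; restricting the rule to the single allocated node port (30176 here) also works.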

@xiangpengzhao
Contributor

/cc @justinsb

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 11, 2018