
No network access from second node (multiple nodes cluster) #8966

Closed
mrbenosborne opened this issue Aug 11, 2020 · 8 comments
Labels
co/docker-driver: Issues related to kubernetes in container
co/multinode: Issues related to multinode clusters
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@mrbenosborne

Steps to reproduce the issue:

Start Minikube with 2 nodes

# Start minikube
minikube start --driver=docker --nodes 2 -p minikube
kubectl label nodes minikube nodeType=standard
kubectl label nodes minikube-m02 nodeType=high-mem
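
To confirm the labels landed on the intended nodes before deploying, kubectl can print the nodeType label as a column (a quick sanity check added here, not part of the original report):

# Show the nodeType label for every node
kubectl get nodes -L nodeType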

Apply the deployments

kubectl apply -f ./echo-server.yaml

echo-server.yaml

The echoserver deployment is pinned via a nodeSelector to the node labeled nodeType=standard, and the debug deployment to the node labeled nodeType=high-mem.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - image: k8s.gcr.io/echoserver:1.4
        imagePullPolicy: IfNotPresent
        name: echoserver
        ports:
        - containerPort: 8080
      nodeSelector:
        nodeType: standard
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: debug
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debug
  template:
    metadata:
      labels:
        app: debug
    spec:
      containers:
      - image: k8s.gcr.io/echoserver:1.4
        imagePullPolicy: IfNotPresent
        name: echoserver
        ports:
        - containerPort: 8080
      nodeSelector:
        nodeType: high-mem

Get pods

kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
debug-86cf8b5b6b-qtlzm        1/1     Running   0          10m
echoserver-6fb448987b-jpcwx   1/1     Running   0          21m
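
Adding -o wide to the same command shows which node each pod was scheduled onto, confirming the nodeSelector constraints were honored (a hedged verification step, not output from the original report):

# NODE column should show minikube for echoserver and minikube-m02 for debug
kubectl get pods -o wide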

Issue

When I exec into the echoserver pod and run:

apt-get update

the command runs successfully. However, if I run the same command in the debug pod, which is on node 2, it fails to resolve the package hosts.

Is there something I need to enable on minikube, or on node 2, to allow outbound network access?
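
One way to narrow this down (a debugging sketch, not part of the original report, and assuming the echoserver image ships nslookup and ping) is to test DNS resolution and raw IP connectivity separately from inside the debug pod; if the IP probe succeeds while resolution fails, the problem is DNS rather than routing:

# Can the pod on node 2 resolve names?
kubectl exec deploy/debug -- nslookup archive.ubuntu.com
# Can it reach the internet by raw IP, bypassing DNS?
kubectl exec deploy/debug -- ping -c 1 8.8.8.8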

@afbjorklund afbjorklund added co/docker-driver Issues related to kubernetes in container co/multinode Issues related to multinode clusters labels Aug 12, 2020
@priyawadhwa priyawadhwa added the kind/support Categorizes issue or PR as a support question. label Aug 12, 2020
@priyawadhwa priyawadhwa added the kind/bug Categorizes issue or PR as related to a bug. label Sep 8, 2020
@priyawadhwa

Hey @mrbenosborne, thanks for opening this issue. I'm not very familiar with multinode, but I think we have merged some network-related fixes in the past few weeks. Would you mind upgrading to minikube v1.13.0 and seeing if that fixes the issue?
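
For anyone retesting, a clean reproduction on the newer version needs a fresh cluster, since upgrading the minikube binary does not rebuild existing nodes (a sketch of the retest, not commands from the thread):

# Recreate the two-node cluster on the upgraded minikube
minikube delete -p minikube
minikube start --driver=docker --nodes 2 -p minikube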

@priyawadhwa priyawadhwa added the triage/needs-information Indicates an issue needs more information in order to work on it. label Sep 8, 2020
@mrbenosborne
Author

Hi @priyawadhwa.

I updated minikube but I am still hitting the same issue:

[screenshot showing the same failure]

@medyagh
Member

medyagh commented Oct 1, 2020

We will need to investigate this. CC: @sharifelgamal

@sharifelgamal sharifelgamal added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed kind/support Categorizes issue or PR as a support question. triage/needs-information Indicates an issue needs more information in order to work on it. labels Oct 14, 2020
@priyawadhwa priyawadhwa added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Dec 28, 2020
@rejohnst

FWIW, I still see this issue on Minikube 1.16.0 (Linux, kvm2 driver). Please let me know if there's any information I can provide to help debug this.
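
Diagnostics that would typically help with a report like this (a suggestion, not something requested in the thread; the node name minikube-m02 is assumed) are the full minikube logs plus a look at the DNS configuration on the second node:

# Dump logs to a file and inspect resolv.conf on the second node
minikube logs --file=logs.txt
minikube ssh -n minikube-m02 "cat /etc/resolv.conf"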

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 17, 2021
@rejohnst

rejohnst commented Jun 17, 2021

I retested this on Minikube 1.21.0 (Linux, kvm2 driver) and the issue seems to be fixed. I created a 4 node cluster and all nodes had network access.
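
The retest described above maps to roughly the following commands (a reconstruction under the hypothetical profile name multinode-test, not the exact commands used; busybox nslookup is assumed to be available in the node image):

# 4-node cluster on kvm2, then probe DNS from every node
minikube start --driver=kvm2 --nodes 4 -p multinode-test
for node in multinode-test multinode-test-m02 multinode-test-m03 multinode-test-m04; do
  minikube ssh -p multinode-test -n "$node" "nslookup google.com"
done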

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 17, 2021
@spowelljr
Member

Like @rejohnst said, this seems to be resolved. I tried the listed steps on v1.22.0 and the debug pod had network access, so I'm closing this issue.
