
Cannot start multiple nginx pods on an edge node #623

Closed
fengjing1009 opened this issue May 31, 2019 · 3 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@fengjing1009

fengjing1009 commented May 31, 2019

What happened:
After the pod is scheduled successfully, it cannot be created normally on the edge node.
What you expected to happen:
After the pod is scheduled successfully, it should be created normally.
How to reproduce it (as minimally and precisely as possible):
After creating the first Deployment, create a second Deployment with a different name but the same image, and have it scheduled to the same node. The second pod stays stuck in the creating state; a command-level sketch is shown below.
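A minimal sketch of the reproduction steps, assuming the test.yaml and test1.yaml manifests listed further below:

# first deployment: the pod comes up Running on the edge node
kubectl apply -f test.yaml
# second deployment (different name, same image), scheduled to the same node:
# the pod stays in ContainerCreating
kubectl apply -f test1.yaml
kubectl get pods -o wide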
Anything else we need to know?:
I used Vagrant to create my two PoC nodes; my configuration file is below.

$ cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

ENV["LC_ALL"] = "en_US.UTF-8"

Vagrant.configure("2") do |config|
    (1..4).each do |i|  # loop over the node definitions
      config.vm.define "ubuntu#{i}" do |node|
        node.vm.box = "k8s" # base box
        node.ssh.insert_key = false
        node.vm.hostname = "ubuntu#{i}" # set the hostname
        node.vm.network "public_network"  # bridged network for the node's IP address
                # default router
        node.vm.provision "shell",
          run: "always",
          inline: "route add default gw 172.29.2.1"
        # delete default gw on eth0
        node.vm.provision "shell",
          run: "always",
          inline: "eval `route -n |awk '{ if ($8 ==\"enp0s3\" && $2 != \"0.0.0.0\") print \"route del default gw \" $2; }'`"
        node.vm.provision "shell", # provisioner
          inline: "echo hello from node #{i}" # provisioner contents
        node.vm.provider "virtualbox" do |v|  # provider
          v.cpus = 2 # number of CPUs
          v.customize ["modifyvm", :id, "--name", "ubuntu#{i}", "--memory", "2048"] # memory size
        end
        end
      end
    end
end

Environment:

  • KubeEdge version: 0.3

  • Hardware configuration:

  • OS (e.g. from /etc/os-release):Ubuntu 16.04.6 LTS (Xenial Xerus)

  • Kernel (e.g. uname -a):Linux ubuntu4 4.4.0-148-generic

  • Others:
Here is some information from my cluster while it was running:

root@ubuntu3:~# kubectl  get nodes
NAME        STATUS     ROLES    AGE   VERSION
10.0.2.15   Ready      <none>   56m   0.3.0-beta.0
ubuntu3     NotReady   master   59m   v1.14.2
root@ubuntu3:~# kubectl  get pods
NAME                                 READY   STATUS              RESTARTS   AGE
nginx-deployment-d86dfb797-dxztc     1/1     Running             0          55m
nginx1-deployment-798f77fdbc-c8zgc   0/1     ContainerCreating   0          56m
test-deployment-665cbd4d85-hbgpm     1/1     Running             0          56m
root@ubuntu3:~# kubectl  describe pods nginx1-deployment-798f77fdbc-c8zgc
Name:               nginx1-deployment-798f77fdbc-c8zgc
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               10.0.2.15/
Start Time:         Fri, 31 May 2019 03:32:39 +0000
Labels:             app=nginx1
                    pod-template-hash=798f77fdbc
Annotations:        <none>
Status:             Unknown
IP:
Controlled By:      ReplicaSet/nginx1-deployment-798f77fdbc
Containers:
  nginx1:
    Image:        nginx
    Port:         80/TCP
    Host Port:    88/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sbt4x (ro)
Conditions:
  Type    Status
  Ready   False
Volumes:
  default-token-sbt4x:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sbt4x
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  56m   default-scheduler  Successfully assigned default/nginx1-deployment-798f77fdbc-c8zgc to 10.0.2.15
  • Here are the YAML files for my three applications:
root@ubuntu3:~# cat test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.12
        ports:
        - containerPort: 80
          hostPort: 80
root@ubuntu3:~# cat test1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1-deployment
  labels:
    app: nginx1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx1
        image: nginx
        ports:
        - containerPort: 80
          hostPort: 88
root@ubuntu3:~# cat test2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas:
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: tomcat
        ports:
        - containerPort: 8080
          hostPort: 8080
@fengjing1009 fengjing1009 added the kind/bug Categorizes issue or PR as related to a bug. label May 31, 2019
@sids-b
Member

sids-b commented May 31, 2019

cc @m1093782566 @edisonxiang

@shouhong
Member

shouhong commented Jun 1, 2019

@fengjing1009 Since you created the issue in Chinese, I will restate the problem you described so others have the context.

Problem:
Not able to run multiple nginx pods (with exactly the same nginx image) on the same edge node.

How to reproduce it:
I tested the problem with the 3 yaml files you provided above (test.yaml, test1.yaml, test2.yaml). Creating a deployment with test.yaml first works fine. Then, when creating a deployment with test1.yaml or test2.yaml, the container cannot be created successfully on the edge node.

Reason:
The containers running on the edge node use the host network. This means:

  1. The containerPort and hostPort specified in the deployment should be the same. This is why test1.yaml cannot work. Please refer to the "Deploy Application" section of https://github.com/kubeedge/kubeedge/blob/master/docs/getting-started/usage.md.
  2. The same edge node cannot run two containers listening on the same port, because that causes a port conflict. This is why test2.yaml cannot work: it specifies the ports as 8080, but the nginx configuration in the nginx:1.15.12 image actually specifies port 80, so the container still starts on port 80, and that port is already used by the container started from test.yaml. (A quick way to check which host ports are already bound on the edge node is sketched after this list.)
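
One way to confirm the port conflict is to check which host ports are already bound on the edge node. A sketch, run on the edge node itself, with the port numbers taken from the yaml files above:

# list listening sockets and the processes that own them
ss -tlnp | grep -E ':(80|88|8080)'
# list the containers the edge runtime has started
docker ps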

How do you run 2 nginx pods on the same edge node?
The two pods have to listen on different ports. I created a new nginx image, shouhong/nginx-81:1.15.12, whose configuration listens on port 81. You can update test1.yaml/test2.yaml to use this image, set the containerPort and hostPort to 81, and try again. It should work well alongside test.yaml; a sketch of the updated manifest is shown below.
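
For reference, here is roughly what the updated test1.yaml would look like with the suggested image and port (a sketch of the change described above, not re-run against your cluster):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx1-deployment
  labels:
    app: nginx1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx1
        image: shouhong/nginx-81:1.15.12   # nginx configured to listen on port 81
        ports:
        - containerPort: 81   # must equal hostPort because edge pods use the host network
          hostPort: 81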

@fengjing1009
Author

Thanks for your reply, I will close this issue. @shouhong
