
NSX-T 2.4.x & K8S - PART 4

Home Page

Table Of Contents

Current State
NSX Container Plugin (NCP) Installation
NSX Node Agent Installation
Test Workload Deployment

Current State

Back to Table of Contents

K8S Cluster

Previously, in Part 3, the K8S cluster was successfully formed using kubeadm.


root@k8s-master:/home/vmware# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   15m   v1.14.1
k8s-node1    Ready    <none>   50s   v1.14.1
k8s-node2    Ready    <none>   14s   v1.14.1

The namespaces that are provisioned by default can be seen using the following kubectl command.


root@k8s-master:~# kubectl get namespaces
NAME              STATUS   AGE
default           Active   4h36m
kube-node-lease   Active   4h36m
kube-public       Active   4h36m
kube-system       Active   4h36m
root@k8s-master:~#

To see which infrastructure Pods are automatically provisioned during the initialization of the K8S cluster, the following command can be used.


root@k8s-master:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS              RESTARTS   AGE
kube-system   coredns-fb8b8dccf-b592z              0/1     ContainerCreating   0          4h27m
kube-system   coredns-fb8b8dccf-j66fg              0/1     ContainerCreating   0          4h27m
kube-system   etcd-k8s-master                      1/1     Running             0          4h26m
kube-system   kube-apiserver-k8s-master            1/1     Running             0          4h26m
kube-system   kube-controller-manager-k8s-master   1/1     Running             0          4h26m
kube-system   kube-proxy-bk7rs                     1/1     Running             0          19m
kube-system   kube-proxy-j4p5f                     1/1     Running             0          4h27m
kube-system   kube-proxy-mkm4w                     1/1     Running             0          44m
kube-system   kube-scheduler-k8s-master            1/1     Running             0          4h26m
root@k8s-master:~#

Notice "coredns-xxx" Pods are stuck in "ContainerCreating" phase, the reason is although kubelet agent on K8S worker Node sent a request to NSX-T CNI Plugin module to start provisioning the individual network interface for these Pods, since the NSX Node Agent is not installed on the K8S worker nodes yet (Nor the NSX Container Plugin for attaching NSX-T management plane to K8S API) , kubelet can not move forward with the Pod creation.

In the previous output, the kube-proxy Pods can be ignored, as their functionality will be replaced by the NSX Kube Proxy container (in the NSX Node Agent Pod). Kube-proxy is explained in Part 5.

Below is a revisit of the NSX-T and K8S integration architecture (which was covered in Part 2 of this series).


NSX-T Topology

The current state of the topology is still the same as the one created in Part 1, shown below. The only difference is that the K8S cluster is now deployed, hence the infrastructure Pods are scheduled on the K8S nodes.

The screenshots below from the NSX-T GUI show the current configuration of the dataplane in NSX-T.

LOGICAL SWITCHES

LOGICAL ROUTERS

IP POOLS

LOAD BALANCER

No load balancers exist yet.

FIREWALL

Only the two empty firewall sections and the default section exist in the rule base.

NSX Container Plugin Installation

Back to Table of Contents

Once again, the NSX Container Plugin (NCP) image file in the NSX container folder that was previously copied to each K8S node will be used in this section.

Load The Docker Image for NSX NCP (and NSX Node Agent) on K8S Nodes

For the commands below, "sudo" can be used with each command, or privileges can be escalated to root in advance with "sudo -H bash".

On each K8S node, navigate to the "/home/vmware/nsx-container-2.4.1.13515827/Kubernetes" folder, then execute the following command to load the image into the local Docker repository of that node.

NSX Container Plugin (NCP) and NSX Node Agent Pods use the same container image.


root@k8s-master:/home/vmware/nsx-container-2.4.1.13515827/Kubernetes# docker load -i nsx-ncp-ubuntu-2.4.1.13515827.tar
c854e44a1a5a: Loading layer [==================================================>]  132.8MB/132.8MB
8ba4b4ea187c: Loading layer [==================================================>]  15.87kB/15.87kB
46c98490f575: Loading layer [==================================================>]  9.728kB/9.728kB
1633f88f8c9f: Loading layer [==================================================>]  4.608kB/4.608kB
0e20f4f8a593: Loading layer [==================================================>]  3.072kB/3.072kB
29ee2462776b: Loading layer [==================================================>]  3.072kB/3.072kB
09df119f61a0: Loading layer [==================================================>]  10.84MB/10.84MB
d2445ae12a7e: Loading layer [==================================================>]  28.16kB/28.16kB
c02b8962769c: Loading layer [==================================================>]  284.7kB/284.7kB
3465892d0467: Loading layer [==================================================>]  11.26kB/11.26kB
9a6fc128cdcf: Loading layer [==================================================>]  1.625MB/1.625MB
0ed84005a093: Loading layer [==================================================>]  7.168kB/7.168kB
502420413898: Loading layer [==================================================>]   1.23MB/1.23MB
c30860d2ecd5: Loading layer [==================================================>]    171kB/171kB
8d69b3ad3ee8: Loading layer [==================================================>]  392.4MB/392.4MB
Loaded image: registry.local/2.4.1.13515827/nsx-ncp-ubuntu:latest
root@k8s-master:/home/vmware/nsx-container-2.4.1.13515827/Kubernetes#

Make sure the image is now in the local Docker repository:


root@k8s-master:/home/vmware/nsx-container-2.4.1.13515827/Kubernetes# docker images
REPOSITORY                                     TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                          v1.14.2             5c24210246bb        12 days ago         82.1MB
k8s.gcr.io/kube-apiserver                      v1.14.2             5eeff402b659        12 days ago         210MB
k8s.gcr.io/kube-controller-manager             v1.14.2             8be94bdae139        12 days ago         158MB
k8s.gcr.io/kube-scheduler                      v1.14.2             ee18f350636d        12 days ago         81.6MB
registry.local/2.4.1.13515827/nsx-ncp-ubuntu   latest              5714a979b290        4 weeks ago         518MB
k8s.gcr.io/coredns                             1.3.1               eb516548c180        4 months ago        40.3MB
k8s.gcr.io/etcd                                3.3.10              2c4adeb21b4f        5 months ago        258MB
k8s.gcr.io/pause                               3.1                 da86e6ba6ca1        17 months ago       742kB
root@k8s-master:/home/vmware/nsx-container-2.4.1.13515827/Kubernetes#

Make sure to update the image name from "nsx-ncp-ubuntu" to "nsx-ncp", since the yaml files for both NCP and NSX Node Agent refer to the image name as "nsx-ncp".


root@k8s-master:/home/vmware/nsx-container-2.4.1.13515827/Kubernetes#  docker tag registry.local/2.4.1.13515827/nsx-ncp-ubuntu:latest nsx-ncp:latest
root@k8s-master:/home/vmware/nsx-container-2.4.1.13515827/Kubernetes#

Verify that the new image name is present.


root@k8s-master:/home/vmware# docker images
REPOSITORY                                     TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                          v1.14.2             5c24210246bb        2 weeks ago         82.1MB
k8s.gcr.io/kube-apiserver                      v1.14.2             5eeff402b659        2 weeks ago         210MB
k8s.gcr.io/kube-controller-manager             v1.14.2             8be94bdae139        2 weeks ago         158MB
k8s.gcr.io/kube-scheduler                      v1.14.2             ee18f350636d        2 weeks ago         81.6MB
nsx-ncp                                        latest              5714a979b290        5 weeks ago         518MB
registry.local/2.4.1.13515827/nsx-ncp-ubuntu   latest              5714a979b290        5 weeks ago         518MB
k8s.gcr.io/coredns                             1.3.1               eb516548c180        4 months ago        40.3MB
k8s.gcr.io/etcd                                3.3.10              2c4adeb21b4f        6 months ago        258MB
k8s.gcr.io/pause                               3.1                 da86e6ba6ca1        17 months ago       742kB

Creating NSX Specific K8S Resources

For better isolation and security, the NSX infrastructure Pods (NSX Container Plugin (NCP) and NSX Node Agent) will run in their own dedicated K8S namespace, and a K8S Role Based Access Control (RBAC) policy will be applied to that namespace.

A single yml file, which is included here, will be used to implement the following steps (a condensed sketch of the resource kinds involved follows the list):

create a dedicated K8S namespace, as "nsx-system", for NCP and Node Agent Pods
create a service account, as "ncp-svc-account", for NCP
create a service account, as "nsx-node-agent-svc-account", for Node agent
create a cluster role, as "ncp-cluster-role", for NCP (with specific API access)
create a cluster role, as "ncp-patch-role", for NCP (with specific API access)
bind "ncp-svc-account" to "ncp-cluster-role"
bind "ncp-svc-account" to "ncp-patch-role"
create a cluster role, as "nsx-node-agent-cluster-role", for NSX Node Agent (with specific API access)
bind "nsx-node-agent-svc-account" to "nsx-node-agent-cluster-role"


root@k8s-master:~# kubectl create -f https://raw.githubusercontent.com/dumlutimuralp/nsx-t-k8s/master/Yaml/nsx-ncp-rbac.yml
namespace/nsx-system created
serviceaccount/ncp-svc-account created
clusterrole.rbac.authorization.k8s.io/ncp-cluster-role created
clusterrole.rbac.authorization.k8s.io/ncp-patch-role created
clusterrolebinding.rbac.authorization.k8s.io/ncp-cluster-role-binding created
clusterrolebinding.rbac.authorization.k8s.io/ncp-patch-role-binding created
serviceaccount/nsx-node-agent-svc-account created
clusterrole.rbac.authorization.k8s.io/nsx-node-agent-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/nsx-node-agent-cluster-role-binding created
root@k8s-master:~#
root@k8s-master:~#

Verify that the new namespace is created.


root@k8s-master:~# kubectl get namespaces
NAME              STATUS   AGE
default           Active   5h22m
kube-node-lease   Active   5h22m
kube-public       Active   5h22m
kube-system       Active   5h22m
nsx-system        Active   5m47s
root@k8s-master:~#

Verify that the service account and role bindings are successfully configured.


root@k8s-master:/home/vmware# kubectl get sa -n nsx-system
NAME                         SECRETS   AGE
default                      1         5d19h
ncp-svc-account              1         5d19h
nsx-node-agent-svc-account   1         5d19h
root@k8s-master:/home/vmware#

root@k8s-master:/home/vmware# kubectl describe sa ncp-svc-account -n nsx-system
Name:                ncp-svc-account
Namespace:           nsx-system
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   ncp-svc-account-token-czzwc
Tokens:              ncp-svc-account-token-czzwc
Events:              <none>
root@k8s-master:/home/vmware#

root@k8s-master:/home/vmware# kubectl get clusterrolebinding -n nsx-system
NAME                                                   AGE
cluster-admin                                          6d
kubeadm:kubelet-bootstrap                              6d
kubeadm:node-autoapprove-bootstrap                     6d
kubeadm:node-autoapprove-certificate-rotation          6d
kubeadm:node-proxier                                   6d
ncp-cluster-role-binding                               5d19h
ncp-patch-role-binding                                 5d19h
nsx-node-agent-cluster-role-binding                    5d19h
|
|
 Output Omitted
|
|
system:kube-dns                                        6d
system:kube-scheduler                                  6d
system:node                                            6d
system:node-proxier                                    6d
system:public-info-viewer                              6d
system:volume-scheduler                                6d
root@k8s-master:/home/vmware#

root@k8s-master:/home/vmware# kubectl describe clusterrolebinding ncp-cluster-role-binding -n nsx-system
Name:         ncp-cluster-role-binding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  ncp-cluster-role
Subjects:
  Kind            Name             Namespace
  ----            ----             ---------
  ServiceAccount  ncp-svc-account  nsx-system
root@k8s-master:/home/vmware#

The "nsx-ncp-rbac.yml" is put together by Yasen Simeonov (Senior Technical Product Manager at VMware) which is published here originally.

The same yml file is also published in the VMware NSX-T 2.4 Installation Guide here (WITHOUT the "nsx-system" namespace resource though, hence the namespace needs to be created manually, as shown below, if the yml file from the installation guide is used).
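
If the version from the installation guide is used, the namespace can be created beforehand with a single command:

kubectl create namespace nsx-system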

Deploy NSX Container Plugin (NCP)

Another yml file, "ncp-deployment.yml", will be used to deploy the NSX Container Plugin. This yml file is provided in the NSX Container Plugin zip file that was downloaded from the My VMware portal. (It is also included here.)

However, before moving forward, NSX-T specific environment parameters need to be configured. The yml file contains a configmap for the ncp.ini file used by NCP. Most of the parameters are commented out with a "#" character; the definition of each parameter is documented in the yml file itself.

The "ncp-deployment.yml" file can simply be edited with a text editor. The parameters in the file that are used in this environment has "#" removed. Below is a list and explanation of each :

cluster = k8s-cluster1 : Used to identify the NSX-T objects that are provisioned for this K8S cluster. Notice that the K8S node logical ports in "K8SNodeDataPlaneLS" are tagged on the NSX-T side with "k8s-cluster1" under the "ncp/cluster" scope and with the hostname of each Ubuntu node under the "ncp/node_name" scope.

enable_snat = True : This parameter defines that all K8S Pods in each K8S namespace of this cluster will be SNATed (so that they can access resources in the datacenter external to the NSX domain). The SNAT rules will be automatically provisioned on the Tier 0 Router in this lab. The SNAT IP will be allocated from the IP pool named "K8S-NAT-Pool" that was configured back in Part 3.

apiserver_host_ip = 10.190.5.10 , apiserver_host_port = 6443 : These parameters are for NCP to access K8S API.

ingress_mode = nat : This parameter basically defines that NSX will use SNAT/DNAT rules for K8S ingress (L7 HTTPS/HTTP load balancing) to access the K8S service at the backend.

nsx_api_managers = 10.190.1.80 , nsx_api_user = admin , nsx_api_password = XXXXXXXXXXXXXX : These parameters are for NCP to access/consume the NSX Manager.

insecure = True : NSX Manager server certificate is not verified.

top_tier_router = T0-K8S-Domain : The name of the Logical Router that will be used for implementing SNAT rules for the Pods in the K8S Namespaces.

overlay_tz = TZ-Overlay : The name of the existing overlay transport zone that will be used for creating new logical switches/segments for K8S namespaces and container networking.

subnet_prefix = 24 : The size of the IP pools for the namespaces that will be carved out from the main "K8S-POD-IP-BLOCK" configured in Part 3 (172.25.0.0/16). Whenever a new K8S namespace is created, a /24 IP pool will be allocated from that IP block.

use_native_loadbalancer = True : This setting is to use the NSX-T load balancer for K8S services of Type: LoadBalancer. Whenever a new K8S service is exposed with Type: LoadBalancer, a VIP will be provisioned on the NSX-T load balancer attached to a Tier 1 Logical Router dedicated to the LB function. The VIP will be allocated from the IP pool named "K8S-LB-Pool" that was configured back in Part 3.

default_ingress_class_nsx = True : This is to use the NSX-T load balancer for K8S ingress (L7 HTTP/HTTPS load balancing), instead of other solutions such as NGINX, HAProxy, etc. Whenever a K8S ingress object is created, a Layer 7 rule will be configured on the NSX-T load balancer.

service_size = 'SMALL' : This setting configures a small sized NSX-T Load Balancer for the K8S cluster. Options are Small/Medium/Large. This is the Load Balancer instance which is attached to a dedicated Tier 1 Logical Router in the topology.

container_ip_blocks = K8S-POD-IP-BLOCK : This setting defines the IP block from which each K8S namespace will carve its IP pool/address space (172.25.0.0/16 in this case). The size of each namespace pool is defined with the subnet_prefix parameter above.

external_ip_pools = K8S-NAT-Pool : This setting defines the IP pool from which each SNAT IP will be allocated. Whenever a new K8S namespace is created, a NAT IP will be allocated from this pool (10.190.7.100 to 10.190.7.150 in this case).

external_ip_pools_lb = K8S-LB-Pool : This setting defines the IP pool from which each K8S service of Type: LoadBalancer will allocate its IP (10.190.6.100 to 10.190.6.150 in this case).

top_firewall_section_marker = Section1 and bottom_firewall_section_marker = Section2 : These specify between which two sections the K8S-orchestrated firewall rules will be placed.

One additional change made in the yml file is removing the "#" from the line "serviceAccountName: ncp-svc-account", so that the NCP Pod has the appropriate role and access to K8S cluster resources.
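
For reference, the customization boils down to uncommenting and setting the keys below inside the ncp.ini data of the ConfigMap in "ncp-deployment.yml" (the flat grouping here is purely illustrative; each key stays in the ini section where it already appears, commented out, in the shipped file):

cluster = k8s-cluster1
enable_snat = True
apiserver_host_ip = 10.190.5.10
apiserver_host_port = 6443
ingress_mode = nat
nsx_api_managers = 10.190.1.80
nsx_api_user = admin
nsx_api_password = XXXXXXXXXXXXXX
insecure = True
top_tier_router = T0-K8S-Domain
overlay_tz = TZ-Overlay
subnet_prefix = 24
use_native_loadbalancer = True
default_ingress_class_nsx = True
service_size = 'SMALL'
container_ip_blocks = K8S-POD-IP-BLOCK
external_ip_pools = K8S-NAT-Pool
external_ip_pools_lb = K8S-LB-Pool
top_firewall_section_marker = Section1
bottom_firewall_section_marker = Section2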

The edited yml file, "ncp-deployment-custom.yml" in this case, can now be deployed from anywhere. In this environment the file is copied to the /home/vmware folder on the K8S master node and deployed in the "nsx-system" namespace with the following command.


root@k8s-master:/home/vmware# kubectl create -f ncp-deployment-custom.yml --namespace=nsx-system
configmap/nsx-ncp-config created
deployment.extensions/nsx-ncp created
root@k8s-master:/home/vmware#
root@k8s-master:/home/vmware# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS              RESTARTS   AGE
kube-system   coredns-fb8b8dccf-b592z              0/1     ContainerCreating   0          5h45m
kube-system   coredns-fb8b8dccf-j66fg              0/1     ContainerCreating   0          5h45m
kube-system   etcd-k8s-master                      1/1     Running             0          5h44m
kube-system   kube-apiserver-k8s-master            1/1     Running             0          5h44m
kube-system   kube-controller-manager-k8s-master   1/1     Running             0          5h44m
kube-system   kube-proxy-bk7rs                     1/1     Running             0          97m
kube-system   kube-proxy-j4p5f                     1/1     Running             0          5h45m
kube-system   kube-proxy-mkm4w                     1/1     Running             0          122m
kube-system   kube-scheduler-k8s-master            1/1     Running             0          5h44m
nsx-system    nsx-ncp-7f65bbf6f6-mr29b             1/1     Running             0          18s
root@k8s-master:/home/vmware#

As NCP is deployed through a Deployment/ReplicaSet (replicas: 1 is specified in the deployment yml), K8S will make sure that a single NCP Pod is running and healthy at any given time.
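
Since replicas: 1 is specified, the Deployment can be queried at any time to confirm that exactly one NCP replica is ready (output omitted here):

kubectl get deployment nsx-ncp --namespace=nsx-system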

Notice the changes to the existing logical switches/segments, Tier 1 Logical Routers, and the Load Balancer below. All these newly created objects have been provisioned by NCP (as soon as the NCP Pod was successfully deployed) by identifying the K8S desired state and mapping the K8S resources in etcd to NSX-T logical networking constructs.

LOGICAL SWITCHES

LOGICAL ROUTERS

IP POOLS per Namespace

SNAT Pool

SNAT RULES

LOAD BALANCER

VIRTUAL SERVERS for INGRESS on LOAD BALANCER

FIREWALL RULEBASE

Notice also that the CoreDNS Pods are still in the ContainerCreating phase. The reason is that the NSX Node Agent (which is responsible for connecting the Pods to a logical switch) is not installed on the K8S worker nodes yet (next step).

NSX Node Agent Installation

Back to Table of Contents

"nsx-node-agent-ds.yml" will be used to deploy NSX Node Agent. This yml file is also provided in the content of the NSX Container Plugin zip file that was downloaded from My.VMware portal.

This yml file also contains a configmap for the configuration of the ncp.ini file used by the NSX Node Agent. The "nsx-node-agent-ds.yml" file can simply be edited with a text editor. The following parameters need to be configured:

apiserver_host_ip = 10.190.5.10 , apiserver_host_port = 6443 : These parameters are for NSX Node Agent to access K8S API.

"#" is removed from the line with "serviceAccountname:..." so that role based access control can properly be applied for NSX Node Agent as well.

The edited yml file, "nsx-node-agent-ds-custom.yml" in this case, can now be deployed from anywhere. In this environment the file is copied to the /home/vmware folder on the K8S master node and deployed in the "nsx-system" namespace with the following command.


root@k8s-master:/home/vmware# kubectl create -f nsx-node-agent-ds-custom.yml --namespace=nsx-system

As the NSX Node Agent is deployed as a DaemonSet, it will run on each worker node in the K8S cluster.


root@k8s-master:/home/vmware# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS              RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-fb8b8dccf-b592z              0/1     ContainerCreating   0          6h      <none>        k8s-master   <none>           <none>
kube-system   coredns-fb8b8dccf-j66fg              0/1     ContainerCreating   0          6h      <none>        k8s-master   <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running             0          5h59m   10.190.5.10   k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running             0          5h59m   10.190.5.10   k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running             0          5h59m   10.190.5.10   k8s-master   <none>           <none>
kube-system   kube-proxy-bk7rs                     1/1     Running             0          112m    10.190.5.12   k8s-node2    <none>           <none>
kube-system   kube-proxy-j4p5f                     1/1     Running             0          6h      10.190.5.10   k8s-master   <none>           <none>
kube-system   kube-proxy-mkm4w                     1/1     Running             0          137m    10.190.5.11   k8s-node1    <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running             0          5h59m   10.190.5.10   k8s-master   <none>           <none>
nsx-system    nsx-ncp-7f65bbf6f6-mr29b             1/1     Running             0          14m     10.190.5.12   k8s-node2    <none>           <none>
nsx-system    nsx-node-agent-2tjb7                 2/2     Running             0          24s     10.190.5.12   k8s-node2    <none>           <none>
nsx-system    nsx-node-agent-nqwgx                 2/2     Running             0          24s     10.190.5.11   k8s-node1    <none>           <none>
root@k8s-master:/home/vmware#

Note : "-o wide" provides which Pod <=> Node mapping in the output

Notice yet again that the coredns Pods are still in the ContainerCreating state. At this stage, simply delete those two coredns Pods; the K8S scheduler will recreate them, and both will successfully get attached to the respective overlay network on the NSX-T side.


root@k8s-master:/home/vmware# kubectl delete pod/coredns-fb8b8dccf-b592z --namespace=kube-system
pod "coredns-fb8b8dccf-b592z" deleted
root@k8s-master:/home/vmware# kubectl delete pod/coredns-fb8b8dccf-j66fg --namespace=kube-system
pod "coredns-fb8b8dccf-j66fg" deleted
root@k8s-master:/home/vmware# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE     IP            NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-fb8b8dccf-fhn6q              1/1     Running   0          3m40s   172.25.4.4    k8s-node1    <none>           <none>
kube-system   coredns-fb8b8dccf-wqndw              1/1     Running   0          88s     172.25.4.3    k8s-node2    <none>           <none>
kube-system   etcd-k8s-master                      1/1     Running   0          6h4m    10.190.5.10   k8s-master   <none>           <none>
kube-system   kube-apiserver-k8s-master            1/1     Running   0          6h4m    10.190.5.10   k8s-master   <none>           <none>
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          6h4m    10.190.5.10   k8s-master   <none>           <none>
kube-system   kube-proxy-bk7rs                     1/1     Running   0          117m    10.190.5.12   k8s-node2    <none>           <none>
kube-system   kube-proxy-j4p5f                     1/1     Running   0          6h5m    10.190.5.10   k8s-master   <none>           <none>
kube-system   kube-proxy-mkm4w                     1/1     Running   0          142m    10.190.5.11   k8s-node1    <none>           <none>
kube-system   kube-scheduler-k8s-master            1/1     Running   0          6h4m    10.190.5.10   k8s-master   <none>           <none>
nsx-system    nsx-ncp-7f65bbf6f6-mr29b             1/1     Running   0          20m     10.190.5.12   k8s-node2    <none>           <none>
nsx-system    nsx-node-agent-2tjb7                 2/2     Running   0          5m35s   10.190.5.12   k8s-node2    <none>           <none>
nsx-system    nsx-node-agent-nqwgx                 2/2     Running   0          5m35s   10.190.5.11   k8s-node1    <none>           <none>
root@k8s-master:/home/vmware#

At this stage the topology looks like this:

Test Workload Deployment

Back to Table of Contents

Let's create a new namespace.


root@k8s-master:/home/vmware# kubectl create namespace demons
namespace/demons created
root@k8s-master:/home/vmware#

A new logical switch is created for the "demons" namespace, shown below.

NCP not only creates the above constructs but also tags them with the appropriate metadata, shown below

For instance, in the above output, the "project id" is the UUID of the "demons" K8S namespace, which can be verified as below:


root@k8s-master:/home/vmware# kubectl get ns demons -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2019-06-03T23:54:01Z"
  name: demons
  resourceVersion: "773201"
  selfLink: /api/v1/namespaces/demons
  uid: dcb423b5-865a-11e9-a2fc-005056b42e41
spec:
  finalizers:
  - kubernetes
status:
  phase: Active

A new logical router is also created for the "demons" namespace, shown below.

A new IP Pool is also allocated from K8S-POD-IP-BLOCK, shown below

IP Pool allocated for the namespace is also tagged with metadata, shown below

An SNAT IP is allocated from the K8S-NAT-Pool (for all the Pods in the demons namespace), and the respective NAT rule is automatically configured on the Tier 0 Logical Router (T0-K8S-Domain).

Deploy a sample app in the namespace (the imperative way).


root@k8s-master:/home/vmware# kubectl run nsxtestapp --image=dumlutimuralp/nsx-demo --replicas=2 --namespace=demons
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nsxtestapp created
root@k8s-master:/home/vmware# 

Note : Notice the deprecation message in the output; K8S recommends the declarative way of creating Pods. Also notice the message "deployment.apps/nsxtestapp created": kubectl run creates a Deployment object, which in this case consists of two Pods, so a Deployment named "nsxtestapp" is created automatically. A declarative equivalent is sketched below.
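
As a point of comparison only, a declarative equivalent of the imperative command above would be a Deployment manifest along the lines of the sketch below (the "run: nsxtestapp" label matches what the kubectl run generator produces, as visible in the Pod yaml later in this section). Applying it with "kubectl apply -f" would yield the same Deployment and two Pods.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nsxtestapp
  namespace: demons
spec:
  replicas: 2
  selector:
    matchLabels:
      run: nsxtestapp
  template:
    metadata:
      labels:
        run: nsxtestapp
    spec:
      containers:
      - name: nsxtestapp
        image: dumlutimuralp/nsx-demo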

Verify that the Pods are created and allocated IPs from the appropriate IP pool.


root@k8s-master:/home/vmware# kubectl get pods -o wide --namespace=demons
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
nsxtestapp-5bfcc97b5-n5wbz   1/1     Running   0          11m   172.25.5.3   k8s-node2   <none>           <none>
nsxtestapp-5bfcc97b5-ppkqd   1/1     Running   0          11m   172.25.5.2   k8s-node1   <none>           <none>
root@k8s-master:/home/vmware#

Verify that there is a deployment object named "nsxtestapp" in the demons namespace.


root@k8s-master:/home/vmware# kubectl get deployment -n demons
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
nsxtestapp   2/2     2            2           13m
root@k8s-master:/home/vmware#

Now the topology looks like the one below.

In the topology, the Pods connected to the "demons" logical switch are shown as "nsx-demo1" and "nsx-demo2"; in the lab, those Pods are provisioned as "nsxtestapp-xxxxx".

The logical port for each Pod shows up in the NSX-T UI, shown below.

Let's look at the tags that are associated with that logical port as metadata.

As shown above, the K8S namespace name, K8S Pod name and UUID (can be verified on K8S below) are carried over to NSX-T as metadata.


root@k8s-master:/home/vmware# kubectl get pod/nsxtestapp-5bfcc97b5-n5wbz -o yaml --namespace=demons
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2019-06-04T00:23:32Z"
  generateName: nsxtestapp-5bfcc97b5-
  labels:
    pod-template-hash: 5bfcc97b5
    run: nsxtestapp
  name: nsxtestapp-5bfcc97b5-n5wbz
  namespace: demons
|
|
Output Omitted
|
|
  resourceVersion: "776038"
  selfLink: /api/v1/namespaces/demons/pods/nsxtestapp-5bfcc97b5-n5wbz
  uid: fc30d02f-865e-11e9-a2fc-005056b42e41
spec:
  containers:
  - image: dumlutimuralp/nsx-demo
    imagePullPolicy: Always
    name: nsxtestapp
|
|
Output Omitted
|
|

Finally, let's check the IP connectivity from the Pod to resources external to the NSX domain.

Perform the command below to get a shell in one of the Pods


root@k8s-master:/home/vmware# kubectl exec -it nsxtestapp-5bfcc97b5-n5wbz /bin/bash --namespace=demons
root@nsxtestapp-5bfcc97b5-n5wbz:/app# 

Check the IP address of the Pod


root@nsxtestapp-5bfcc97b5-n5wbz:/app# ip addr
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ovs-gretap0@NONE:  mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
3: erspan0@NONE:  mtu 1450 qdisc noop state DOWN group default qlen 1000
    link/ether 36:89:34:84:cd:91 brd ff:ff:ff:ff:ff:ff
4: gre0@NONE:  mtu 1476 qdisc noop state DOWN group default qlen 1
    link/gre 0.0.0.0 brd 0.0.0.0
5: ovs-ip6gre0@NONE:  mtu 1448 qdisc noop state DOWN group default qlen 1
    link/gre6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00
6: ovs-ip6tnl0@NONE:  mtu 1452 qdisc noop state DOWN group default qlen 1
    link/tunnel6 :: brd ::
50: eth0@if51:  mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 02:50:56:00:28:03 brd ff:ff:ff:ff:ff:ff
 inet 172.25.5.3/24 scope global eth0
       valid_lft forever preferred_lft forever
root@nsxtestapp-5bfcc97b5-n5wbz:/app# 

Ping the external physical router from the Pod


root@nsxtestapp-5bfcc97b5-n5wbz:/app# ping 10.190.4.1
PING 10.190.4.1 (10.190.4.1): 56 data bytes
64 bytes from 10.190.4.1: icmp_seq=0 ttl=62 time=3.047 ms
64 bytes from 10.190.4.1: icmp_seq=1 ttl=62 time=1.534 ms
64 bytes from 10.190.4.1: icmp_seq=2 ttl=62 time=1.130 ms
64 bytes from 10.190.4.1: icmp_seq=3 ttl=62 time=1.044 ms
64 bytes from 10.190.4.1: icmp_seq=4 ttl=62 time=1.957 ms
64 bytes from 10.190.4.1: icmp_seq=5 ttl=62 time=1.417 ms
^C--- 10.190.4.1 ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.044/1.688/3.047/0.676 ms
root@nsxtestapp-5bfcc97b5-n5wbz:/app#

To identify the containers per K8S node, the logical port that the K8S worker node's vNIC2 is connected to can also be investigated, as shown below.

Container ports show up as below.

Back to Table of Contents