
Nodes goes in Not Ready State after load testing #102

Closed
bytesofdhiren opened this issue Dec 27, 2017 · 24 comments

Comments

@bytesofdhiren

I was doing load testing in the AKS cluster. Many times, after firing heavy load at the cluster, the nodes go into the "Not Ready" state and never return to the "Ready" state.

What is the resolution to this problem? How can I bring the nodes back?

@bytesofdhiren bytesofdhiren changed the title Nodes going in Not Ready State Nodes goes in Not Ready State after load testing Dec 27, 2017
@slack slack added the bug label Jan 4, 2018
@slack
Contributor

slack commented Jan 4, 2018

A few questions:

  1. What type of load testing were you running? Were you putting pressure on the cloud provider (adding/removing load balancers, provisioning/detaching disks)?
  2. Do you have logs from the kubelets in your cluster? I would be curious to see whether they logged any errors around connectivity to the apiserver.

@bytesofdhiren
Author

It was an intensive memory and CPU operation. After that, I added memory and CPU limits on all the pods, and the issue no longer reproduces. But in any case, once a pod is in the "Ready" state it should never go to "NotReady".
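Limits like those described above are set per container in the pod spec. A minimal illustrative fragment (the name, image, and values here are hypothetical, not from this thread):

```yaml
# Illustrative only: requests/limits keep a single workload from
# consuming all node CPU/memory and starving the kubelet.
apiVersion: v1
kind: Pod
metadata:
  name: load-test-app   # hypothetical name
spec:
  containers:
  - name: app
    image: nginx:stable
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
```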

I don't have logs from it, since the cluster was unresponsive and I had to delete it.

@marcel-dempers

I have this exact issue, and it is easy to reproduce. I run AKS cluster 1 with a pod whose external IP is exposed by a Kubernetes Service. I run a second cluster, AKS 2, where I run JMeter. From JMeter on AKS 2, I fire 1500 requests/sec at AKS 1, where my service lives, and the nodes become NotReady:

kubectl get nodes
NAME                       STATUS           AGE       VERSION
aks-nodepool1-35008895-0   NotReady,agent   24d       v1.8.2
aks-nodepool1-35008895-1   NotReady,agent   24d       v1.8.2

When I use kubectl describe, I see the following conditions on the node:

Conditions:
  Type			Status		LastHeartbeatTime			LastTransitionTime			Reason				Message
  ----			------		-----------------			------------------			------				-------
  NetworkUnavailable 	False 		Fri, 19 Jan 2018 12:53:00 +1100 	Fri, 19 Jan 2018 12:53:00 +1100 	RouteCreated 			RouteController created a route
  OutOfDisk 		False 		Mon, 12 Feb 2018 17:49:42 +1100 	Fri, 19 Jan 2018 12:52:35 +1100 	KubeletHasSufficientDisk 	kubelet has sufficient disk space available
  MemoryPressure 	Unknown 	Mon, 12 Feb 2018 17:49:42 +1100 	Mon, 12 Feb 2018 17:50:40 +1100 	NodeStatusUnknown 		Kubelet stopped posting node status.
  DiskPressure 		Unknown 	Mon, 12 Feb 2018 17:49:42 +1100 	Mon, 12 Feb 2018 17:50:40 +1100 	NodeStatusUnknown 		Kubelet stopped posting node status.
  Ready 		Unknown 	Mon, 12 Feb 2018 17:49:42 +1100 	Mon, 12 Feb 2018 17:50:40 +1100 	NodeStatusUnknown 		Kubelet stopped posting node status.

I am also in the process of adding resource restrictions to my deployments, but I would expect the cluster to recover after such a scenario regardless.

I have attached the events JSON from the kubectl cluster-info dump command.
events.json.zip

I am unable to get anything out of Heapster:

Error from server (BadRequest): the server rejected our request for an unknown reason (get pods heapster-75667786bb-rtl4r)

kube-system status:

kubectl get pods -n kube-system
NAME                                    READY     STATUS     RESTARTS   AGE
heapster-75667786bb-rtl4r               2/2       Unknown    6          24d
heapster-75667786bb-vkcs9               0/2       Pending    0          25m
kube-dns-v20-6c8f7f988b-ggm8w           0/3       Pending    0          25m
kube-dns-v20-6c8f7f988b-grflt           3/3       Unknown    9          24d
kube-dns-v20-6c8f7f988b-mzqbk           3/3       Unknown    9          24d
kube-dns-v20-6c8f7f988b-z2bch           0/3       Pending    0          25m
kube-proxy-hchrl                        1/1       NodeLost   3          24d
kube-proxy-xckjx                        1/1       NodeLost   3          24d
kube-svc-redirect-ht4bm                 1/1       NodeLost   44         24d
kube-svc-redirect-kvkv9                 1/1       NodeLost   45         24d
kubernetes-dashboard-6fc8cf9586-lvp8n   1/1       Unknown    47         24d
kubernetes-dashboard-6fc8cf9586-qdnh8   0/1       Pending    0          25m
tunnelfront-654c57cd9c-4x2zk            0/1       Pending    0          25m
tunnelfront-654c57cd9c-n7g87            1/1       Unknown    3          24d

Hope this info helps you troubleshoot further if needed.
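The conditions tables above come from each node's `.status.conditions`, which `kubectl get node <name> -o json` exposes. A minimal sketch (not from this thread; sample data mirrors the conditions shown above) of flagging the unhealthy ones:

```python
# Sample data shaped like .status.conditions from `kubectl get node -o json`,
# mirroring the NotReady node described above.
sample_conditions = [
    {"type": "NetworkUnavailable", "status": "False"},
    {"type": "OutOfDisk", "status": "False"},
    {"type": "MemoryPressure", "status": "Unknown"},
    {"type": "DiskPressure", "status": "Unknown"},
    {"type": "Ready", "status": "Unknown"},
]

def unhealthy(conditions):
    """Return condition types indicating a problem.

    Ready must be "True"; the pressure/availability conditions must be
    "False". "Unknown" (kubelet stopped posting status) is bad either way.
    """
    bad = []
    for c in conditions:
        if c["type"] == "Ready":
            if c["status"] != "True":
                bad.append(c["type"])
        elif c["status"] != "False":
            bad.append(c["type"])
    return bad

print(unhealthy(sample_conditions))  # ['MemoryPressure', 'DiskPressure', 'Ready']
```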

@dshimko

dshimko commented Feb 21, 2018

Same issue after scheduling more replicas than the available RAM could fit.

Conditions:
  Type                  Status          LastHeartbeatTime                       LastTransitionTime                      Reason                          Message
  ----                  ------          -----------------                       ------------------                      ------                          -------
  NetworkUnavailable    False           Fri, 19 Jan 2018 18:26:06 +0000         Fri, 19 Jan 2018 18:26:06 +0000         RouteCreated                    RouteController created a route
  OutOfDisk             False           Mon, 19 Feb 2018 21:34:46 +0000         Fri, 19 Jan 2018 18:25:43 +0000         KubeletHasSufficientDisk        kubelet has sufficient disk space available
  MemoryPressure        Unknown         Mon, 19 Feb 2018 21:34:46 +0000         Mon, 19 Feb 2018 21:35:39 +0000         NodeStatusUnknown               Kubelet stopped posting node status.
  DiskPressure          Unknown         Mon, 19 Feb 2018 21:34:46 +0000         Mon, 19 Feb 2018 21:35:39 +0000         NodeStatusUnknown               Kubelet stopped posting node status.
  Ready                 Unknown         Mon, 19 Feb 2018 21:34:46 +0000         Mon, 19 Feb 2018 21:35:39 +0000         NodeStatusUnknown               Kubelet stopped posting node status.

@mooperd

mooperd commented Mar 21, 2018

One of my nodes seems to be hitting a similar problem.

meow@kubrick:~/dev/kube-system$ kubectl get nodes
NAME                       STATUS     ROLES     AGE       VERSION
aks-nodepool1-34207704-0   NotReady   agent     2d        v1.8.7
aks-nodepool1-34207704-1   Ready      agent     2d        v1.8.7
aks-nodepool1-34207704-2   Ready      agent     2d        v1.8.7
meow@kubrick:~/dev/kube-system$ kubectl describe node aks-nodepool1-34207704-0
Name:               aks-nodepool1-34207704-0
Roles:              agent
Labels:             agentpool=nodepool1
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/fluentd-ds-ready=true
                    beta.kubernetes.io/instance-type=Standard_DS1_v2
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=eastus
                    failure-domain.beta.kubernetes.io/zone=1
                    kubernetes.azure.com/cluster=MC_k8s-testing-aa_k8s-testing-aa-1_eastus
                    kubernetes.io/hostname=aks-nodepool1-34207704-0
                    kubernetes.io/role=agent
                    storageprofile=managed
                    storagetier=Premium_LRS
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Sun, 18 Mar 2018 13:13:30 +0100
Conditions:
  Type                 Status    LastHeartbeatTime                 LastTransitionTime                Reason                     Message
  ----                 ------    -----------------                 ------------------                ------                     -------
  NetworkUnavailable   False     Sun, 18 Mar 2018 13:15:06 +0100   Sun, 18 Mar 2018 13:15:06 +0100   RouteCreated               RouteController created a route
  OutOfDisk            False     Wed, 21 Mar 2018 12:16:48 +0100   Sun, 18 Mar 2018 13:13:30 +0100   KubeletHasSufficientDisk   kubelet has sufficient disk space available
  MemoryPressure       Unknown   Wed, 21 Mar 2018 12:16:48 +0100   Wed, 21 Mar 2018 12:17:29 +0100   NodeStatusUnknown          Kubelet stopped posting node status.
  DiskPressure         Unknown   Wed, 21 Mar 2018 12:16:48 +0100   Wed, 21 Mar 2018 12:17:29 +0100   NodeStatusUnknown          Kubelet stopped posting node status.
  Ready                Unknown   Wed, 21 Mar 2018 12:16:48 +0100   Wed, 21 Mar 2018 12:17:29 +0100   NodeStatusUnknown          Kubelet stopped posting node status.
Addresses:
  InternalIP:  10.240.0.4
  Hostname:    aks-nodepool1-34207704-0
Capacity:
 alpha.kubernetes.io/nvidia-gpu:  0
 cpu:                             1
 memory:                          3501600Ki
 pods:                            110
Allocatable:
 alpha.kubernetes.io/nvidia-gpu:  0
 cpu:                             1
 memory:                          3399200Ki
 pods:                            110
System Info:
 Machine ID:                 833e0926ee21aed71ec075d726cbcfe0
 System UUID:                8831AD2D-F08D-B646-BF5D-8BE8223630A4
 Boot ID:                    23445b11-3f5d-4a59-82d9-da2ef2ee25a6
 Kernel Version:             4.13.0-1007-azure
 OS Image:                   Debian GNU/Linux 8 (jessie)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.13.1
 Kubelet Version:            v1.8.7
 Kube-Proxy Version:         v1.8.7
PodCIDR:                     10.244.1.0/24
ExternalID:                  2dad3188-8df0-46b6-bf5d-8be8223630a4
Non-terminated Pods:         (11 in total)
  Namespace                  Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                               ------------  ----------  ---------------  -------------
  es                         elasticsearch-logging-v1-6pj9k     100m (10%)    1 (100%)    0 (0%)           0 (0%)
  es                         kibana-logging-6c56bdff64-wlgnx    100m (10%)    100m (10%)  0 (0%)           0 (0%)
  kube-system                fluentd-rkbz8                      100m (10%)    0 (0%)      200Mi (6%)       200Mi (6%)
  kube-system                kube-dns-v20-5bf84586f4-6bpw8      110m (11%)    0 (0%)      120Mi (3%)       220Mi (6%)
  kube-system                kube-dns-v20-5bf84586f4-fxhd8      110m (11%)    0 (0%)      120Mi (3%)       220Mi (6%)
  kube-system                kube-proxy-fttkb                   100m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-svc-redirect-fhhkc            0 (0%)        0 (0%)      0 (0%)           0 (0%)
  test-app-two               test-app-two-659cb68964-wchcc      0 (0%)        0 (0%)      0 (0%)           0 (0%)
  test-app                   test-app-68f4cc5d94-7gk8h          0 (0%)        0 (0%)      0 (0%)           0 (0%)
  test-fecore                test-fecore-5647458597-s54lw       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  test-fecore                test-fecore-857d988ff9-9sxl9       0 (0%)        0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits    Memory Requests  Memory Limits
  ------------  ----------    ---------------  -------------
  620m (62%)    1100m (110%)  440Mi (13%)      640Mi (19%)
Events:
  Type     Reason                            Age                  From                               Message
  ----     ------                            ----                 ----                               -------
  Warning  FailedNodeAllocatableEnforcement  40m (x4264 over 2d)  kubelet, aks-nodepool1-34207704-0  Failed to update Node Allocatable Limits "": failed to set supported cgroup subsystems for cgroup : Failed to set config for supported subsystems : failed to write 3585638400 to memory.limit_in_bytes: write /var/lib/docker/overlay2/463cfcf6aa43fd385982d198b7bf929b52b7168494235c87153516bffcfebc38/merged/sys/fs/cgroup/memory/memory.limit_in_bytes: invalid argument

@rfum

rfum commented Apr 7, 2018

I'm having the same problem with my nodes.

What I did :

  • I have a 3-node cluster managed by AKS on a closed network
  • plus a Linux VM with a public IP (call it proxy-vm) to SSH into my cluster nodes
  • I had a deployment with 3 replicas
  • For testing purposes, I added a ResourceQuota object fixing the maximum pod count for the default namespace at 7
  • I scaled my deployment up to 10 replicas
  • It went fine at first; I tried different scenarios with quotas and other deployment updates
  • Nearly 3 actions completed successfully, but the last one was stuck for a long time
  • I ran kubectl get nodes; the last node was in state NotReady
  • I tried to SSH into this node using my proxy-vm
  • It took a long time, but I managed to SSH in
  • I ran df -h and systemctl status kubelet.service
  • The disk is nowhere near full, and kubelet.service seems to be working fine
  • I searched for kubelet logs; they exist neither in /var/log/kubelet nor in journalctl -u kubelet
  • My SSH connection to one of the nodes was open during all my tests; when this issue occurred, the connection became extremely slow
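The quota step above (capping the default namespace at 7 pods) could look roughly like this manifest; the object name is hypothetical, only the pod count comes from the steps described:

```yaml
# Illustrative ResourceQuota limiting the default namespace to 7 pods.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-count-quota   # hypothetical name
  namespace: default
spec:
  hard:
    pods: "7"
```

With this in place, scaling the deployment to 10 replicas should leave the excess replicas unschedulable rather than destabilizing the nodes.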

So here are some logs for my issue:

$ kubectl get nodes
NAME                       STATUS     ROLES     AGE       VERSION
aks-agentpool-23876029-0   NotReady   agent     15h       v1.8.7
aks-agentpool-23876029-1   NotReady   agent     15h       v1.8.7
aks-agentpool-23876029-2   NotReady   agent     15h       v1.8.7

Here is the description of one of my nodes:

$ kubectl describe node aks-agentpool-23876029-0
Name:               aks-agentpool-23876029-0
Roles:              agent
Labels:             agentpool=agentpool
                    beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=Standard_B1s
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=eastus
                    failure-domain.beta.kubernetes.io/zone=0
                    kubernetes.azure.com/cluster=MC_xxx-cluster_xxxx-kube_eastus
                    kubernetes.io/hostname=aks-agentpool-23876029-0
                    kubernetes.io/role=agent
                    storageprofile=managed
                    storagetier=Premium_LRS
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Sat, 07 Apr 2018 01:44:09 +0300
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sat, 07 Apr 2018 01:44:22 +0300   Sat, 07 Apr 2018 01:44:22 +0300   RouteCreated                 RouteController created a route
  OutOfDisk            False   Sat, 07 Apr 2018 17:07:35 +0300   Sat, 07 Apr 2018 01:44:09 +0300   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure       False   Sat, 07 Apr 2018 17:07:35 +0300   Sat, 07 Apr 2018 17:07:35 +0300   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sat, 07 Apr 2018 17:07:37 +0300   Sat, 07 Apr 2018 17:07:37 +0300   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready                False   Sat, 07 Apr 2018 17:07:37 +0300   Sat, 07 Apr 2018 17:07:37 +0300   KubeletNotReady              container runtime is down,PLEG is not healthy: pleg was last seen active 4m12.549011207s ago; threshold is 3m0s
Addresses:
  InternalIP:  10.240.0.4
  Hostname:    aks-agentpool-23876029-0
Capacity:
 alpha.kubernetes.io/nvidia-gpu:  0
 cpu:                             1
 memory:                          921108Ki
 pods:                            110
Allocatable:
 alpha.kubernetes.io/nvidia-gpu:  0
 cpu:                             1
 memory:                          818708Ki
 pods:                            110
System Info:
 Machine ID:                 2c32d3297bfd44c5a577c9f5a562fb1d
 System UUID:                4875273C-34B3-1A42-8A86-EF22E3124ED4
 Boot ID:                    e8b22273-9de7-464e-83ad-6cc6c35da6c2
 Kernel Version:             4.13.0-1011-azure
 OS Image:                   Debian GNU/Linux 8 (jessie)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.13.1
 Kubelet Version:            v1.8.7
 Kube-Proxy Version:         v1.8.7
PodCIDR:                     10.244.0.0/24
ExternalID:                  3c277548-b334-421a-8a86-ef22e3124ed4
Non-terminated Pods:         (18 in total)
  Namespace                  Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                     ------------  ----------  ---------------  -------------
  default                    webserver-5d6cdf9d96-25dvc               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    webserver-5d6cdf9d96-869sm               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    webserver-5d6cdf9d96-brtb9               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    webserver-5d6cdf9d96-bvpvt               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    webserver-5d6cdf9d96-cllt8               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    webserver-5d6cdf9d96-fgk4j               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    webserver-5d6cdf9d96-kc2km               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    webserver-5d6cdf9d96-msjds               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    webserver-5d6cdf9d96-swgf2               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    webserver-5d6cdf9d96-z7p54               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    webserver-6d656d4d54-drg8j               0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                heapster-75f8df9884-jgp9k                138m (13%)    138m (13%)  294Mi (36%)      294Mi (36%)
  kube-system                kube-dns-v20-5bf84586f4-4m4xp            110m (11%)    0 (0%)      120Mi (15%)      220Mi (27%)
  kube-system                kube-dns-v20-5bf84586f4-z2nr4            110m (11%)    0 (0%)      120Mi (15%)      220Mi (27%)
  kube-system                kube-proxy-ntfw9                         100m (10%)    0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-svc-redirect-ghnxw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kubernetes-dashboard-665f768455-pvql2    100m (10%)    100m (10%)  50Mi (6%)        50Mi (6%)
  kube-system                tunnelfront-88b6d8ddc-stw24              0 (0%)        0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  558m (55%)    238m (23%)  584Mi (73%)      784Mi (98%)
Events:
  Type    Reason                   Age                 From                               Message
  ----    ------                   ----                ----                               -------
  Normal  NodeNotReady             38m (x2 over 42m)   kubelet, aks-agentpool-23876029-0  Node aks-agentpool-23876029-0 status is now: NodeNotReady
  Normal  NodeHasSufficientMemory  35m (x23 over 15h)  kubelet, aks-agentpool-23876029-0  Node aks-agentpool-23876029-0 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    35m (x23 over 15h)  kubelet, aks-agentpool-23876029-0  Node aks-agentpool-23876029-0 status is now: NodeHasNoDiskPressure
  Normal  NodeReady                35m (x6 over 15h)   kubelet, aks-agentpool-23876029-0  Node aks-agentpool-23876029-0 status is now: NodeReady

All processes running on one of the nodes:

$ ps -aux
USER        PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root          1  0.0  0.4  38072  4212 ?        Ss   Apr06   0:15 /sbin/init
root          2  0.0  0.0      0     0 ?        S    Apr06   0:00 [kthreadd]
root          4  0.0  0.0      0     0 ?        S<   Apr06   0:00 [kworker/0:0H]
root          6  0.0  0.0      0     0 ?        S<   Apr06   0:00 [mm_percpu_wq]
root          7  0.0  0.0      0     0 ?        S    Apr06   0:08 [ksoftirqd/0]
root          8  0.0  0.0      0     0 ?        S    Apr06   0:26 [rcu_sched]
root          9  0.0  0.0      0     0 ?        S    Apr06   0:00 [rcu_bh]
root         10  0.0  0.0      0     0 ?        S    Apr06   0:00 [migration/0]
root         11  0.0  0.0      0     0 ?        S    Apr06   0:00 [watchdog/0]
root         12  0.0  0.0      0     0 ?        S    Apr06   0:00 [cpuhp/0]
root         13  0.0  0.0      0     0 ?        S    Apr06   0:00 [kdevtmpfs]
root         14  0.0  0.0      0     0 ?        S<   Apr06   0:00 [netns]
root         15  0.0  0.0      0     0 ?        S    Apr06   0:00 [khungtaskd]
root         16  0.0  0.0      0     0 ?        S    Apr06   0:00 [oom_reaper]
root         17  0.0  0.0      0     0 ?        S<   Apr06   0:00 [writeback]
root         18  0.0  0.0      0     0 ?        S    Apr06   0:00 [kcompactd0]
root         19  0.0  0.0      0     0 ?        SN   Apr06   0:00 [ksmd]
root         20  0.0  0.0      0     0 ?        SN   Apr06   0:10 [khugepaged]
root         21  0.0  0.0      0     0 ?        S<   Apr06   0:00 [crypto]
root         22  0.0  0.0      0     0 ?        S<   Apr06   0:00 [kintegrityd]
root         23  0.0  0.0      0     0 ?        S<   Apr06   0:00 [kblockd]
root         24  0.0  0.0      0     0 ?        S<   Apr06   0:00 [ata_sff]
root         25  0.0  0.0      0     0 ?        S<   Apr06   0:00 [md]
root         26  0.0  0.0      0     0 ?        S<   Apr06   0:00 [edac-poller]
root         27  0.0  0.0      0     0 ?        S<   Apr06   0:00 [hv_vmbus_con]
root         28  0.0  0.0      0     0 ?        S<   Apr06   0:00 [devfreq_wq]
root         30  0.0  0.0      0     0 ?        S<   Apr06   0:00 [watchdogd]
root         33  0.0  0.0      0     0 ?        S    Apr06   0:00 [kauditd]
root         34  0.1  0.0      0     0 ?        S    Apr06   1:05 [kswapd0]
root         35  0.0  0.0      0     0 ?        S    Apr06   0:00 [ecryptfs-kthrea]
root         77  0.0  0.0      0     0 ?        S<   Apr06   0:00 [kthrotld]
root         78  0.0  0.0      0     0 ?        S<   Apr06   0:00 [nfit]
root         81  0.0  0.0      0     0 ?        S    Apr06   0:00 [scsi_eh_0]
root         82  0.0  0.0      0     0 ?        S<   Apr06   0:00 [scsi_tmf_0]
root         83  0.0  0.0      0     0 ?        S    Apr06   0:00 [scsi_eh_1]
root         84  0.0  0.0      0     0 ?        S<   Apr06   0:00 [scsi_tmf_1]
root         85  0.0  0.0      0     0 ?        S    Apr06   0:00 [scsi_eh_2]
root         86  0.0  0.0      0     0 ?        S<   Apr06   0:00 [scsi_tmf_2]
root         87  0.0  0.0      0     0 ?        S    Apr06   0:00 [scsi_eh_3]
root         88  0.0  0.0      0     0 ?        S<   Apr06   0:00 [scsi_tmf_3]
root         92  0.0  0.0      0     0 ?        S    Apr06   0:00 [scsi_eh_4]
root         93  0.0  0.0      0     0 ?        S<   Apr06   0:00 [scsi_tmf_4]
root         94  0.0  0.0      0     0 ?        S    Apr06   0:00 [scsi_eh_5]
root         95  0.0  0.0      0     0 ?        S<   Apr06   0:00 [scsi_tmf_5]
root         97  0.0  0.0      0     0 ?        S<   Apr06   0:01 [kworker/0:1H]
root        101  0.0  0.0      0     0 ?        S<   Apr06   0:00 [ipv6_addrconf]
root        291  0.0  0.0      0     0 ?        S<   Apr06   0:00 [raid5wq]
root        342  0.0  0.0      0     0 ?        S    Apr06   0:00 [jbd2/sda1-8]
root        343  0.0  0.0      0     0 ?        S<   Apr06   0:00 [ext4-rsv-conver]
root        407  0.0  0.0      0     0 ?        S<   Apr06   0:00 [iscsi_eh]
root        419  0.0  0.0      0     0 ?        S<   Apr06   0:00 [ib-comp-wq]
root        420  0.0  0.0      0     0 ?        S<   Apr06   0:00 [ib_addr]
root        421  0.0  0.0      0     0 ?        S<   Apr06   0:00 [ib_mcast]
root        422  0.0  0.0      0     0 ?        S<   Apr06   0:00 [ib_nl_sa_wq]
root        423  0.0  0.0      0     0 ?        S<   Apr06   0:00 [ib_cm]
root        424  0.0  0.0      0     0 ?        S<   Apr06   0:00 [iw_cm_wq]
root        425  0.0  0.0      0     0 ?        S<   Apr06   0:00 [rdma_cm]
root        452  0.0  0.0  94772   596 ?        Ss   Apr06   0:00 /sbin/lvmetad -f
root        463  0.0  0.2  43816  2276 ?        Ss   Apr06   0:02 /lib/systemd/systemd-udevd
root        505  0.0  0.0      0     0 ?        S    Apr06   0:00 [hv_balloon]
systemd+    506  0.0  0.0 100324   756 ?        Ssl  Apr06   0:00 /lib/systemd/systemd-timesyncd
root       1111  0.0  0.0  16120   856 ?        Ss   Apr06   0:00 /sbin/dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases -I -df /var/lib/dhcp/dhclient6.eth0.leases eth0
root       1173  0.0  1.5  68604 13932 ?        Ss   Apr06   0:02 /usr/bin/python3 -u /usr/sbin/waagent -daemon
root       1292  0.0  0.0      0     0 ?        S    Apr06   0:00 [jbd2/sdb1-8]
root       1293  0.0  0.0      0     0 ?        S<   Apr06   0:00 [ext4-rsv-conver]
root       1357  0.0  0.0 272944   716 ?        Ssl  Apr06   0:00 /usr/lib/accountsservice/accounts-daemon
root       1363  0.0  0.0   5220   116 ?        Ss   Apr06   0:00 /sbin/iscsid
root       1367  0.0  0.3   5720  3508 ?        S<Ls Apr06   0:04 /sbin/iscsid
daemon     1381  0.0  0.1  26044  1572 ?        Ss   Apr06   0:00 /usr/sbin/atd -f
message+   1396  0.0  0.0  42900   536 ?        Ss   Apr06   0:01 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
unscd      1439  0.0  0.1  14964  1488 ?        Ss   Apr06   0:03 /usr/sbin/nscd -d
root       1459  0.0  0.0   4396   728 ?        Ss   Apr06   0:00 /usr/sbin/acpid
root       1471  0.0  0.1 653064  1304 ?        Ssl  Apr06   0:02 /usr/bin/lxcfs /var/lib/lxcfs/
syslog     1483  0.0  0.0 247968   884 ?        Ssl  Apr06   0:00 /usr/sbin/rsyslogd -n
root       1489  0.0  0.1  17620  1376 ?        Ss   Apr06   0:00 /usr/sbin/cron -f
root       1520  0.0  0.1  20096  1004 ?        Ss   Apr06   0:00 /lib/systemd/systemd-logind
root       1541  0.0  1.3 281460 12492 ?        Ssl  Apr06   0:02 /usr/lib/snapd/snapd
root       1576  0.0  0.2 279260  2180 ?        Ssl  Apr06   0:00 /usr/lib/policykit-1/polkitd --no-debug
root       1671  0.0  0.0   4924   636 ?        Ss   Apr06   0:00 /sbin/mdadm --monitor --pid-file /run/mdadm/monitor.pid --daemonise --scan --syslog
root       1824  0.0  0.0   6208   428 tty1     Ss+  Apr06   0:00 /sbin/agetty --noclear tty1 linux
root       1825  0.0  0.0   4392   644 ttyS0    Ss+  Apr06   0:00 /sbin/agetty --keep-baud 115200 38400 9600 ttyS0 vt220
root       1996  0.0  0.0  59208   920 ?        Ss   Apr06   0:00 /usr/sbin/sshd -D
root       2126  0.5  3.8 235996 35188 ?        Sl   Apr06   5:35 python3 -u bin/WALinuxAgent-2.2.25-py2.7.egg -run-exthandlers
root       3351  0.0  0.1  47624   988 ?        Ss   Apr06   0:00 /sbin/rpcbind -f -w
statd      3865  0.0  0.2  35368  2228 ?        Ss   Apr06   0:00 /sbin/rpc.statd --no-notify
root       7083  0.4  3.8 1079452 35600 ?       Ssl  Apr06   4:25 dockerd -H fd:// --storage-driver=overlay2 --bip=172.17.0.1/16
root       7938  0.0  0.7 143092  6520 ?        Ssl  Apr06   0:00 /usr/bin/docker run --net=host --pid=host --privileged --rm --volume=/:/rootfs:ro,shared --volume=/dev:/dev --volume=/sys:/sys:ro --volume=/var/run:/var/run:rw -
root       8054  0.0  0.0 141152   420 ?        Sl   Apr06   0:00 docker-containerd-shim 95d757aa1e5fc42be6cab960260b1e1345818f5a2ae69e6df0e904eb90c24d53 /var/run/docker/libcontainerd/95d757aa1e5fc42be6cab960260b1e1345818f5a2ae
root       8101  2.0  9.9 1145180 91788 ?       Ssl  Apr06  19:22 /hyperkube kubelet --containerized --enable-server --node-labels=kubernetes.io/role=agent,agentpool=agentpool,storageprofile=managed,storagetier=Premium_LRS,kube
root       8431  0.0  0.6  68424  5752 ?        Ss   Apr06   0:06 /lib/systemd/systemd-journald
root      10835  0.0  0.1   5008  1592 ?        Ss   Apr06   0:04 /usr/lib/linux-tools/4.13.0-1011-azure/hv_kvp_daemon -n
root      10843  0.0  0.0   4356   676 ?        Ss   Apr06   0:00 /usr/lib/linux-tools/4.13.0-1011-azure/hv_vss_daemon -n
root      11227  0.0  0.2 141152  2480 ?        Sl   Apr06   0:00 docker-containerd-shim f329370980f58f0d2a4edc10bea3910b1a5e6a4750213128fb970044d42f3c47 /var/run/docker/libcontainerd/f329370980f58f0d2a4edc10bea3910b1a5e6a47502
root      11244  0.0  0.0   1024     4 ?        Ss   Apr06   0:00 /pause
root      11271  0.0  0.0 141152   444 ?        Sl   Apr06   0:00 docker-containerd-shim 3f6fc407ff875b33c2ecd79eb3d8a814446de0afd34c4846ffd359985ad5b102 /var/run/docker/libcontainerd/3f6fc407ff875b33c2ecd79eb3d8a814446de0afd34
root      11288  0.0  0.0   1024     4 ?        Ss   Apr06   0:00 /pause
root      11318  0.0  0.0 141152   452 ?        Sl   Apr06   0:00 docker-containerd-shim 4f351d33899c15746de94d649d67a512576d2da60599e114721bc08a49c5210a /var/run/docker/libcontainerd/4f351d33899c15746de94d649d67a512576d2da6059
root      11334  0.1  2.2 512996 20492 ?        Ssl  Apr06   1:30 /hyperkube proxy --kubeconfig=/var/lib/kubelet/kubeconfig --cluster-cidr=10.244.0.0/16 --feature-gates=ExperimentalCriticalPodAnnotation=true
root      11659  0.0  0.0 141152   456 ?        Sl   Apr06   0:06 docker-containerd-shim 3699f2285fc7477941e31e450667fa090532b6c9c9e0421cee91cf0675505e39 /var/run/docker/libcontainerd/3699f2285fc7477941e31e450667fa090532b6c9c9e
root      11676  0.0  0.1   6452  1124 ?        Ss   Apr06   0:33 /bin/bash /lib/redirector/run-kube-svc-redirect.sh
root      61847  0.0  0.2  92796  2704 ?        Ss   12:15   0:00 sshd: azureuser [priv]
azureus+  61993  0.0  0.2  36932  2244 ?        Ss   12:16   0:00 /lib/systemd/systemd --user
azureus+  62007  0.0  0.2  61296  2012 ?        S    12:16   0:00 (sd-pam)
azureus+  62227  0.0  0.2  92796  2040 ?        S    12:16   0:00 sshd: azureuser@pts/0
azureus+  62229  0.0  0.3  12936  3516 pts/0    Ss   12:16   0:00 -bash
root     108932  0.0  0.0 141152   680 ?        Sl   13:12   0:00 docker-containerd-shim c9315b25f9be466310a07d32a8de6a1667baabefa37df16f3499c57c1fd2f61a /var/run/docker/libcontainerd/c9315b25f9be466310a07d32a8de6a1667baabefa37
root     108955  0.0  0.0 141152   736 ?        Sl   13:12   0:00 docker-containerd-shim c42f346ca24d494f9a895278abaa337f7fc68f5c1f54425260c3fc41665b6b64 /var/run/docker/libcontainerd/c42f346ca24d494f9a895278abaa337f7fc68f5c1f5
root     108980  0.0  0.0   1024     4 ?        Ss   13:12   0:00 /pause
root     108994  0.0  0.0   1024     4 ?        Ss   13:12   0:00 /pause
root     109244  0.0  0.3 141152  3016 ?        Sl   13:12   0:00 docker-containerd-shim f1f22f681c12b62b9d50191c1d86cb6307a167abe71ac81fa4d30dcd0583dc44 /var/run/docker/libcontainerd/f1f22f681c12b62b9d50191c1d86cb6307a167abe71
root     109260  0.0  0.0  19728   276 ?        Ss   13:12   0:00 bash /start.sh
root     109288  0.0  0.0 141152   444 ?        Sl   13:12   0:00 docker-containerd-shim 05b7b0e15c00177dbacf2660dff76f645370255a5baa854497c34a904836b065 /var/run/docker/libcontainerd/05b7b0e15c00177dbacf2660dff76f645370255a5ba
root     109305  0.0  0.0  19728   272 ?        Ss   13:12   0:00 bash /start.sh
root     109362  0.0  1.4  49960 12960 ?        S    13:13   0:00 /usr/bin/python /usr/bin/supervisord
root     109364  0.0  1.4  49960 12964 ?        S    13:13   0:00 /usr/bin/python /usr/bin/supervisord
root     109398  0.0  0.0  32568   680 ?        S    13:13   0:00 nginx: master process /usr/sbin/nginx
root     109399  0.0  0.0  32568   680 ?        S    13:13   0:00 nginx: master process /usr/sbin/nginx
root     109400  0.0  2.0 173596 18952 ?        S    13:13   0:00 /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term
root     109401  0.0  2.0 173596 18952 ?        S    13:13   0:00 /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term
systemd+ 109438  0.0  0.1  33048  1256 ?        S    13:13   0:00 nginx: worker process
systemd+ 109439  0.0  0.1  33048  1256 ?        S    13:13   0:00 nginx: worker process
root     109624  0.0  1.8 173596 17028 ?        S    13:13   0:00 /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term
root     109625  0.0  1.8 173596 17028 ?        S    13:13   0:00 /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term
root     109626  0.0  1.8 173596 17028 ?        S    13:13   0:00 /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term
root     109627  0.0  1.8 173740 17160 ?        S    13:13   0:00 /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term
root     110072  0.0  0.2 141152  2480 ?        Sl   13:14   0:00 docker-containerd-shim cf2aa26cd563f757cac3f934a5aeb9752ba9a8d6cd1f1d29f38ceb9ea4ce3119 /var/run/docker/libcontainerd/cf2aa26cd563f757cac3f934a5aeb9752ba9a8d6cd1
root     110088  0.0  0.0   1024     4 ?        Ss   13:14   0:00 /pause
root     110263  0.0  0.3 141152  3196 ?        Sl   13:14   0:00 docker-containerd-shim 0c05af05704bd2186c5cbac50f726977bff890bd53888e3f58bf32c7b87b952d /var/run/docker/libcontainerd/0c05af05704bd2186c5cbac50f726977bff890bd538
root     110280  0.0  0.0  19728   280 ?        Ss   13:14   0:00 bash /start.sh
root     110417  0.0  1.4  49960 12960 ?        S    13:14   0:00 /usr/bin/python /usr/bin/supervisord
root     110420  0.0  0.3 141152  3020 ?        Sl   13:14   0:00 docker-containerd-shim 0b45a679d595675de3240e8b48eda96bde67fd2f7fbf83553b5b7698ef30f56d /var/run/docker/libcontainerd/0b45a679d595675de3240e8b48eda96bde67fd2f7fb
root     110441  0.0  0.0   1024     4 ?        Ss   13:14   0:00 /pause
root     110553  0.0  0.1 141152  1136 ?        Sl   13:14   0:00 docker-containerd-shim 7e7e35b498b35e2484e77ee9faa7586e24989506a14b066736dff5c91b953558 /var/run/docker/libcontainerd/7e7e35b498b35e2484e77ee9faa7586e24989506a14
root     110601  0.0  0.0  19728   268 ?        Ss   13:14   0:00 bash /start.sh
root     110640  0.0  1.4  49960 13052 ?        S    13:15   0:00 /usr/bin/python /usr/bin/supervisord
root     110642  0.0  0.0  32568   680 ?        S    13:15   0:00 nginx: master process /usr/sbin/nginx
root     110643  0.0  2.0 173596 18952 ?        S    13:15   0:00 /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term
systemd+ 110683  0.0  0.1  33048  1256 ?        S    13:15   0:00 nginx: worker process
root     110810  0.0  0.0  32568   684 ?        S    13:15   0:00 nginx: master process /usr/sbin/nginx
root     110811  0.0  2.0 173596 18952 ?        S    13:15   0:00 /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term
systemd+ 110814  0.0  0.1  33048  1260 ?        S    13:15   0:00 nginx: worker process
root     110825  0.0  0.2 206688  2492 ?        Sl   13:15   0:00 docker-containerd-shim 4a5b55f2f08a5d44b23bab13044e8238a667834b5610ab51ac2566e2190f3401 /var/run/docker/libcontainerd/4a5b55f2f08a5d44b23bab13044e8238a667834b561
root     110844  0.0  0.0   1024     4 ?        Ss   13:15   0:00 /pause
root     111131  0.0  1.8 173756 17100 ?        S    13:15   0:00 /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term
root     111132  0.0  1.8 173596 17028 ?        S    13:15   0:00 /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term
root     111133  0.0  1.8 173740 17160 ?        S    13:15   0:00 /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term
root     111134  0.0  1.8 173740 17160 ?        S    13:15   0:00 /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term
root     111220  0.0  0.3 141152  2972 ?        Sl   13:15   0:00 docker-containerd-shim 71abb441fa1e56280ea2fd08099aca053d0538b0f0748a0f41ae41f8059b203c /var/run/docker/libcontainerd/71abb441fa1e56280ea2fd08099aca053d0538b0f07
root     111237  0.0  0.0  19728   280 ?        Ss   13:15   0:00 bash /start.sh
root     111297  0.0  1.4  49960 13376 ?        S    13:15   0:00 /usr/bin/python /usr/bin/supervisord
root     111392  0.0  0.0  32568   680 ?        S    13:15   0:00 nginx: master process /usr/sbin/nginx
root     111393  0.0  2.0 173596 18952 ?        S    13:15   0:00 /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term
systemd+ 111395  0.0  0.1  33048  1256 ?        S    13:15   0:00 nginx: worker process
root     111446  0.0  1.8 173596 17028 ?        S    13:16   0:00 /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term
root     111447  0.0  1.8 173596 17028 ?        S    13:16   0:00 /usr/local/bin/uwsgi --ini /etc/uwsgi/uwsgi.ini --die-on-term
root     112930  0.0  0.0 206688   444 ?        Sl   13:19   0:00 docker-containerd-shim 655623dc634d97289cf8f989c7c3532ded017c58406d995e6f956ac307e49411 /var/run/docker/libcontainerd/655623dc634d97289cf8f989c7c3532ded017c58406
root     112931  0.0  0.0 206688   436 ?        Sl   13:19   0:00 docker-containerd-shim 400103433c1b19435e3eb277a81d0432d9bde6d5e3d99e12f129163fe604e13b /var/run/docker/libcontainerd/400103433c1b19435e3eb277a81d0432d9bde6d5e3d
root     112985  0.0  0.1 108852  1188 ?        Ssl  13:19   0:00 /proc/self/exe init
root     112987  0.0  0.1 108852  1196 ?        Ssl  13:19   0:00 /proc/self/exe init
root     113568  0.0  0.0      0     0 ?        S    13:21   0:00 [kworker/u256:0]
root     114033  0.0  0.0      0     0 ?        S    13:24   0:00 [kworker/0:3]
root     116508  0.0  0.0      0     0 ?        S    13:40   0:00 [kworker/0:1]
root     116708  0.0  0.0      0     0 ?        S    13:49   0:00 [kworker/u256:1]
root     118994  0.0  0.5 133064  5112 ?        Ssl  14:02   0:00 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainer
root     120014  0.0  0.0      0     0 ?        S    14:07   0:00 [kworker/u256:2]
azureus+ 120850  0.3  0.1  27636  1536 pts/0    R+   14:11   0:00 ps -aux
root     120861  0.0  0.0   1512     4 ?        S    14:11   0:00 sleep 10

Disk usage on one of the nodes:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            440M     0  440M   0% /dev
tmpfs            90M  1.4M   89M   2% /run
/dev/sda1        49G  4.1G   45G   9% /
tmpfs            64M     0   64M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           450M     0  450M   0% /sys/fs/cgroup
/dev/sdb1       3.9G  8.0M  3.7G   1% /mnt
shm              64M     0   64M   0% /dev/shm
tmpfs            90M     0   90M   0% /run/user/1000

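To quickly check whether the kernel logged anything that would explain the NotReady state (OOM kills, hung tasks, disk I/O errors), a captured dmesg dump can be grepped for the usual suspects. This is just a sketch — the pattern list is my own guess at relevant failure signatures, and `dmesg.txt` is a hypothetical saved copy of the node's kernel log:

```shell
# Scan a saved kernel log for events commonly associated with nodes going NotReady
LOG=dmesg.txt   # hypothetical path to a captured `dmesg` dump from the node
grep -i -E "out of memory|oom-killer|hung_task|blocked for more than|I/O error" "$LOG" \
  || echo "no obvious kernel-level errors in $LOG"
```

In my case the grep came back empty, so I'm pasting the full log below.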
Kernel logs from one of the nodes:

$ dmesg
[    0.000000] random: get_random_bytes called from start_kernel+0x42/0x50d with crng_init=0
[    0.000000] Linux version 4.13.0-1011-azure (buildd@lcy01-amd64-024) (gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9)) #14-Ubuntu SMP Thu Feb 15 16:15:39 UTC 2018 (Ubuntu 4.13.0-1011.14-azure 4.13.13)
[    0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.13.0-1011-azure root=UUID=75604357-9029-4b6f-8c13-6201b689c664 ro console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   AMD AuthenticAMD
[    0.000000]   Centaur CentaurHauls
[    0.000000] x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
[    0.000000] x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
[    0.000000] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
[    0.000000] x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[    0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[    0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
[    0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000001ffeffff] usable
[    0.000000] BIOS-e820: [mem 0x000000001fff0000-0x000000001fffefff] ACPI data
[    0.000000] BIOS-e820: [mem 0x000000001ffff000-0x000000001fffffff] ACPI NVS
[    0.000000] BIOS-e820: [mem 0x0000000100000000-0x000000011fffffff] usable
[    0.000000] bootconsole [earlyser0] enabled
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] random: fast init done
[    0.000000] SMBIOS 2.3 present.
[    0.000000] DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS 090007  06/02/2017
[    0.000000] Hypervisor detected: Microsoft Hyper-V
[    0.000000] Hyper-V: features 0x2e7f, hints 0xc2c
[    0.000000] Hyper-V Host Build:14393-10.0-0-0.230
[    0.000000] Hyper-V: LAPIC Timer Frequency: 0xc3500
[    0.000000] tsc: Marking TSC unstable due to running on Hyper-V
[    0.000000] Hyper-V: Using ext hypercall for remote TLB flush
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] e820: last_pfn = 0x120000 max_arch_pfn = 0x400000000
[    0.000000] MTRR default type: uncachable
[    0.000000] MTRR fixed ranges enabled:
[    0.000000]   00000-9FFFF write-back
[    0.000000]   A0000-DFFFF uncachable
[    0.000000]   E0000-FFFFF write-back
[    0.000000] MTRR variable ranges enabled:
[    0.000000]   0 base 00000000000 mask FFFE0000000 write-back
[    0.000000]   1 base 00100000000 mask FF000000000 write-back
[    0.000000]   2 disabled
[    0.000000]   3 disabled
[    0.000000]   4 disabled
[    0.000000]   5 disabled
[    0.000000]   6 disabled
[    0.000000]   7 disabled
[    0.000000] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WC  UC- WT
[    0.000000] e820: update [mem 0x20000000-0xffffffff] usable ==> reserved
[    0.000000] e820: last_pfn = 0x1fff0 max_arch_pfn = 0x400000000
[    0.000000] found SMP MP-table at [mem 0x000ff780-0x000ff78f] mapped at [ffff9024800ff780]
[    0.000000] Scanning 1 areas for low memory corruption
[    0.000000] Base memory trampoline at [ffff902480099000] 99000 size 24576
[    0.000000] Using GB pages for direct mapping
[    0.000000] BRK [0x094d1000, 0x094d1fff] PGTABLE
[    0.000000] BRK [0x094d2000, 0x094d2fff] PGTABLE
[    0.000000] BRK [0x094d3000, 0x094d3fff] PGTABLE
[    0.000000] BRK [0x094d4000, 0x094d4fff] PGTABLE
[    0.000000] BRK [0x094d5000, 0x094d5fff] PGTABLE
[    0.000000] RAMDISK: [mem 0x1e6ee000-0x1f759fff]
[    0.000000] ACPI: Early table checksum verification disabled
[    0.000000] ACPI: RSDP 0x00000000000F5BF0 000014 (v00 ACPIAM)
[    0.000000] ACPI: RSDT 0x000000001FFF0000 000040 (v01 VRTUAL MICROSFT 06001702 MSFT 00000097)
[    0.000000] ACPI: FACP 0x000000001FFF0200 000081 (v02 VRTUAL MICROSFT 06001702 MSFT 00000097)
[    0.000000] ACPI: DSDT 0x000000001FFF1D24 003CBE (v01 MSFTVM MSFTVM02 00000002 INTL 02002026)
[    0.000000] ACPI: FACS 0x000000001FFFF000 000040
[    0.000000] ACPI: WAET 0x000000001FFF1A80 000028 (v01 VRTUAL MICROSFT 06001702 MSFT 00000097)
[    0.000000] ACPI: SLIC 0x000000001FFF1AC0 000176 (v01 VRTUAL MICROSFT 06001702 MSFT 00000097)
[    0.000000] ACPI: OEM0 0x000000001FFF1CC0 000064 (v01 VRTUAL MICROSFT 06001702 MSFT 00000097)
[    0.000000] ACPI: SRAT 0x000000001FFF0800 000130 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
[    0.000000] ACPI: APIC 0x000000001FFF0300 000452 (v01 VRTUAL MICROSFT 06001702 MSFT 00000097)
[    0.000000] ACPI: OEMB 0x000000001FFFF040 000064 (v01 VRTUAL MICROSFT 06001702 MSFT 00000097)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] SRAT: PXM 0 -> APIC 0x00 -> Node 0
[    0.000000] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x1fffffff] hotplug
[    0.000000] ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x11fffffff] hotplug
[    0.000000] ACPI: SRAT: Node 0 PXM 0 [mem 0x120200000-0xfdfffffff] hotplug
[    0.000000] ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
[    0.000000] ACPI: SRAT: Node 0 PXM 0 [mem 0x10000200000-0x1ffffffffff] hotplug
[    0.000000] ACPI: SRAT: Node 0 PXM 0 [mem 0x20000200000-0x3ffffffffff] hotplug
[    0.000000] NUMA: Node 0 [mem 0x00000000-0x1fffffff] + [mem 0x100000000-0x11fffffff] -> [mem 0x00000000-0x11fffffff]
[    0.000000] NODE_DATA(0) allocated [mem 0x11ffd4000-0x11fffefff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
[    0.000000]   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
[    0.000000]   Normal   [mem 0x0000000100000000-0x000000011fffffff]
[    0.000000]   Device   empty
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x0000000000001000-0x000000000009efff]
[    0.000000]   node   0: [mem 0x0000000000100000-0x000000001ffeffff]
[    0.000000]   node   0: [mem 0x0000000100000000-0x000000011fffffff]
[    0.000000] Initmem setup node 0 [mem 0x0000000000001000-0x000000011fffffff]
[    0.000000] On node 0 totalpages: 262030
[    0.000000]   DMA zone: 64 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3998 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 1984 pages used for memmap
[    0.000000]   DMA32 zone: 126960 pages, LIFO batch:31
[    0.000000]   Normal zone: 2048 pages used for memmap
[    0.000000]   Normal zone: 131072 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x408
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
[    0.000000] IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] smpboot: Allowing 128 CPUs, 127 hotplug CPUs
[    0.000000] PM: Registered nosave memory: [mem 0x00000000-0x00000fff]
[    0.000000] PM: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
[    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000dffff]
[    0.000000] PM: Registered nosave memory: [mem 0x000e0000-0x000fffff]
[    0.000000] PM: Registered nosave memory: [mem 0x1fff0000-0x1fffefff]
[    0.000000] PM: Registered nosave memory: [mem 0x1ffff000-0x1fffffff]
[    0.000000] PM: Registered nosave memory: [mem 0x20000000-0xffffffff]
[    0.000000] e820: [mem 0x20000000-0xffffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel on bare hardware
[    0.000000] clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645519600211568 ns
[    0.000000] setup_percpu: NR_CPUS:8192 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
[    0.000000] percpu: Embedded 44 pages/cpu @ffff90259c600000 s141784 r8192 d30248 u262144
[    0.000000] pcpu-alloc: s141784 r8192 d30248 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 000 001 002 003 004 005 006 007
[    0.000000] pcpu-alloc: [0] 008 009 010 011 012 013 014 015
[    0.000000] pcpu-alloc: [0] 016 017 018 019 020 021 022 023
[    0.000000] pcpu-alloc: [0] 024 025 026 027 028 029 030 031
[    0.000000] pcpu-alloc: [0] 032 033 034 035 036 037 038 039
[    0.000000] pcpu-alloc: [0] 040 041 042 043 044 045 046 047
[    0.000000] pcpu-alloc: [0] 048 049 050 051 052 053 054 055
[    0.000000] pcpu-alloc: [0] 056 057 058 059 060 061 062 063
[    0.000000] pcpu-alloc: [0] 064 065 066 067 068 069 070 071
[    0.000000] pcpu-alloc: [0] 072 073 074 075 076 077 078 079
[    0.000000] pcpu-alloc: [0] 080 081 082 083 084 085 086 087
[    0.000000] pcpu-alloc: [0] 088 089 090 091 092 093 094 095
[    0.000000] pcpu-alloc: [0] 096 097 098 099 100 101 102 103
[    0.000000] pcpu-alloc: [0] 104 105 106 107 108 109 110 111
[    0.000000] pcpu-alloc: [0] 112 113 114 115 116 117 118 119
[    0.000000] pcpu-alloc: [0] 120 121 122 123 124 125 126 127
[    0.000000] Built 1 zonelists in Node order, mobility grouping on.  Total pages: 257913
[    0.000000] Policy zone: Normal
[    0.000000] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-4.13.0-1011-azure root=UUID=75604357-9029-4b6f-8c13-6201b689c664 ro console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300
[    0.000000] log_buf_len individual max cpu contribution: 4096 bytes
[    0.000000] log_buf_len total cpu_extra contributions: 520192 bytes
[    0.000000] log_buf_len min size: 262144 bytes
[    0.000000] log_buf_len: 1048576 bytes
[    0.000000] early log buf free: 252092(96%)
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    0.000000] Calgary: detecting Calgary via BIOS EBDA area
[    0.000000] Calgary: Unable to locate Rio Grande table in EBDA - bailing!
[    0.000000] Memory: 899536K/1048120K available (12300K kernel code, 2317K rwdata, 3576K rodata, 2204K init, 2340K bss, 148584K reserved, 0K cma-reserved)
[    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
[    0.000000] Kernel/User page tables isolation: enabled
[    0.000000] ftrace: allocating 34650 entries in 136 pages
[    0.000000] Hierarchical RCU implementation.
[    0.000000] 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=128.
[    0.000000] 	Tasks RCU enabled.
[    0.000000] RCU: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
[    0.000000] NR_IRQS: 524544, nr_irqs: 1448, preallocated irqs: 16
[    0.000000] Console: colour VGA+ 80x25
[    0.000000] console [tty1] enabled
[    0.000000] console [ttyS0] enabled
[    0.000000] bootconsole [earlyser0] disabled
[    0.000000] tsc: Detected 2394.452 MHz processor
[    0.004000] Calibrating delay loop (skipped), value calculated using timer frequency.. 4788.90 BogoMIPS (lpj=9577808)
[    0.008026] pid_max: default: 131072 minimum: 1024
[    0.012073] ACPI: Core revision 20170531
[    0.017689] ACPI: 1 ACPI AML tables successfully acquired and loaded
[    0.020162] Security Framework initialized
[    0.024045] Yama: becoming mindful.
[    0.028188] AppArmor: AppArmor initialized
[    0.032299] Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes)
[    0.040103] Inode-cache hash table entries: 65536 (order: 7, 524288 bytes)
[    0.044078] Mount-cache hash table entries: 2048 (order: 2, 16384 bytes)
[    0.048033] Mountpoint-cache hash table entries: 2048 (order: 2, 16384 bytes)
[    0.052792] CPU: Physical Processor ID: 0
[    0.056124] FEATURE SPEC_CTRL Not Present
[    0.060026] mce: CPU supports 1 MCE banks
[    0.068110] Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
[    0.072031] Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
[    0.076212] Spectre V2 mitigation: Mitigation: Full generic retpoline
[    0.080027] Spectre V2 mitigation: Speculation control IBPB not-supported IBRS not-supported
[    0.089320] smpboot: Max logical packages: 128
[    0.096083] Switched APIC routing to physical flat.
[    0.100067] clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
[    0.130452] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[    0.132127] smpboot: CPU0: Intel(R) Xeon(R) CPU E5-2673 v3 @ 2.40GHz (family: 0x6, model: 0x3f, stepping: 0x2)
[    0.136176] Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
[    0.140076] Hierarchical SRCU implementation.
[    0.148031] NMI watchdog: disabled (cpu0): hardware events not enabled
[    0.152008] NMI watchdog: Shutting down hard lockup detector on all cpus
[    0.156442] smp: Bringing up secondary CPUs ...
[    0.160012] smp: Brought up 1 node, 1 CPU
[    0.164007] smpboot: Total of 1 processors activated (4788.90 BogoMIPS)
[    0.176110] devtmpfs: initialized
[    0.180056] x86/mm: Memory block size: 128MB
[    0.184179] evm: security.selinux
[    0.188003] evm: security.SMACK64
[    0.192003] evm: security.SMACK64EXEC
[    0.196003] evm: security.SMACK64TRANSMUTE
[    0.200007] evm: security.SMACK64MMAP
[    0.204002] evm: security.ima
[    0.208006] evm: security.capability
[    0.212178] PM: Registering ACPI NVS region [mem 0x1ffff000-0x1fffffff] (4096 bytes)
[    0.216120] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 7645041785100000 ns
[    0.220099] futex hash table entries: 32768 (order: 9, 2097152 bytes)
[    0.224636] pinctrl core: initialized pinctrl subsystem
[    0.251875] RTC time: 22:25:37, date: 04/06/18
[    0.252253] NET: Registered protocol family 16
[    0.256176] cpuidle: using governor ladder
[    0.260010] cpuidle: using governor menu
[    0.264004] PCCT header not found.
[    0.268070] ACPI: bus type PCI registered
[    0.272008] acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
[    0.276476] PCI: Using configuration type 1 for base access
[    0.281061] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
[    0.284007] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[    0.288564] ACPI: Added _OSI(Module Device)
[    0.292006] ACPI: Added _OSI(Processor Device)
[    0.296007] ACPI: Added _OSI(3.0 _SCP Extensions)
[    0.300004] ACPI: Added _OSI(Processor Aggregator Device)
[    0.310349] ACPI: Interpreter enabled
[    0.312023] ACPI: (supports S0 S5)
[    0.316007] ACPI: Using IOAPIC for interrupt routing
[    0.320032] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[    0.324192] ACPI: Enabled 1 GPEs in block 00 to 0F
[    0.354576] ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
[    0.356010] acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI]
[    0.360021] acpi PNP0A03:00: _OSC failed (AE_NOT_FOUND); disabling ASPM
[    0.364032] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[    0.368216] PCI host bridge to bus 0000:00
[    0.372007] pci_bus 0000:00: root bus resource [mem 0xfe0000000-0xfffffffff window]
[    0.376013] pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
[    0.380026] pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
[    0.384014] pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
[    0.388015] pci_bus 0000:00: root bus resource [mem 0x20000000-0xfffbffff window]
[    0.392013] pci_bus 0000:00: root bus resource [bus 00-ff]
[    0.396240] pci 0000:00:00.0: [8086:7192] type 00 class 0x060000
[    0.398621] pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
[    0.403348] pci 0000:00:07.1: [8086:7111] type 00 class 0x010180
[    0.405637] pci 0000:00:07.1: reg 0x20: [io  0xffa0-0xffaf]
[    0.406572] pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
[    0.408011] pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
[    0.412007] pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
[    0.416012] pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
[    0.424004] pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
[    0.424054] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
               * this clock source is slow. Consider trying other clock sources
[    0.434701] pci 0000:00:07.3: quirk: [io  0x0400-0x043f] claimed by PIIX4 ACPI
[    0.436970] pci 0000:00:08.0: [1414:5353] type 00 class 0x030000
[    0.437555] pci 0000:00:08.0: reg 0x10: [mem 0xf8000000-0xfbffffff]
[    0.452935] ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 7 9 10 *11 12 14 15)
[    0.456373] ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 7 9 10 11 12 14 15) *0, disabled.
[    0.460318] ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 5 7 9 10 11 12 14 15) *0, disabled.
[    0.464312] ACPI: PCI Interrupt Link [LNKD] (IRQs 3 4 5 7 9 10 11 12 14 15) *0, disabled.
[    0.468513] SCSI subsystem initialized
[    0.472130] libata version 3.00 loaded.
[    0.476069] pci 0000:00:08.0: vgaarb: setting as boot VGA device
[    0.480000] pci 0000:00:08.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
[    0.480014] pci 0000:00:08.0: vgaarb: bridge control possible
[    0.484010] vgaarb: loaded
[    0.488240] EDAC MC: Ver: 3.0.0
[    0.492805] hv_vmbus: Vmbus version:4.0
[    0.496152] PCI: Using ACPI for IRQ routing
[    0.500005] PCI: pci_cache_line_size set to 64 bytes
[    0.500890] e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
[    0.500892] e820: reserve RAM buffer [mem 0x1fff0000-0x1fffffff]
[    0.501646] NetLabel: Initializing
[    0.504010] NetLabel:  domain hash size = 128
[    0.508010] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
[    0.512022] NetLabel:  unlabeled traffic allowed by default
[    0.516179] clocksource: Switched to clocksource hyperv_clocksource_tsc_page
[    0.567016] VFS: Disk quotas dquot_6.6.0
[    0.587005] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.620907] AppArmor: AppArmor Filesystem Enabled
[    0.649602] pnp: PnP ACPI init
[    0.667670] pnp 00:00: Plug and Play ACPI device, IDs PNP0b00 (active)
[    0.667725] pnp 00:01: Plug and Play ACPI device, IDs PNP0303 PNP030b (active)
[    0.667763] pnp 00:02: Plug and Play ACPI device, IDs PNP0f03 PNP0f13 (active)
[    0.668558] pnp 00:03: [dma 0 disabled]
[    0.668582] pnp 00:03: Plug and Play ACPI device, IDs PNP0501 (active)
[    0.669282] pnp 00:04: [dma 0 disabled]
[    0.669309] pnp 00:04: Plug and Play ACPI device, IDs PNP0501 (active)
[    0.670192] pnp 00:05: [dma 2]
[    0.670225] pnp 00:05: Plug and Play ACPI device, IDs PNP0700 (active)
[    0.670261] system 00:06: [io  0x01e0-0x01ef] has been reserved
[    0.699125] system 00:06: [io  0x0160-0x016f] has been reserved
[    0.728488] system 00:06: [io  0x0278-0x027f] has been reserved
[    0.757248] system 00:06: [io  0x0378-0x037f] has been reserved
[    0.786392] system 00:06: [io  0x0678-0x067f] has been reserved
[    0.815132] system 00:06: [io  0x0778-0x077f] has been reserved
[    0.849956] system 00:06: [io  0x04d0-0x04d1] has been reserved
[    0.880151] system 00:06: Plug and Play ACPI device, IDs PNP0c02 (active)
[    0.880353] system 00:07: [io  0x0400-0x043f] has been reserved
[    0.910521] system 00:07: [io  0x0370-0x0371] has been reserved
[    0.941264] system 00:07: [io  0x0440-0x044f] has been reserved
[    0.971103] system 00:07: [mem 0xfec00000-0xfec00fff] could not be reserved
[    1.004229] system 00:07: [mem 0xfee00000-0xfee00fff] has been reserved
[    1.037259] system 00:07: Plug and Play ACPI device, IDs PNP0c02 (active)
[    1.037465] system 00:08: [mem 0x00000000-0x0009ffff] could not be reserved
[    1.073816] system 00:08: [mem 0x000c0000-0x000dffff] could not be reserved
[    1.107115] system 00:08: [mem 0x000e0000-0x000fffff] could not be reserved
[    1.141123] system 00:08: [mem 0x00100000-0x1fffffff] could not be reserved
[    1.174538] system 00:08: [mem 0xfffc0000-0xffffffff] has been reserved
[    1.482604] system 00:08: Plug and Play ACPI device, IDs PNP0c01 (active)
[    1.482902] pnp: PnP ACPI: found 9 devices
[    1.509840] clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
[    1.553102] pci_bus 0000:00: resource 4 [mem 0xfe0000000-0xfffffffff window]
[    1.553103] pci_bus 0000:00: resource 5 [io  0x0000-0x0cf7 window]
[    1.553105] pci_bus 0000:00: resource 6 [io  0x0d00-0xffff window]
[    1.553106] pci_bus 0000:00: resource 7 [mem 0x000a0000-0x000bffff window]
[    1.553108] pci_bus 0000:00: resource 8 [mem 0x20000000-0xfffbffff window]
[    1.553225] NET: Registered protocol family 2
[    1.576438] TCP established hash table entries: 8192 (order: 4, 65536 bytes)
[    1.612218] TCP bind hash table entries: 8192 (order: 5, 131072 bytes)
[    1.645719] TCP: Hash tables configured (established 8192 bind 8192)
[    1.678131] UDP hash table entries: 512 (order: 2, 16384 bytes)
[    1.707682] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes)
[    1.738875] NET: Registered protocol family 1
[    1.758612] pci 0000:00:00.0: Limiting direct PCI/PCI transfers
[    1.786472] pci 0000:00:08.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
[    1.820939] PCI: CLS 0 bytes, default 64
[    1.820982] Unpacking initramfs...
[    2.074171] Freeing initrd memory: 16816K
[    2.095995] PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
[    2.130803] software IO TLB [mem 0x1a6ee000-0x1e6ee000] (64MB) mapped at [ffff90249a6ee000-ffff90249e6edfff]
[    2.182185] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2283be6f12a, max_idle_ns: 440795258165 ns
[    2.238779] Scanning for low memory corruption every 60 seconds
[    2.277573] audit: initializing netlink subsys (disabled)
[    2.307090] Initialise system trusted keyrings
[    2.332015] audit: type=2000 audit(1523053537.306:1): state=initialized audit_enabled=0 res=1
[    2.378490] Key type blacklist registered
[    2.400942] workingset: timestamp_bits=36 max_order=18 bucket_order=0
[    2.437274] zbud: loaded
[    2.451807] squashfs: version 4.0 (2009/01/31) Phillip Lougher
[    2.484533] fuse init (API version 7.26)
[    2.508588] Key type asymmetric registered
[    2.530882] Asymmetric key parser 'x509' registered
[    2.556888] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
[    2.593765] io scheduler noop registered (default)
[    2.621136] intel_idle: does not run on family 6 model 63
[    2.621200] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
[    2.665519] ACPI: Power Button [PWRF]
[    2.693194] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[    2.818889] 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
[    2.931224] 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
[    2.972249] Linux agpgart interface v0.103
[    2.997982] loop: module loaded
[    3.016348] hv_vmbus: registering driver hv_storvsc
[    3.043858] scsi host0: storvsc_host_t
[    3.068425] scsi 0:0:0:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
[    3.114363] scsi host1: storvsc_host_t
[    3.139365] scsi 1:0:1:0: Direct-Access     Msft     Virtual Disk     1.0  PQ: 0 ANSI: 5
[    3.192674] scsi host2: storvsc_host_t
[    3.219776] scsi host3: storvsc_host_t
[    3.247919] sd 0:0:0:0: Attached scsi generic sg0 type 0
[    3.278998] sd 1:0:1:0: Attached scsi generic sg1 type 0
[    3.308643] ata_piix 0000:00:07.1: version 2.13
[    3.309831] ata_piix 0000:00:07.1: Hyper-V Virtual Machine detected, ATA device ignore set
[    3.352132] sd 1:0:1:0: [sdb] 8388608 512-byte logical blocks: (4.29 GB/4.00 GiB)
[    3.393205] sd 1:0:1:0: [sdb] 4096-byte physical blocks
[    3.421828] sd 0:0:0:0: [sda] 104857600 512-byte logical blocks: (53.7 GB/50.0 GiB)
[    3.462093] sd 0:0:0:0: [sda] 4096-byte physical blocks
[    3.499675] scsi host4: ata_piix
[    3.516036] scsi host5: ata_piix
[    3.534091] ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0xffa0 irq 14
[    3.568498] ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0xffa8 irq 15
[    3.604101] sd 1:0:1:0: [sdb] Write Protect is off
[    3.629081] sd 1:0:1:0: [sdb] Mode Sense: 0f 00 10 00
[    3.632147] sd 0:0:0:0: [sda] Write Protect is off
[    3.656078] sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
[    3.656324] libphy: Fixed MDIO Bus: probed
[    3.677298] tun: Universal TUN/TAP device driver, 1.6
[    3.705250] PPP generic driver version 2.4.2
[    3.729250] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, supports DPO and FUA
[    3.775740] sd 1:0:1:0: [sdb] Write cache: disabled, read cache: enabled, supports DPO and FUA
[    4.015068] ata1.01: host indicates ignore ATA devices, ignored
[    4.015953] i8042: PNP: PS/2 Controller [PNP0303:PS2K,PNP0f03:PS2M] at 0x60,0x64 irq 1,12
[    4.060801] ata1.00: host indicates ignore ATA devices, ignored
[    4.066633] serio: i8042 KBD port at 0x60,0x64 irq 1
[    4.092944] serio: i8042 AUX port at 0x60,0x64 irq 12
[    4.126250] mousedev: PS/2 mouse device common for all mice
[    4.157530] rtc_cmos 00:00: RTC can wake from S4
[    4.183052] ata2.01: NODEV after polling detection
[    4.255224] input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
[    4.319586]  sda: sda1
[    4.332477]  sdb: sdb1
[    4.347025] rtc_cmos 00:00: rtc core: registered rtc_cmos as rtc0
[    4.399276] ata2.00: ATAPI: Virtual CD, , max MWDMA2
[    4.431517] rtc_cmos 00:00: alarms up to one month, 114 bytes nvram
[    4.478880] ata2.00: configured for MWDMA2
[    4.503424] sd 0:0:0:0: [sda] Attached SCSI disk
[    4.530102] sd 1:0:1:0: [sdb] Attached SCSI disk
[    4.555850] scsi 5:0:0:0: CD-ROM            Msft     Virtual CD/ROM   1.0  PQ: 0 ANSI: 5
[    4.599863] device-mapper: uevent: version 1.0.3
[    4.634508] device-mapper: ioctl: 4.37.0-ioctl (2017-09-20) initialised: dm-devel@redhat.com
[    4.682894] sr 5:0:0:0: [sr0] scsi3-mmc drive: 0x/0x tray
[    4.710876] cdrom: Uniform CD-ROM driver Revision: 3.20
[    4.736253] NET: Registered protocol family 10
[    4.759155] sr 5:0:0:0: Attached scsi CD-ROM sr0
[    4.759227] sr 5:0:0:0: Attached scsi generic sg2 type 5
[    4.791913] Segment Routing with IPv6
[    4.810028] NET: Registered protocol family 17
[    4.833089] Key type dns_resolver registered
[    4.854511] RAS: Correctable Errors collector initialized.
[    4.881874] registered taskstats version 1
[    4.901323] Loading compiled-in X.509 certificates
[    4.927327] Loaded X.509 cert 'Build time autogenerated kernel key: c07d35edf7b06193c240173a61a8831af71c0a7f'
[    4.989722] zswap: loaded using pool lzo/zbud
[    5.017228] Key type big_key registered
[    5.038668] Key type trusted registered
[    5.059148] Key type encrypted registered
[    5.080028] AppArmor: AppArmor sha1 policy hashing enabled
[    5.118302] ima: No TPM chip found, activating TPM-bypass! (rc=-19)
[    5.158763] evm: HMAC attrs: 0x1
[    5.178170]   Magic number: 14:147:451
[    5.198955] rtc_cmos 00:00: setting system clock to 2018-04-06 22:25:43 UTC (1523053543)
[    5.238029] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found
[    5.270211] EDD information not available.
[    5.290771] PM: Hibernation image not present or could not be loaded.
[    5.301618] Freeing unused kernel memory: 2204K
[    5.324506] Write protecting the kernel read-only data: 18432k
[    5.352821] Freeing unused kernel memory: 2024K
[    5.379593] Freeing unused kernel memory: 520K
[    5.403095] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[    5.434250] x86/mm: Checking user space page tables
[    5.463167] x86/mm: Checked W+X mappings: passed, no W+X pages found.
[    5.669033] hv_vmbus: registering driver hv_netvsc
[    5.695562] pps_core: LinuxPPS API ver. 1 registered
[    5.720359] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
[    5.769365] PTP clock support registered
[    5.791509] hv_utils: Registering HyperV Utility Driver
[    5.817175] hv_vmbus: registering driver hv_util
[    5.846822] hidraw: raw HID events driver (C) Jiri Kosina
[    5.874969] hv_vmbus: registering driver hyperv_fb
[    5.902480] hv_vmbus: registering driver hyperv_keyboard
[    5.963148] hv_vmbus: registering driver hid_hyperv
[    6.004888] input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/device:07/VMBUS:01/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio2/input/input3
[    6.078075] hyperv_fb: Screen resolution: 1152x864, Color depth: 32
[    6.141948] Console: switching to colour frame buffer device 144x54
[    6.240309] hv_utils: Heartbeat IC version 3.0
[    6.295472] input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input4
[    6.391042] hid 0006:045E:0621.0001: input: <UNKNOWN> HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
[    6.473027] AVX2 version of gcm_enc/dec engaged.
[    6.497948] AES CTR mode by8 optimization enabled
[    6.557328] hv_utils: Shutdown IC version 3.0
[    6.648848] hv_utils: TimeSync IC version 4.0
[    6.673141] hv_utils: VSS IC version 5.0
[    8.160014] raid6: sse2x1   gen()  7412 MB/s
[    8.228018] raid6: sse2x1   xor()  6074 MB/s
[    8.292011] raid6: sse2x2   gen()  9345 MB/s
[    8.360014] raid6: sse2x2   xor()  6118 MB/s
[    8.428008] raid6: sse2x4   gen() 11029 MB/s
[    8.496008] raid6: sse2x4   xor()  7675 MB/s
[    8.564007] raid6: avx2x1   gen() 14075 MB/s
[    8.628017] raid6: avx2x1   xor() 10974 MB/s
[    8.696009] raid6: avx2x2   gen() 16084 MB/s
[    8.764006] raid6: avx2x2   xor() 11445 MB/s
[    8.832008] raid6: avx2x4   gen() 19586 MB/s
[    8.896009] raid6: avx2x4   xor() 13778 MB/s
[    8.918065] raid6: using algorithm avx2x4 gen() 19586 MB/s
[    8.946052] raid6: .... xor() 13778 MB/s, rmw enabled
[    8.971096] raid6: using avx2x2 recovery algorithm
[    8.997467] xor: automatically using best checksumming function   avx
[    9.034938] async_tx: api initialized (async)
[    9.197180] Btrfs loaded, crc32c=crc32c-intel
[    9.878854] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
[   12.736684] systemd[1]: systemd 229 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN)
[   12.828511] systemd[1]: Detected virtualization microsoft.
[   12.857060] systemd[1]: Detected architecture x86-64.
[   12.959984] systemd[1]: Set hostname to <ubuntu>.
[   13.030507] systemd[1]: Initializing machine ID from random generator.
[   13.072285] systemd[1]: Installed transient /etc/machine-id file.
[   16.080182] systemd[1]: Created slice User and Session Slice.
[   16.145440] systemd[1]: Started Trigger resolvconf update for networkd DNS.
[   16.218328] systemd[1]: Listening on LVM2 poll daemon socket.
[   16.284244] systemd[1]: Listening on Journal Audit Socket.
[   16.715603] Loading iSCSI transport class v2.0-870.
[   16.799249] EXT4-fs (sda1): re-mounted. Opts: discard
[   16.930631] iscsi: registered transport (tcp)
[   17.249820] iscsi: registered transport (iser)
[   17.802393] systemd[1]: Starting Create Static Device Nodes in /dev...
[   18.023959] systemd[1]: Starting udev Coldplug all Devices...
[   18.079676] systemd[1]: Starting Load/Save Random Seed...
[   18.134921] systemd[1]: Starting Initial cloud-init job (pre-networking)...
[   18.208319] systemd[1]: Starting Apply Kernel Variables...
[   18.268439] systemd[1]: Mounting Configuration File System...
[   18.331261] systemd[1]: Mounting FUSE Control File System...
[   18.391766] systemd[1]: Mounted Configuration File System.
[   18.466967] systemd[1]: Mounted FUSE Control File System.
[   18.531433] systemd[1]: Started Journal Service.
[   19.173423] systemd-journald[430]: Received request to flush runtime journal from PID 1
[   22.430440] hv_vmbus: registering driver hv_balloon
[   22.436123] hv_balloon: Using Dynamic Memory protocol version 2.0
[   22.980830] piix4_smbus 0000:00:07.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
[   29.332120] audit: type=1400 audit(1523053568.148:2): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default" pid=622 comm="apparmor_parser"
[   29.332764] audit: type=1400 audit(1523053568.148:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-cgns" pid=622 comm="apparmor_parser"
[   29.335127] audit: type=1400 audit(1523053568.151:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-mounting" pid=622 comm="apparmor_parser"
[   29.335754] audit: type=1400 audit(1523053568.151:5): apparmor="STATUS" operation="profile_load" profile="unconfined" name="lxc-container-default-with-nesting" pid=622 comm="apparmor_parser"
[   30.198130] audit: type=1400 audit(1523053569.014:6): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/sbin/dhclient" pid=624 comm="apparmor_parser"
[   30.198680] audit: type=1400 audit(1523053569.014:7): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=624 comm="apparmor_parser"
[   30.199133] audit: type=1400 audit(1523053569.015:8): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-helper" pid=624 comm="apparmor_parser"
[   30.199626] audit: type=1400 audit(1523053569.015:9): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=624 comm="apparmor_parser"
[   30.235163] audit: type=1400 audit(1523053569.051:10): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/bin/lxc-start" pid=626 comm="apparmor_parser"
[   30.343061] audit: type=1400 audit(1523053569.158:11): apparmor="STATUS" operation="profile_load" profile="unconfined" name="/usr/lib/lxd/lxd-bridge-proxy" pid=628 comm="apparmor_parser"
[   30.933376] sd 0:0:0:0: [storvsc] Sense Key : Illegal Request [current]
[   30.933379] sd 0:0:0:0: [storvsc] Add. Sense: Invalid command operation code
[   30.933388] sd 1:0:1:0: [storvsc] Sense Key : Illegal Request [current]
[   30.933389] sd 1:0:1:0: [storvsc] Add. Sense: Invalid command operation code
[   30.933991] sd 0:0:0:0: [storvsc] Sense Key : Illegal Request [current]
[   30.933993] sd 0:0:0:0: [storvsc] Add. Sense: Invalid command operation code
[   30.933998] sd 1:0:1:0: [storvsc] Sense Key : Illegal Request [current]
[   30.933999] sd 1:0:1:0: [storvsc] Add. Sense: Invalid command operation code
[   33.943399] UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2018/04/07 00:00 (1000)
[   53.677055] EXT4-fs (sda1): resizing filesystem from 576000 to 13106939 blocks
[   54.672997] EXT4-fs (sda1): resized filesystem to 13106939
[   56.534179]  sdb: sdb1
[   69.822405] random: crng init done
[   70.566630] hv_balloon: Max. dynamic memory size: 1024 MB
[   71.227019] EXT4-fs (sdb1): mounted filesystem with ordered data mode. Opts: (null)
[   75.997325] hv_utils: KVP IC version 4.0
[   75.997331] hv_utils: KVP IC version 4.0
[   76.125163] hv_utils: VSS: userspace daemon ver. 129 connected
[   76.423337] new mount options do not match the existing superblock, will be ignored
[  115.094130] ip_tables: (C) 2000-2006 Netfilter Core Team
[  115.345508] nf_conntrack version 0.5.0 (7680 buckets, 30720 max)
[  354.095950] kauditd_printk_skb: 4 callbacks suppressed
[  354.095951] audit: type=1400 audit(1523053892.911:16): apparmor="STATUS" operation="profile_load" profile="unconfined" name="docker-default" pid=6709 comm="apparmor_parser"
[  354.375761] bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
[  354.383964] Bridge firewalling registered
[  354.577900] Initializing XFRM netlink socket
[  354.651299] IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
[  693.128432] Ebtables v2.0 registered
[  697.483122] docker0: port 1(vethd358b3c) entered blocking state
[  697.483125] docker0: port 1(vethd358b3c) entered disabled state
[  697.483212] device vethd358b3c entered promiscuous mode
[  697.483388] IPv6: ADDRCONF(NETDEV_UP): vethd358b3c: link is not ready
[  701.083709] eth0: renamed from vethda7291f
[  701.084388] IPv6: ADDRCONF(NETDEV_CHANGE): vethd358b3c: link becomes ready
[  701.084448] docker0: port 1(vethd358b3c) entered blocking state
[  701.084449] docker0: port 1(vethd358b3c) entered forwarding state
[  701.084502] IPv6: ADDRCONF(NETDEV_CHANGE): docker0: link becomes ready
[  853.221644] docker0: port 1(vethd358b3c) entered disabled state
[  853.221726] vethda7291f: renamed from eth0
[  853.723493] docker0: port 1(vethd358b3c) entered disabled state
[  853.723792] device vethd358b3c left promiscuous mode
[  853.723795] docker0: port 1(vethd358b3c) entered disabled state
[  858.389393] systemd-journald[430]: Received SIGTERM from PID 1 (systemd).
[  858.390083] systemd[1]: Stopping Journal Service...
[  858.412918] systemd[1]: Stopped Journal Service.
[  858.447491] systemd[1]: Starting Journal Service...
[  859.094805] systemd[1]: Started Journal Service.
[  926.875778] audit: type=1400 audit(1523054465.691:17): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/sbin/dhclient" pid=10716 comm="apparmor_parser"
[  926.876681] audit: type=1400 audit(1523054465.692:18): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=10716 comm="apparmor_parser"
[  926.877181] audit: type=1400 audit(1523054465.693:19): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/lib/NetworkManager/nm-dhcp-helper" pid=10716 comm="apparmor_parser"
[  926.877603] audit: type=1400 audit(1523054465.693:20): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=10716 comm="apparmor_parser"
[  929.554168] hv_utils: VSS: userspace daemon ver. 129 connected
[ 1152.873542] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1152.880731] cbr0: port 1(veth6ab48afb) entered blocking state
[ 1152.880732] cbr0: port 1(veth6ab48afb) entered disabled state
[ 1152.880799] device veth6ab48afb entered promiscuous mode
[ 1152.880925] cbr0: port 1(veth6ab48afb) entered blocking state
[ 1152.880926] cbr0: port 1(veth6ab48afb) entered forwarding state
[ 1152.881180] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1153.569504] device cbr0 entered promiscuous mode
[ 1153.793757] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1153.801378] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1153.801608] cbr0: port 2(veth75836a77) entered blocking state
[ 1153.801610] cbr0: port 2(veth75836a77) entered disabled state
[ 1153.801687] device veth75836a77 entered promiscuous mode
[ 1153.801723] cbr0: port 2(veth75836a77) entered blocking state
[ 1153.801724] cbr0: port 2(veth75836a77) entered forwarding state
[ 1247.326052] Netfilter messages via NETLINK v0.30.
[ 1247.679098] ctnetlink v0.93: registering with nfnetlink.
[ 1328.016253] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1328.028642] cbr0: port 3(veth975f0a01) entered blocking state
[ 1328.028644] cbr0: port 3(veth975f0a01) entered disabled state
[ 1328.028698] device veth975f0a01 entered promiscuous mode
[ 1328.028735] cbr0: port 3(veth975f0a01) entered blocking state
[ 1328.028736] cbr0: port 3(veth975f0a01) entered forwarding state
[ 1328.029056] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1642.773322] IPv6: ADDRCONF(NETDEV_UP): tun0: link is not ready
[ 1710.284819] IPv6: ADDRCONF(NETDEV_CHANGE): tun0: link becomes ready
[44564.737116] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[44564.748628] cbr0: port 4(veth08e09e71) entered blocking state
[44564.748630] cbr0: port 4(veth08e09e71) entered disabled state
[44564.748745] device veth08e09e71 entered promiscuous mode
[44564.748780] cbr0: port 4(veth08e09e71) entered blocking state
[44564.748782] cbr0: port 4(veth08e09e71) entered forwarding state
[44564.749126] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[50747.673978] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[50747.685680] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[50747.685947] cbr0: port 5(vethfb7574fc) entered blocking state
[50747.685949] cbr0: port 5(vethfb7574fc) entered disabled state
[50747.686014] device vethfb7574fc entered promiscuous mode
[50747.686069] cbr0: port 5(vethfb7574fc) entered blocking state
[50747.686070] cbr0: port 5(vethfb7574fc) entered forwarding state
[50772.174637] cbr0: port 4(veth08e09e71) entered disabled state
[50772.175310] device veth08e09e71 left promiscuous mode
[50772.175313] cbr0: port 4(veth08e09e71) entered disabled state
[50875.345067] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[50875.357568] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[50875.357843] cbr0: port 4(vethdc4d23c2) entered blocking state
[50875.357845] cbr0: port 4(vethdc4d23c2) entered disabled state
[50875.357926] device vethdc4d23c2 entered promiscuous mode
[50875.358002] cbr0: port 4(vethdc4d23c2) entered blocking state
[50875.358013] cbr0: port 4(vethdc4d23c2) entered forwarding state
[50901.973694] cbr0: port 5(vethfb7574fc) entered disabled state
[50901.974792] device vethfb7574fc left promiscuous mode
[50901.974799] cbr0: port 5(vethfb7574fc) entered disabled state
[50990.874957] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[50990.881375] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[50990.881660] cbr0: port 5(vethce37d5f8) entered blocking state
[50990.881662] cbr0: port 5(vethce37d5f8) entered disabled state
[50990.881753] device vethce37d5f8 entered promiscuous mode
[50990.881792] cbr0: port 5(vethce37d5f8) entered blocking state
[50990.881793] cbr0: port 5(vethce37d5f8) entered forwarding state
[51020.374406] cbr0: port 4(vethdc4d23c2) entered disabled state
[51020.375087] device vethdc4d23c2 left promiscuous mode
[51020.375090] cbr0: port 4(vethdc4d23c2) entered disabled state
[51048.797580] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[51048.809869] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[51048.810241] cbr0: port 4(veth7c9d5217) entered blocking state
[51048.810244] cbr0: port 4(veth7c9d5217) entered disabled state
[51048.811685] device veth7c9d5217 entered promiscuous mode
[51048.812155] cbr0: port 4(veth7c9d5217) entered blocking state
[51048.812157] cbr0: port 4(veth7c9d5217) entered forwarding state
[51072.623711] cbr0: port 5(vethce37d5f8) entered disabled state
[51072.624714] device vethce37d5f8 left promiscuous mode
[51072.624717] cbr0: port 5(vethce37d5f8) entered disabled state
[51125.928330] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[51125.954028] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[51125.954345] cbr0: port 5(vetha23b288f) entered blocking state
[51125.954346] cbr0: port 5(vetha23b288f) entered disabled state
[51125.954410] device vetha23b288f entered promiscuous mode
[51125.954453] cbr0: port 5(vetha23b288f) entered blocking state
[51125.954455] cbr0: port 5(vetha23b288f) entered forwarding state
[51127.840271] cbr0: port 5(vetha23b288f) entered disabled state
[51127.841173] device vetha23b288f left promiscuous mode
[51127.841176] cbr0: port 5(vetha23b288f) entered disabled state
[51129.961181] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[51129.969419] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[51129.969750] cbr0: port 5(vethb7bed1db) entered blocking state
[51129.969751] cbr0: port 5(vethb7bed1db) entered disabled state
[51129.969833] device vethb7bed1db entered promiscuous mode
[51129.969885] cbr0: port 5(vethb7bed1db) entered blocking state
[51129.969886] cbr0: port 5(vethb7bed1db) entered forwarding state
[51130.198994] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[51130.209569] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[51130.211758] cbr0: port 6(veth3f4eeef9) entered blocking state
[51130.211759] cbr0: port 6(veth3f4eeef9) entered disabled state
[51130.212961] device veth3f4eeef9 entered promiscuous mode
[51130.213019] cbr0: port 6(veth3f4eeef9) entered blocking state
[51130.213020] cbr0: port 6(veth3f4eeef9) entered forwarding state
[51136.679241] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[51136.685474] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[51136.685728] cbr0: port 7(veth5a4de8a4) entered blocking state
[51136.685730] cbr0: port 7(veth5a4de8a4) entered disabled state
[51136.685791] device veth5a4de8a4 entered promiscuous mode
[51136.685830] cbr0: port 7(veth5a4de8a4) entered blocking state
[51136.685831] cbr0: port 7(veth5a4de8a4) entered forwarding state
[51174.833391] cbr0: port 4(veth7c9d5217) entered disabled state
[51174.833975] device veth7c9d5217 left promiscuous mode
[51174.833978] cbr0: port 4(veth7c9d5217) entered disabled state
[51623.422633] cbr0: port 3(veth975f0a01) entered disabled state
[51623.423429] device veth975f0a01 left promiscuous mode
[51623.423432] cbr0: port 3(veth975f0a01) entered disabled state
[51633.372516] cbr0: port 1(veth6ab48afb) entered disabled state
[51633.373229] device veth6ab48afb left promiscuous mode
[51633.373243] cbr0: port 1(veth6ab48afb) entered disabled state
[51637.688997] cbr0: port 6(veth3f4eeef9) entered disabled state
[51637.689784] device veth3f4eeef9 left promiscuous mode
[51637.689787] cbr0: port 6(veth3f4eeef9) entered disabled state
[51637.742537] cbr0: port 5(vethb7bed1db) entered disabled state
[51637.743233] device vethb7bed1db left promiscuous mode
[51637.743236] cbr0: port 5(vethb7bed1db) entered disabled state
[51637.835968] cbr0: port 2(veth75836a77) entered disabled state
[51637.836853] device veth75836a77 left promiscuous mode
[51637.836856] cbr0: port 2(veth75836a77) entered disabled state
[51638.008571] cbr0: port 7(veth5a4de8a4) entered disabled state
[51638.009253] device veth5a4de8a4 left promiscuous mode
[51638.009256] cbr0: port 7(veth5a4de8a4) entered disabled state
[51939.739188] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[51939.749418] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[51939.749722] cbr0: port 1(vethff35c160) entered blocking state
[51939.749724] cbr0: port 1(vethff35c160) entered disabled state
[51939.749856] device vethff35c160 entered promiscuous mode
[51939.749948] cbr0: port 1(vethff35c160) entered blocking state
[51939.749950] cbr0: port 1(vethff35c160) entered forwarding state
[52988.977394] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[52988.985763] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[52988.986880] cbr0: port 2(veth6d7aed8f) entered blocking state
[52988.986894] cbr0: port 2(veth6d7aed8f) entered disabled state
[52988.988190] device veth6d7aed8f entered promiscuous mode
[52988.988253] cbr0: port 2(veth6d7aed8f) entered blocking state
[52988.988254] cbr0: port 2(veth6d7aed8f) entered forwarding state
[52989.175216] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[52989.182060] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[52989.182313] cbr0: port 3(veth9c002b65) entered blocking state
[52989.182314] cbr0: port 3(veth9c002b65) entered disabled state
[52989.182410] device veth9c002b65 entered promiscuous mode
[52989.182474] cbr0: port 3(veth9c002b65) entered blocking state
[52989.182475] cbr0: port 3(veth9c002b65) entered forwarding state
[52989.256470] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[52989.265534] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[52989.265892] cbr0: port 4(veth3ece7141) entered blocking state
[52989.265894] cbr0: port 4(veth3ece7141) entered disabled state
[52989.265980] device veth3ece7141 entered promiscuous mode
[52989.266086] cbr0: port 4(veth3ece7141) entered blocking state
[52989.266088] cbr0: port 4(veth3ece7141) entered forwarding state
[53235.919429] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[53235.929532] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[53235.929748] cbr0: port 5(veth18b0e026) entered blocking state
[53235.929749] cbr0: port 5(veth18b0e026) entered disabled state
[53235.929825] device veth18b0e026 entered promiscuous mode
[53235.929864] cbr0: port 5(veth18b0e026) entered blocking state
[53235.929866] cbr0: port 5(veth18b0e026) entered forwarding state
[53239.684648] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[53239.693659] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[53239.693862] cbr0: port 6(veth69d6ae5e) entered blocking state
[53239.693864] cbr0: port 6(veth69d6ae5e) entered disabled state
[53239.693919] device veth69d6ae5e entered promiscuous mode
[53239.693954] cbr0: port 6(veth69d6ae5e) entered blocking state
[53239.693956] cbr0: port 6(veth69d6ae5e) entered forwarding state
[53336.417995] cbr0: port 4(veth3ece7141) entered disabled state
[53336.419229] device veth3ece7141 left promiscuous mode
[53336.419232] cbr0: port 4(veth3ece7141) entered disabled state
[53336.699987] cbr0: port 2(veth6d7aed8f) entered disabled state
[53336.700985] device veth6d7aed8f left promiscuous mode
[53336.700989] cbr0: port 2(veth6d7aed8f) entered disabled state
[53344.530806] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[53344.541713] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[53344.542019] cbr0: port 2(veth33cd9931) entered blocking state
[53344.542021] cbr0: port 2(veth33cd9931) entered disabled state
[53344.542161] device veth33cd9931 entered promiscuous mode
[53344.542204] cbr0: port 2(veth33cd9931) entered blocking state
[53344.542206] cbr0: port 2(veth33cd9931) entered forwarding state
[53353.872140] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[53353.881499] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[53353.881718] cbr0: port 4(veth38f04261) entered blocking state
[53353.881720] cbr0: port 4(veth38f04261) entered disabled state
[53353.881799] device veth38f04261 entered promiscuous mode
[53353.881839] cbr0: port 4(veth38f04261) entered blocking state
[53353.881841] cbr0: port 4(veth38f04261) entered forwarding state
[53382.869118] cbr0: port 3(veth9c002b65) entered disabled state
[53382.870035] device veth9c002b65 left promiscuous mode
[53382.870038] cbr0: port 3(veth9c002b65) entered disabled state
[53400.868904] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[53400.881930] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[53400.882151] cbr0: port 3(veth2efc1aea) entered blocking state
[53400.882153] cbr0: port 3(veth2efc1aea) entered disabled state
[53400.882202] device veth2efc1aea entered promiscuous mode
[53400.882240] cbr0: port 3(veth2efc1aea) entered blocking state
[53400.882241] cbr0: port 3(veth2efc1aea) entered forwarding state
[53407.169014] cbr0: port 1(vethff35c160) entered disabled state
[53407.169844] device vethff35c160 left promiscuous mode
[53407.169847] cbr0: port 1(vethff35c160) entered disabled state

NIC logs:

$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:0d:3a:1e:00:84
          inet addr:10.240.0.4  Bcast:10.240.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20d:3aff:fe1e:84/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2351202 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1452121 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2044348334 (2.0 GB)  TX bytes:257285038 (257.2 MB)

journalctl records for the kubelet service:

$journalctl -u kubelet
Hint: You are currently not seeing messages from other users and the system.
      Users in the 'systemd-journal' group can see all messages. Pass -q to
      turn off this notice.
-- No entries --

and a snapshot of the node's resource usage:
(screenshot: node resource usage, 2018-04-07 5:21:39 PM)

@seanknox
Contributor

seanknox commented Aug 2, 2018

Closing due to inactivity. Feel free to re-open if this is still an issue.

@seanknox seanknox closed this as completed Aug 2, 2018
@akolodkin

We are experiencing the same behavior; the cluster is losing nodes under load, especially with a 1-core setup (DS1 VM).

@DenisBiondic

Same here -> multiple nodes in NotReady status, presumably because we don't have any resource quotas on our pods yet.

kubectl describe node shows:

Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                    Message
  ----             ------    -----------------                 ------------------                ------                    -------
  OutOfDisk        Unknown   Tue, 07 Aug 2018 20:05:51 +0200   Tue, 07 Aug 2018 20:06:35 +0200   NodeStatusUnknown         Kubelet stopped posting node status.
  MemoryPressure   Unknown   Tue, 07 Aug 2018 20:05:51 +0200   Tue, 07 Aug 2018 20:06:35 +0200   NodeStatusUnknown         Kubelet stopped posting node status.
  DiskPressure     Unknown   Tue, 07 Aug 2018 20:05:51 +0200   Tue, 07 Aug 2018 20:06:35 +0200   NodeStatusUnknown         Kubelet stopped posting node status.
  PIDPressure      False     Tue, 07 Aug 2018 20:05:51 +0200   Fri, 03 Aug 2018 00:14:57 +0200   KubeletHasSufficientPID   kubelet has sufficient PID available
  Ready            Unknown   Tue, 07 Aug 2018 20:05:51 +0200   Tue, 07 Aug 2018 20:06:35 +0200   NodeStatusUnknown         Kubelet stopped posting node status.

Nodes themselves:

aks-nodepool1-37134528-0 NotReady agent 4d v1.10.6
aks-nodepool1-37134528-1 NotReady agent 4d v1.10.6
aks-nodepool1-37134528-2 NotReady agent 4d v1.10.6

What is spooky is that the nodes go NotReady at exactly 20:05 every evening, and are back in Ready state at 08:00 in the morning.
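
For reference, resource requests and limits of the kind mentioned above are set per container in the pod spec. A minimal sketch (the pod name, image, and values below are placeholders, not from this cluster):

```yaml
# Hypothetical pod spec fragment: requests/limits keep a single pod from
# starving the kubelet and system daemons on a small node under load.
apiVersion: v1
kind: Pod
metadata:
  name: load-test-app        # placeholder name
spec:
  containers:
  - name: app
    image: example.azurecr.io/app:latest   # placeholder image
    resources:
      requests:              # scheduler reserves this much for the pod
        cpu: "250m"
        memory: "256Mi"
      limits:                # pod is throttled/evicted beyond this
        cpu: "500m"
        memory: "512Mi"
```

Combined with a namespace LimitRange or ResourceQuota, this can stop unbounded pods from pushing a node into NotReady during load tests.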

@mappindrones

mappindrones commented Sep 18, 2018

Experiencing a similar issue in a cluster built through acs-engine:
agent v1.10.1
Ubuntu 16.04.4 LTS
4.15.0-1023-azure
docker://1.13.1

Experiencing all manner of instability: 504s, containers losing connectivity to the db, random container restarts. We've applied the azure-cni-networkmonitor daemonset "patch" but are still experiencing a high level of networking issues.

Sep 18 06:32:07 k8s-backup-37692245-8 kernel: [25553.088277] IPv6: ADDRCONF(NETDEV_UP): azveth592e4ac-2: link is not ready
Sep 18 06:32:07 k8s-backup-37692245-8 kernel: [25553.088302] IPv6: ADDRCONF(NETDEV_CHANGE): azveth592e4ac-2: link becomes ready
Sep 18 06:32:07 k8s-backup-37692245-8 kernel: [25553.088359] IPv6: ADDRCONF(NETDEV_CHANGE): azveth592e4ac: link becomes ready
Sep 18 06:32:07 k8s-backup-37692245-8 kernel: [25553.088375] azure0: port 8(azveth592e4ac) entered blocking state
Sep 18 06:32:07 k8s-backup-37692245-8 kernel: [25553.088377] azure0: port 8(azveth592e4ac) entered forwarding state
Sep 18 06:32:07 k8s-backup-37692245-8 kernel: [25553.088954] azure0: port 8(azveth592e4ac) entered disabled state
Sep 18 06:32:07 k8s-backup-37692245-8 kernel: [25553.089055] eth0: renamed from azveth592e4ac-2
Sep 18 06:32:07 k8s-backup-37692245-8 kernel: [25553.112188] azure0: port 8(azveth592e4ac) entered blocking state
Sep 18 06:32:07 k8s-backup-37692245-8 kernel: [25553.112191] azure0: port 8(azveth592e4ac) entered forwarding state
Sep 18 06:32:43 k8s-backup-37692245-8 kernel: [25589.545719] azure0: port 8(azveth592e4ac) entered disabled state
Sep 18 06:32:43 k8s-backup-37692245-8 kernel: [25589.546158] device azveth592e4ac left promiscuous mode
Sep 18 06:32:43 k8s-backup-37692245-8 kernel: [25589.546179] azure0: port 8(azveth592e4ac) entered disabled state
Sep 18 06:50:05 k8s-backup-37692245-8 kernel: [26631.411464] IPv6: ADDRCONF(NETDEV_UP): azveth161ae8b: link is not ready
Sep 18 06:50:05 k8s-backup-37692245-8 kernel: [26631.411977] azure0: port 8(azveth161ae8b) entered blocking state
Sep 18 06:50:05 k8s-backup-37692245-8 kernel: [26631.411978] azure0: port 8(azveth161ae8b) entered disabled state
Sep 18 06:50:05 k8s-backup-37692245-8 kernel: [26631.412104] device azveth161ae8b entered promiscuous mode
Sep 18 06:50:05 k8s-backup-37692245-8 kernel: [26631.452272] IPv6: ADDRCONF(NETDEV_UP): azveth161ae8b-2: link is not ready
Sep 18 06:50:05 k8s-backup-37692245-8 kernel: [26631.452281] IPv6: ADDRCONF(NETDEV_CHANGE): azveth161ae8b-2: link becomes ready
Sep 18 06:50:05 k8s-backup-37692245-8 kernel: [26631.452370] IPv6: ADDRCONF(NETDEV_CHANGE): azveth161ae8b: link becomes ready
Sep 18 06:50:05 k8s-backup-37692245-8 kernel: [26631.452384] azure0: port 8(azveth161ae8b) entered blocking state
Sep 18 06:50:05 k8s-backup-37692245-8 kernel: [26631.452387] azure0: port 8(azveth161ae8b) entered forwarding state
Sep 18 06:50:05 k8s-backup-37692245-8 kernel: [26631.453055] azure0: port 8(azveth161ae8b) entered disabled state
Sep 18 06:50:05 k8s-backup-37692245-8 kernel: [26631.453182] eth0: renamed from azveth161ae8b-2
Sep 18 06:50:05 k8s-backup-37692245-8 kernel: [26631.500179] azure0: port 8(azveth161ae8b) entered blocking state
Sep 18 06:50:05 k8s-backup-37692245-8 kernel: [26631.500182] azure0: port 8(azveth161ae8b) entered forwarding state
Sep 18 06:50:44 k8s-backup-37692245-8 kernel: [26669.830613] azure0: port 8(azveth161ae8b) entered disabled state
Sep 18 06:50:44 k8s-backup-37692245-8 kernel: [26669.831078] device azveth161ae8b left promiscuous mode
Sep 18 06:50:44 k8s-backup-37692245-8 kernel: [26669.831116] azure0: port 8(azveth161ae8b) entered disabled state
Sep 18 06:54:08 k8s-backup-37692245-8 kernel: [26873.581141] IPv6: ADDRCONF(NETDEV_UP): azvethdf7a741: link is not ready
Sep 18 06:54:08 k8s-backup-37692245-8 kernel: [26873.581615] azure0: port 8(azvethdf7a741) entered blocking state
Sep 18 06:54:08 k8s-backup-37692245-8 kernel: [26873.581616] azure0: port 8(azvethdf7a741) entered disabled state
Sep 18 06:54:08 k8s-backup-37692245-8 kernel: [26873.581678] device azvethdf7a741 entered promiscuous mode

ifconfig shows packets being dropped:

azure0    Link encap:Ethernet  HWaddr 00:0d:3a:06:b7:a1
          inet addr:10.100.3.42  Bcast:0.0.0.0  Mask:255.255.248.0
          inet6 addr: fe80::20d:3aff:fe06:b7a1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1396090 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1490274 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5380455646 (5.3 GB)  TX bytes:1404949910 (1.4 GB)

azveth2396295 Link encap:Ethernet  HWaddr 62:e2:52:71:ee:3d
          inet6 addr: fe80::60e2:52ff:fe71:ee3d/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:4567643 errors:0 dropped:19276 overruns:0 frame:0
          TX packets:2407275 errors:0 dropped:11 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2123881352 (2.1 GB)  TX bytes:2065597278 (2.0 GB)

azveth21783c1 Link encap:Ethernet  HWaddr 6a:58:58:71:62:86
          inet6 addr: fe80::6858:58ff:fe71:6286/64 Scope:Link
          UP BROADCAST RUNNING  MTU:1500  Metric:1
          RX packets:1732555 errors:0 dropped:2 overruns:0 frame:0
          TX packets:1417395 errors:0 dropped:2 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:17552342445 (17.5 GB)  TX bytes:4175873468 (4.1 GB)
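A quick way to scan output like the above for leaky interfaces is to parse the `dropped:` counters per interface. This is a sketch (not from the thread) that flags any interface whose combined RX/TX drop count crosses a threshold; the sample text is trimmed from the ifconfig dump above.

```python
import re

# Trimmed sample of the ifconfig output shown above.
IFCONFIG_SAMPLE = """\
azveth2396295 Link encap:Ethernet  HWaddr 62:e2:52:71:ee:3d
          RX packets:4567643 errors:0 dropped:19276 overruns:0 frame:0
          TX packets:2407275 errors:0 dropped:11 overruns:0 carrier:0

azveth21783c1 Link encap:Ethernet  HWaddr 6a:58:58:71:62:86
          RX packets:1732555 errors:0 dropped:2 overruns:0 frame:0
          TX packets:1417395 errors:0 dropped:2 overruns:0 carrier:0
"""

def dropped_by_interface(text, threshold=100):
    """Return {iface: (rx_dropped, tx_dropped)} for interfaces over threshold."""
    result = {}
    iface = None
    rx = tx = 0
    for line in text.splitlines():
        m = re.match(r"^(\S+)\s+Link encap", line)
        if m:
            iface = m.group(1)  # new interface stanza starts
            rx = tx = 0
        m = re.search(r"(RX|TX) packets:\d+ errors:\d+ dropped:(\d+)", line)
        if m and iface:
            if m.group(1) == "RX":
                rx = int(m.group(2))
            else:
                tx = int(m.group(2))
            if rx + tx > threshold:
                result[iface] = (rx, tx)
    return result

print(dropped_by_interface(IFCONFIG_SAMPLE))  # {'azveth2396295': (19276, 11)}
```

With the sample above, only `azveth2396295` is flagged (19276 RX drops); the same parser can be fed live `ifconfig` output.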

@jnoller
Copy link
Contributor

jnoller commented Jun 19, 2019

PLEG unhealthy is a known defect in upstream Kubernetes; patches look likely to land in k8s 1.16:
kubernetes/kubernetes#45419
kubernetes/kubernetes#61117
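A PLEG failure surfaces in the node's `Ready` condition (the message typically contains "PLEG is not healthy"). As a sketch, assuming the JSON shape produced by `kubectl get nodes -o json`, the following filters out NotReady nodes and prints the reason and message; the sample node name and message are illustrative.

```python
import json

# Illustrative sample in the shape of `kubectl get nodes -o json`.
SAMPLE = json.loads("""{
  "items": [
    {"metadata": {"name": "k8s-backup-37692245-8"},
     "status": {"conditions": [
       {"type": "MemoryPressure", "status": "False"},
       {"type": "Ready", "status": "False",
        "reason": "KubeletNotReady",
        "message": "PLEG is not healthy: pleg was last seen active 10m ago"}]}}
  ]
}""")

def not_ready_nodes(node_list):
    """Yield (name, reason, message) for nodes whose Ready condition != True."""
    for node in node_list["items"]:
        for cond in node["status"]["conditions"]:
            if cond["type"] == "Ready" and cond["status"] != "True":
                yield (node["metadata"]["name"],
                       cond.get("reason", ""), cond.get("message", ""))

for name, reason, message in not_ready_nodes(SAMPLE):
    print(name, reason, message)
```

Piping real cluster output through this (instead of the embedded sample) shows at a glance whether the NotReady flips in this thread coincide with PLEG messages.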

@mingtwan-zz
Copy link

We are experiencing the same issue after deploying a StatefulSet that contains PVCs; disk provisioning seems to have caused it. The node's "Resource health" page says "We're sorry, your virtual machine is unavailable because of connectivity loss to the remote disk".

@mooperd
Copy link

mooperd commented Oct 30, 2019 via email

@jnoller
Copy link
Contributor

jnoller commented Oct 30, 2019

@mooperd you’re probably right. If you have a cluster with a 1 TB disk (which I think is a P4-class premium disk with a maximum of 200 IOPS), disk IO contention pushes the OS disk IO high enough that this occurs.

@mooperd
Copy link

mooperd commented Oct 30, 2019 via email

@jnoller
Copy link
Contributor

jnoller commented Oct 30, 2019

@mooperd I've been debugging this for the service. You're right for most normal IO cases, and top will not show the underlying throttling of the OS disk. Go back to my example:

A node with a 1 TB OS disk, running Linux.

A 1 TB disk on that page has a maximum throughput, but it also has a maximum IOPS: 5000 IOPS per disk, and that is your OS disk.

Now factor in the size of the containers: larger Docker containers generate worse disk IO patterns, and the Azure system counts each 256 KiB of IO as one IOP.

On top of that, the OS disk also serves the Docker daemon, the kubelet, and in-memory FS drivers (say, CIFS, etc.).

Looking at kube-metrics data only shows the in-memory Kubernetes object view, not the OS/Docker level, which means it misses the system-level IO calls.

This means that in addition to the normal per-disk limits, you also have the VM-level cache and throughput limits, etc.:
https://blogs.technet.microsoft.com/xiangwu/2017/05/14/azure-vm-storage-performance-and-throttling-demystify/

[Screenshot: Azure managed disk sizes/IOPS table]
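The 256 KiB accounting above means one large request can consume several "IOPS" against the per-disk cap. This sketch (an assumption based on the comment above and Azure's published throttling rules; the function names are mine) works through the arithmetic:

```python
import math

IO_UNIT = 256 * 1024  # bytes per counted IO unit

def io_units(request_bytes):
    """IO units a single request consumes (a 1 MiB write counts as 4)."""
    return max(1, math.ceil(request_bytes / IO_UNIT))

def max_throughput_bytes_per_sec(iops_cap, request_bytes):
    """Best-case throughput at a given IOPS cap and fixed request size."""
    requests_per_sec = iops_cap / io_units(request_bytes)
    return requests_per_sec * request_bytes

# A 5000-IOPS disk doing 1 MiB requests: each request costs 4 units,
# so only 1250 requests/s fit under the cap.
print(io_units(1024 * 1024))  # 4
print(max_throughput_bytes_per_sec(5000, 1024 * 1024) / (1024 * 1024))  # 1250.0 MiB/s
```

Note this is an upper bound from the IOPS cap alone; real disks also enforce a separate MB/s throughput cap that usually bites first for large requests.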

@jnoller
Copy link
Contributor

jnoller commented Jan 8, 2020

Please also see this issue for intermittent NodeNotReady, DNS latency and other crashes related to system load: #1373

@ghost ghost added the action-required label Jul 22, 2020
@ghost
Copy link

ghost commented Jul 27, 2020

Action required from @Azure/aks-pm

@ghost ghost added the Needs Attention 👋 Issues needs attention/assignee/owner label Jul 27, 2020
@ghost
Copy link

ghost commented Aug 6, 2020

Issue needing attention of @Azure/aks-leads

@palma21 palma21 added stale Stale issue and removed Needs Attention 👋 Issues needs attention/assignee/owner action-required bug known-issue labels Aug 6, 2020
@ghost ghost removed the stale Stale issue label Aug 6, 2020
@palma21 palma21 added the stale Stale issue label Aug 6, 2020
@ghost ghost removed the stale Stale issue label Aug 6, 2020
@ghost
Copy link

ghost commented Aug 7, 2020

@Azure/aks-pm issue needs labels

3 similar comments

@palma21
Copy link
Member

palma21 commented Aug 10, 2020

Sorry about the spam. The bot issue should be fixed now.

A lot of the issues on this ticket have been solved or mitigated upstream or in recent versions of AKS.

Since this thread has run fairly long, we realize perhaps not all problems have been addressed. If you kindly open a ticket with your specific issue, we can look into it.

Please also refer to recent features that will add increased stability and resilience:

  1. https://docs.microsoft.com/en-us/azure/aks/node-auto-repair
  2. https://docs.microsoft.com/en-us/azure/aks/cluster-configuration#container-runtime-configuration-preview
  3. https://docs.microsoft.com/en-us/azure/aks/cluster-configuration#ephemeral-os-preview
  4. Knode, which can be enabled by the platform via support request and will be exposed in the API soon: https://github.com/juan-lee/knode

@palma21 palma21 closed this as completed Aug 10, 2020
@ghost ghost locked as resolved and limited conversation to collaborators Sep 10, 2020