
Cannot open the file /var/lib/netdata/health.silencers.json #45

Closed · happysalada opened this issue Sep 9, 2019 · 16 comments · Fixed by #60
Comments

@happysalada

With netdata 1.17 I get a new issue on the master:

2019-09-09 19:09:44: netdata ERROR : MAIN : Cannot open the file /var/lib/netdata/health.silencers.json (errno 2, No such file or directory)
2019-09-09 19:09:44: netdata FATAL : MAIN :Cannot create directory '/var/lib/netdata/registry'. # : Invalid argument

2019-09-09 19:09:44: netdata INFO  : MAIN : /usr/libexec/netdata/plugins.d/anonymous-statistics.sh 'FATAL' 'netdata:MAIN' '0023@registry/r:registry_init  /13'
2019-09-09 19:09:45: netdata INFO  : MAIN : EXIT: netdata prepares to exit with code 1...
2019-09-09 19:09:45: netdata INFO  : MAIN : /usr/libexec/netdata/plugins.d/anonymous-statistics.sh 'EXIT' 'ERROR' '-'
2019-09-09 19:09:45: netdata INFO  : MAIN : EXIT: cleaning up the database...

Here is my values.yml (I have only updated the image tag and the storage class for the volumes):

replicaCount: 1

image:
  repository: netdata/netdata
  tag: v1.17.0
  pullPolicy: Always

sysctlImage:
  enabled: false
  repository: alpine
  tag: latest
  pullPolicy: Always
  command: []

service:
  type: ClusterIP
  port: 19999

ingress:
  enabled: false
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - netdata.k8s.local
#  tls:
#    - secretName: netdata-tls
#      hosts:
#        - netdata.k8s.local

rbac:
  create: true

serviceAccount:
  create: true
  name: netdata


master:
  resources: {}
    # limits:
    #  cpu: 4
    #  memory: 4096Mi
    # requests:
    #  cpu: 4
    #  memory: 4096Mi

  nodeSelector: {}

  tolerations: []

  affinity: {}

  env: {}

  podLabels: {}

  podAnnotations: {}

  database:
    persistence: true
    storageclass: "netdata-database"
    volumesize: 2Gi

  alarms:
    persistence: true
    storageclass: "netdata-alarms"
    volumesize: 100Mi

  configs:
    stream:
      enabled: true
      path: /etc/netdata/stream.conf
      data: |
        [11111111-2222-3333-4444-555555555555]
          enabled = yes
          history = 3600
          default memory mode = save
          health enabled by default = auto
          allow from = *
    netdata:
      enabled: true
      path: /etc/netdata/netdata.conf
      data: |
        [global]
          memory mode = save
          bind to = 0.0.0.0:19999
        [plugins]
          cgroups = no
          tc = no
          enable running new plugins = no
          check for new plugins every = 72000
          python.d = no
          charts.d = no
          go.d = no
          node.d = no
          apps = no
          proc = no
          idlejitter = no
          diskspace = no
    health:
      enabled: true
      path: /etc/netdata/health_alarm_notify.conf
      data: |
        SEND_EMAIL="NO"
        SEND_SLACK="YES"
        SLACK_WEBHOOK_URL=""
        DEFAULT_RECIPIENT_SLACK=""
        role_recipients_slack[sysadmin]="${DEFAULT_RECIPIENT_SLACK}"
        role_recipients_slack[domainadmin]="${DEFAULT_RECIPIENT_SLACK}"
        role_recipients_slack[dba]="${DEFAULT_RECIPIENT_SLACK}"
        role_recipients_slack[webmaster]="${DEFAULT_RECIPIENT_SLACK}"
        role_recipients_slack[proxyadmin]="${DEFAULT_RECIPIENT_SLACK}"
        role_recipients_slack[sitemgr]="${DEFAULT_RECIPIENT_SLACK}"
    example:
      enabled: false
      path: /etc/netdata/health.d/example.conf
      data: |
        alarm: example_alarm1
          on: example.random
        every: 2s
        warn: $random1 > (($status >= $WARNING)  ? (70) : (80))
        crit: $random1 > (($status == $CRITICAL) ? (80) : (90))
        info: random
          to: sysadmin

slave:
  resources: {}
    # limits:
    #  cpu: 4
    #  memory: 4096Mi
    # requests:
    #  cpu: 4
    #  memory: 4096Mi

  nodeSelector: {}

  tolerations:
    - operator: Exists
      effect: NoSchedule

  affinity: {}

  podLabels: {}

  podAnnotationAppArmor:
    enabled: true

  podAnnotations: {}

  configs:
    netdata:
      enabled: true
      path: /etc/netdata/netdata.conf
      data: |
        [global]
          memory mode = none
        [health]
          enabled = no
    stream:
      enabled: true
      path: /etc/netdata/stream.conf
      data: |
        [stream]
          enabled = yes
          destination = netdata:19999
          api key = 11111111-2222-3333-4444-555555555555
          timeout seconds = 60
          buffer size bytes = 1048576
          reconnect delay seconds = 5
          initial clock resync iterations = 60
    coredns:
      enabled: true
      path: /etc/netdata/go.d/coredns.conf
      data: |
        update_every: 1
        autodetection_retry: 0
        jobs:
          - url: http://127.0.0.1:9153/metrics
    kubelet:
      enabled: true
      path: /etc/netdata/go.d/k8s_kubelet.conf
      data: |
        update_every: 1
        autodetection_retry: 0
        jobs:
          - url: http://127.0.0.1:10255/metrics
    kubeproxy:
      enabled: true
      path: /etc/netdata/go.d/k8s_kubeproxy.conf
      data: |
        update_every: 1
        autodetection_retry: 0
        jobs:
          - url: http://127.0.0.1:10249/metrics

  env: {}

I'm using rook and cephfs for the volumes, let me know if you want the details.

@cakrit
Contributor

cakrit commented Sep 11, 2019

I just can't replicate this. I see the following dirs and permissions on my master:

[christopher@chris-msi helmchart]$ kubectl exec -it netdata-master-0 bash
bash-4.4# ls -l /var/lib/
total 32
drwxr-xr-x    2 root     root          4096 Aug 21 10:16 apk
drwxr-xr-x    2 root     root          4096 Aug 27 15:04 ip6tables
drwxr-xr-x    2 root     root          4096 Aug 27 15:04 iptables
drwxr-xr-x    9 root     root          4096 Aug 27 15:03 libvirt
drwxr-xr-x    2 root     root          4096 Aug 21 10:16 misc
drwxrwsr-x    9 root     netdata       4096 May 24 00:28 netdata
drwxr-xr-x    2 nut      nut           4096 Aug 27 15:04 nut
drwxr-xr-x    2 root     root          4096 Aug 21 10:16 udhcpd
bash-4.4# ls -l /var/lib/netdata
total 44
drwxrws---    3 netdata  netdata       4096 Apr  5 17:04 7b2c4e75-f6e8-35e0-8e67-05dbdb9dcb8e
drwxrws---    3 netdata  netdata       4096 May 24 00:28 ae5fcf36-7dba-11e9-a09a-42010a800009
drwxrws---    3 netdata  netdata       4096 Apr  5 17:04 cecd00db-d926-334c-aba0-8fb4aa9728ec
drwxrws---    3 netdata  netdata       4096 Apr  5 17:04 f25ff910-4ed3-3927-95b6-f7c8e4499b32
drwxrws---    2 netdata  netdata       4096 Jul  9 02:15 health
drwxrwS---    2 root     netdata      16384 Apr  5 17:04 lost+found
-rw-rwx---    1 netdata  netdata         36 Apr  5 17:04 netdata.api.key
drwxrws---    2 netdata  netdata       4096 Apr  5 17:04 registry

What do you see?

@happysalada
Author

The pod goes into CrashLoopBackOff, so I can't get a shell into it.
Here are some more logs that I collected (including INFO level this time):

Sep 11 11:54:03 netdata-slave-j4d6v netdata ERROR netdata ERROR : PLUGIN[cgroups] : child pid 32598 exited with code 3.
Sep 11 11:54:04 netdata-master-0 netdata ERROR netdata ERROR : MAIN : Ignoring host prefix '/host': path '/host' failed to stat() (errno 2, No such file or directory)
Sep 11 11:54:04 netdata-master-0 netdata ERROR netdata ERROR : MAIN : LISTENER: Invalid listen port 0 given. Defaulting to 19999. (errno 22, Invalid argument)
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : resources control: allowed file descriptors: soft = 1048576, max = 1048576
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : Out-Of-Memory (OOM) score is already set to the wanted value 1000
Sep 11 11:54:04 netdata-master-0 netdata ERROR netdata ERROR : MAIN : Cannot adjust netdata scheduling policy to idle (5), with priority 0. Falling back to nice. (errno 38, Function not implemented)
Sep 11 11:54:04 netdata-master-0 netdata ERROR netdata ERROR : MAIN : Cannot get my current process scheduling policy. (errno 38, Function not implemented)
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : netdata started on pid 1.
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : Executing /usr/libexec/netdata/plugins.d/system-info.sh
Sep 11 11:54:04 netdata-slave-j4d6v netdata WARNING cgroup-name.sh: WARNING: cannot find the name of k8s pod with containerID 'docker-0c69e6f9424f8727476251608d73a037905bc4d6c508a606fd115da54d2c28ae.scope'. Setting name to docker-0c69e6f9424f8727476251608d73a037905bc4d6c508a606fd115da54d2c28ae.scope and disabling it
Sep 11 11:54:04 netdata-slave-j4d6v netdata INFO cgroup-name.sh: INFO: cgroup 'kubepods.slice_kubepods-burstable.slice_kubepods-burstable-podf8f0e60c_ac0a_4edd_9bb9_ba0d403dc615.slice_docker-0c69e6f9424f8727476251608d73a037905bc4d6c508a606fd115da54d2c28ae.scope' is called 'docker-0c69e6f9424f8727476251608d73a037905bc4d6c508a606fd115da54d2c28ae.scope'
Sep 11 11:54:04 netdata-slave-j4d6v netdata ERROR netdata ERROR : PLUGIN[cgroups] : child pid 32616 exited with code 3.
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : NETDATA_SYSTEM_OS_NAME="Alpine Linux"
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : NETDATA_SYSTEM_OS_ID=alpine
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : NETDATA_SYSTEM_OS_ID_LIKE=unknown
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : NETDATA_SYSTEM_OS_VERSION=unknown
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : NETDATA_SYSTEM_OS_VERSION_ID=3.9.4
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : NETDATA_SYSTEM_OS_DETECTION=/etc/os-release
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : NETDATA_SYSTEM_KERNEL_NAME=Linux
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : NETDATA_SYSTEM_KERNEL_VERSION=3.10.0-957.27.2.el7.x86_64
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : NETDATA_SYSTEM_ARCHITECTURE=x86_64
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : NETDATA_SYSTEM_VIRTUALIZATION=unknown
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : NETDATA_SYSTEM_VIRT_DETECTION=none
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : NETDATA_SYSTEM_CONTAINER=docker
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : NETDATA_SYSTEM_CONTAINER_DETECTION=dockerenv
Sep 11 11:54:04 netdata-master-0 netdata ERROR netdata ERROR : MAIN : Cannot open the file /var/lib/netdata/health.silencers.json (errno 2, No such file or directory)
Sep 11 11:54:04 netdata-master-0 netdata FATAL netdata FATAL : MAIN :Cannot create directory '/var/lib/netdata/registry'. # : Invalid argument
Sep 11 11:54:04 netdata-master-0 netdata INFO netdata INFO  : MAIN : /usr/libexec/netdata/plugins.d/anonymous-statistics.sh 'FATAL' 'netdata:MAIN' '0023@registry/r:registry_init  /13'
Sep 11 11:54:05 netdata-slave-j4d6v netdata WARNING cgroup-name.sh: WARNING: cannot find the name of k8s pod with containerID 'docker-0e01cbe91ef8c7e80ccb19740f92c4598410cef79ab9ef3543ad1a4e8ed76581.scope'. Setting name to docker-0e01cbe91ef8c7e80ccb19740f92c4598410cef79ab9ef3543ad1a4e8ed76581.scope and disabling it
Sep 11 11:54:05 netdata-slave-j4d6v netdata INFO cgroup-name.sh: INFO: cgroup 'kubepods.slice_kubepods-burstable.slice_kubepods-burstable-podf8f0e60c_ac0a_4edd_9bb9_ba0d403dc615.slice_docker-0e01cbe91ef8c7e80ccb19740f92c4598410cef79ab9ef3543ad1a4e8ed76581.scope' is called 'docker-0e01cbe91ef8c7e80ccb19740f92c4598410cef79ab9ef3543ad1a4e8ed76581.scope'
Sep 11 11:54:05 netdata-slave-j4d6v netdata ERROR netdata ERROR : PLUGIN[cgroups] : child pid 32637 exited with code 3.
Sep 11 11:54:05 netdata-master-0 netdata INFO netdata INFO  : MAIN : EXIT: netdata prepares to exit with code 1...
Sep 11 11:54:05 netdata-master-0 netdata INFO netdata INFO  : MAIN : /usr/libexec/netdata/plugins.d/anonymous-statistics.sh 'EXIT' 'ERROR' '-' 

I don't know how to get a shell into a crashed pod; is there a way?

@happysalada
Author

OK, it was the storage class. I have the pod running; I'll check the rest later, but I'm sure it will be fine.

Thanks for the quick response and sorry for wasting your time!

@dylanyht

I have encountered the same problem. Could you please tell me how to solve it? Why is there no solution?

@cakrit
Contributor

cakrit commented Nov 1, 2019

@happysalada said it was the storage class, so I expect he had to change the value master.alarms.storageclass from the default ('standard') to the one he's actually using.
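For anyone hitting the same thing, the override in values.yml would look roughly like this; the class name rook-cephfs is only an illustration (the reporter uses rook/cephfs), so substitute whatever `kubectl get storageclass` reports in your cluster:

```yaml
master:
  database:
    persistence: true
    storageclass: "rook-cephfs"   # example; must name a StorageClass that exists in the cluster
    volumesize: 2Gi
  alarms:
    persistence: true
    storageclass: "rook-cephfs"
    volumesize: 100Mi
```

If the named class doesn't exist, the PVC never binds and the pod can't write under /var/lib/netdata, which matches the errors above.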

@cakrit cakrit reopened this Nov 1, 2019
@cakrit
Contributor

cakrit commented Nov 1, 2019

I reopened it so you can verify it works for you too. If this is a common problem, perhaps we should disable persistence by default (i.e. set master.database.persistence to false in values.yaml).
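For reference, disabling persistence by default would be a one-line change per volume in values.yaml, something like:

```yaml
master:
  database:
    persistence: false   # skip the PersistentVolumeClaim; data lives in the container filesystem
  alarms:
    persistence: false
```

The trade-off, of course, is that metric history and silencer state are lost whenever the master pod restarts.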

@cakrit
Contributor

cakrit commented Nov 3, 2019

@ktsakalozos gave a much better solution at #58, which should fix this. Waiting for feedback that it works with chart version 1.1.10.

@masterkain

masterkain commented Nov 6, 2019

    master:
      database:
        persistence: true
        storageclass: "gp2"
        volumesize: 2Gi
      alarms:
        persistence: true
        storageclass: "gp2"
        volumesize: 2Gi

I keep getting

Netdata entrypoint script starting
2019-11-06 23:51:12: netdata INFO  : MAIN : Using host prefix directory '/host'
2019-11-06 23:51:13: netdata INFO  : MAIN : SIGNAL: Not enabling reaper
2019-11-06 23:51:13: netdata ERROR : MAIN : LISTENER: Invalid listen port 0 given. Defaulting to 19999. (errno 22, Invalid argument)
2019-11-06 23:51:13: netdata INFO  : MAIN : resources control: allowed file descriptors: soft = 65536, max = 65536
2019-11-06 23:51:13: netdata INFO  : MAIN : Out-Of-Memory (OOM) score is already set to the wanted value 1000
2019-11-06 23:51:13: netdata ERROR : MAIN : Cannot adjust netdata scheduling policy to idle (5), with priority 0. Falling back to nice. (errno 38, Function not implemented)
2019-11-06 23:51:13: netdata ERROR : MAIN : Cannot get my current process scheduling policy. (errno 38, Function not implemented)
2019-11-06 23:51:13: netdata INFO  : MAIN : netdata started on pid 24024.
2019-11-06 23:51:13: netdata INFO  : MAIN : Executing /usr/libexec/netdata/plugins.d/system-info.sh
2019-11-06 23:51:13: netdata INFO  : MAIN : NETDATA_SYSTEM_OS_NAME="Alpine Linux"
2019-11-06 23:51:13: netdata INFO  : MAIN : NETDATA_SYSTEM_OS_ID=alpine
2019-11-06 23:51:13: netdata INFO  : MAIN : NETDATA_SYSTEM_OS_ID_LIKE=unknown
2019-11-06 23:51:13: netdata INFO  : MAIN : NETDATA_SYSTEM_OS_VERSION=unknown
2019-11-06 23:51:13: netdata INFO  : MAIN : NETDATA_SYSTEM_OS_VERSION_ID=3.9.4
2019-11-06 23:51:13: netdata INFO  : MAIN : NETDATA_SYSTEM_OS_DETECTION=/etc/os-release
2019-11-06 23:51:13: netdata INFO  : MAIN : NETDATA_SYSTEM_KERNEL_NAME=Linux
2019-11-06 23:51:13: netdata INFO  : MAIN : NETDATA_SYSTEM_KERNEL_VERSION=4.14.146-119.123.amzn2.x86_64
2019-11-06 23:51:13: netdata INFO  : MAIN : NETDATA_SYSTEM_ARCHITECTURE=x86_64
2019-11-06 23:51:13: netdata INFO  : MAIN : NETDATA_SYSTEM_VIRTUALIZATION=hypervisor
2019-11-06 23:51:13: netdata INFO  : MAIN : NETDATA_SYSTEM_VIRT_DETECTION=/proc/cpuinfo
2019-11-06 23:51:13: netdata INFO  : MAIN : NETDATA_SYSTEM_CONTAINER=docker
2019-11-06 23:51:13: netdata INFO  : MAIN : NETDATA_SYSTEM_CONTAINER_DETECTION=dockerenv
2019-11-06 23:51:13: netdata ERROR : MAIN : Failed to read machine GUID from '/var/lib/netdata/registry/netdata.public.unique.id'
2019-11-06 23:51:13: netdata FATAL : MAIN :Cannot create unique machine id file '/var/lib/netdata/registry/netdata.public.unique.id'. Please fix this. # : Invalid argument

2019-11-06 23:51:13: netdata INFO  : MAIN : /usr/libexec/netdata/plugins.d/anonymous-statistics.sh 'FATAL' 'netdata:MAIN' '0321@registry/r:registry_get_th/13'
2019-11-06 23:51:13: netdata INFO  : MAIN : EXIT: netdata prepares to exit with code 1...
2019-11-06 23:51:13: netdata INFO  : MAIN : /usr/libexec/netdata/plugins.d/anonymous-statistics.sh 'EXIT' 'ERROR' '-'
2019-11-06 23:51:13: netdata INFO  : MAIN : EXIT: cleaning up the database...
2019-11-06 23:51:13: netdata INFO  : MAIN : Cleaning up database [0 hosts(s)]...

on the slaves and the pods won't start.

@cakrit
Contributor

cakrit commented Nov 7, 2019

If you're getting these errors from the slaves, then the master db and alarms persistent volume configs are irrelevant.
There's something that prevents the netdata user on your slaves from writing to /var/lib/netdata/registry.

I checked some things, and I do see something that's not right: the file containing the GUID on the slaves was somehow created by root! On the master, and in any normal installation, that file is created by user netdata. Not sure if it's related, but it's definitely a suspect. Posting the commands in the next comment.

@cakrit
Contributor

cakrit commented Nov 7, 2019

On the master, owned by user netdata

chris@chris-ubuntu-18:~$ kubectl exec -it netdata-master-0 bash
bash-4.4# ls -l /var/lib/netdata/registry/
total 4
-rw-rwx---    1 netdata  netdata         36 Apr  5  2019 netdata.public.unique.id

On the slave, owned by root!

chris@chris-ubuntu-18:~$ kubectl exec -it netdata-slave-6qfmm bash
bash-4.4# ls -l /var/lib/netdata/registry/
total 4
-rw-r--r--    1 root     root            37 Nov  6 10:11 netdata.public.unique.id

But the netdata process properly runs on the slave as netdata:

chris@chris-ubuntu-18:~$ kubectl exec -it netdata-slave-6qfmm bash
bash-4.4# ps faux | grep netdata
  289 netdata   0:09 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
 1786 netdata  10:45 /usr/sbin/netdata -u netdata -D -s /host -p tcp://10.70.9.44:19999

@cakrit
Contributor

cakrit commented Nov 7, 2019

OK, I see the issue. Writing it down here to clear it in my own mind, and so you can validate the solution:

https://github.com/netdata/helmchart/blob/master/templates/daemonset.yaml#L70

          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh","-c","python -c 'import uuid; import socket; print(uuid.uuid3(uuid.NAMESPACE_DNS, socket.gethostname()))' > /var/lib/netdata/registry/netdata.public.unique.id"]

What this is supposed to do is cheat a bit, so that new pods get the same MACHINE_GUID as older pods running on the same node. It's not critically important that they do; it was meant to help the netdata registry not treat every pod restart as a new machine.
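For what it's worth, the reason this trick can work at all is that uuid.uuid3 is a name-based (deterministic) UUID: hashing the same hostname always yields the same GUID, so a restarted pod on the same node reproduces the node's previous GUID. A minimal Python sketch (the machine_guid helper name is mine, not part of the chart):

```python
import socket
import uuid

def machine_guid(hostname: str) -> str:
    """Derive a stable GUID from a hostname via a name-based (MD5) UUID."""
    return str(uuid.uuid3(uuid.NAMESPACE_DNS, hostname))

# The chart's postStart hook effectively writes
# machine_guid(socket.gethostname()) into
# /var/lib/netdata/registry/netdata.public.unique.id,
# so every pod restart on a node regenerates the identical GUID.
print(machine_guid(socket.gethostname()))
```

Determinism is exactly why cakrit later notes that removing it means each restarted pod gets a fresh random GUID instead.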

I really mucked this one up though:

  • In normal situations, the command is actually useless, because netdata has already started (it's a postStart command, after all).
  • In your situation, your k8s triggers the command when I actually wanted it to run, i.e. before netdata starts. So it creates the stupid file with root permissions, netdata can't read or write to it, and it kisses us goodbye.

There's a similar line on the Statefulset template as well, which by mistake has it on preStop of all places! I'm creating a PR to remove both of them.

@cakrit
Contributor

cakrit commented Nov 7, 2019

I remembered why I needed that persistent machine GUID. Without it, the master database engine will create new DB files for those pods. So every time you restart a pod, you will lose all the history. That's no good. So I will try to find another way to fix this.

@cakrit
Contributor

cakrit commented Nov 7, 2019

There's no easy way to do this, UUIDs are supposed to be unique. I'm removing them and we'll need to find some other way for the masters' database to keep those pods' long-term history in the same db instance, even after a restart.

@cakrit
Contributor

cakrit commented Nov 7, 2019

PR with the fix is merged, @masterkain; please test.
I would also like to hear from @dylanyht regarding the other issue, so we can close this one.

@masterkain

masterkain commented Nov 7, 2019

Thanks @cakrit, it seems to be up and running.

I have one last question, if I may: I'm trying to run netdata in a linkerd-enabled namespace, and although the master picks up the extra mesh containers, the slaves don't. Why is that?

master: [screenshot 2019-11-07 at 10:35:42]

slave: [screenshot 2019-11-07 at 10:35:49]

I even tried

    slave:
      podAnnotations:
        linkerd.io/inject: enabled

ref https://linkerd.io/2/features/proxy-injection/

@cakrit
Contributor

cakrit commented Nov 7, 2019

I moved the last one to #45. I'll close this issue now and if there's another comment from @dylanyht we can reopen.

@cakrit cakrit closed this as completed Nov 7, 2019