
Per-container stats with lxd #673

Closed
candlerb opened this issue Jul 12, 2016 · 30 comments
Labels: bug


@candlerb candlerb commented Jul 12, 2016

Platform: Ubuntu 14.04, lxd from PPA, today's netdata from git

Problem: I am running a bunch of lxd containers but I don't see any of them in netdata output. I know there is support for lxc containers and docker.

I have tried uncommenting # cgroups = yes in /opt/netdata/etc/netdata/netdata.conf, but that didn't make a difference (in any case, the comment implies it's enabled by default).

Could it be that there's something missing on my system at the time of building netdata, which is stopping the plugin from being compiled?

Or is it that the cgroup hierarchy for lxd is different to lxc and docker? I attach the hierarchy info for one container called "pc36"

cgroup-pc36.txt
lxcfs-pc36.txt

@candlerb candlerb changed the title Containers and lxd Per-container stats with lxd Jul 12, 2016

@ktsaou ktsaou commented Jul 12, 2016

There is a cgroups section in netdata.conf (get it from your running server http://your.netdata:19999/netdata.conf). Could you please post it here?

This is mine:

[plugin:cgroups]
    # cgroups plugin resources = yes
    # check for new cgroups every = 10
    # enable cpuacct stat = auto
    # enable cpuacct usage = auto
    # enable memory = auto
    # enable blkio = auto
    # path to /sys/fs/cgroup/cpuacct = /sys/fs/cgroup/cpuacct
    # path to /sys/fs/cgroup/blkio = /sys/fs/cgroup/blkio
    # path to /sys/fs/cgroup/memory = /sys/fs/cgroup/memory
    # max cgroups to allow = 500
    # max cgroups depth to monitor = 0
    # enable new cgroups detected at run time = yes

I don't have access to an lxd system; its cgroups are probably disabled by default.


@candlerb candlerb commented Jul 12, 2016

It looks like this:

[plugin:cgroups]
        # cgroups plugin resources = yes
        # check for new cgroups every = 10
        # enable cpuacct stat = auto
        # enable cpuacct usage = auto
        # enable memory = auto
        # enable blkio = auto
        # path to /sys/fs/cgroup/cpuacct = /run/lxcfs/controllers/cpuacct
        # path to /sys/fs/cgroup/blkio = /run/lxcfs/controllers/blkio
        # path to /sys/fs/cgroup/memory = /run/lxcfs/controllers/memory
        # max cgroups to allow = 500
        # max cgroups depth to monitor = 0
        # enable new cgroups detected at run time = yes

@candlerb candlerb commented Jul 12, 2016

Additional info: in the "users" section I see a few high UIDs which are from containers:

[screenshot]

But actually I have 37 running containers at the moment. Also there is no 'containers' section between Users and Sensors, which is where I believe it should be based on other screenshots I've seen online.


@ktsaou ktsaou commented Jul 12, 2016

I see in your cgroup fs there is a /run/lxcfs/controllers/devices/ directory under which the containers are. However, netdata expects them at:

        # path to /sys/fs/cgroup/cpuacct = /run/lxcfs/controllers/cpuacct
        # path to /sys/fs/cgroup/blkio = /run/lxcfs/controllers/blkio
        # path to /sys/fs/cgroup/memory = /run/lxcfs/controllers/memory

Could you please post your /proc/self/mountinfo?


@ktsaou ktsaou commented Jul 12, 2016

However, looking in more detail, it seems the files exist in the directories netdata already checks.

Could you please set:

[global]
    debug flags = 0x00100000

Start netdata, wait 30 seconds, stop it, and give me /var/log/netdata/debug.log?

Remember to remove this flag before starting it again. It produces a lot of debugging output.


@pokui pokui commented Jul 12, 2016

working on the same machine as @candlerb

nsrc@brian:~$ cat /proc/self/mountinfo
18 23 0:17 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw
19 23 0:4 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw
20 23 0:6 / /dev rw,relatime - devtmpfs udev rw,size=8156856k,nr_inodes=2039214,mode=755
21 20 0:14 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000
22 23 0:18 / /run rw,nosuid,noexec,relatime - tmpfs tmpfs rw,size=1633688k,mode=755
23 0 252:0 / / rw,relatime - ext4 /dev/dm-0 rw,errors=remount-ro,data=ordered
24 18 0:19 / /sys/fs/cgroup rw,relatime - tmpfs none rw,size=4k,mode=755
25 18 0:20 / /sys/fs/fuse/connections rw,relatime - fusectl none rw
26 18 0:7 / /sys/kernel/debug rw,relatime - debugfs none rw
27 18 0:12 / /sys/kernel/security rw,relatime - securityfs none rw
29 24 0:22 / /sys/fs/cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset
30 24 0:23 / /sys/fs/cgroup/cpu rw,relatime - cgroup cgroup rw,cpu
31 24 0:24 / /sys/fs/cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct
32 24 0:25 / /sys/fs/cgroup/blkio rw,relatime - cgroup cgroup rw,blkio
33 24 0:26 / /sys/fs/cgroup/memory rw,relatime - cgroup cgroup rw,memory
34 24 0:27 / /sys/fs/cgroup/devices rw,relatime - cgroup cgroup rw,devices
35 24 0:28 / /sys/fs/cgroup/freezer rw,relatime - cgroup cgroup rw,freezer
36 24 0:29 / /sys/fs/cgroup/net_cls rw,relatime - cgroup cgroup rw,net_cls
37 24 0:30 / /sys/fs/cgroup/perf_event rw,relatime - cgroup cgroup rw,perf_event
38 24 0:31 / /sys/fs/cgroup/net_prio rw,relatime - cgroup cgroup rw,net_prio
39 24 0:32 / /sys/fs/cgroup/hugetlb rw,relatime - cgroup cgroup rw,hugetlb
40 24 0:33 / /sys/fs/cgroup/pids rw,relatime - cgroup cgroup rw,pids
28 18 0:21 / /sys/firmware/efi/efivars rw,relatime - efivarfs none rw
41 22 0:34 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=5120k
42 22 0:35 / /run/shm rw,nosuid,nodev,relatime - tmpfs none rw
43 22 0:36 / /run/user rw,nosuid,nodev,noexec,relatime - tmpfs none rw,size=102400k,mode=755
44 18 0:37 / /sys/fs/pstore rw,relatime - pstore none rw
45 23 8:2 / /boot rw,relatime - ext2 /dev/sda2 rw,block_validity,barrier,user_xattr,acl,stripe=4
46 23 252:3 / /data rw,noatime - ext4 /dev/mapper/nsrc-data rw,data=ordered
47 45 8:1 / /boot/efi rw,relatime - vfat /dev/sda1 rw,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro
48 23 252:3 /lxd /var/lib/lxd rw,noatime - ext4 /dev/mapper/nsrc-data rw,data=ordered
49 24 0:38 / /sys/fs/cgroup/systemd rw,relatime - cgroup name=systemd rw,name=systemd
50 19 0:39 / /proc/sys/fs/binfmt_misc rw,nosuid,nodev,noexec,relatime - binfmt_misc binfmt_misc rw
52 22 0:40 / /run/rpc_pipefs rw,relatime - rpc_pipefs rpc_pipefs rw
54 22 0:42 / /run/lxcfs/controllers rw,relatime - tmpfs tmpfs rw,size=100k,mode=700
55 54 0:38 / /run/lxcfs/controllers/name=systemd rw,relatime - cgroup name=systemd rw,name=systemd
56 54 0:33 / /run/lxcfs/controllers/pids rw,relatime - cgroup pids rw,pids
57 54 0:32 / /run/lxcfs/controllers/hugetlb rw,relatime - cgroup hugetlb rw,hugetlb
58 54 0:31 / /run/lxcfs/controllers/net_prio rw,relatime - cgroup net_prio rw,net_prio
59 54 0:30 / /run/lxcfs/controllers/perf_event rw,relatime - cgroup perf_event rw,perf_event
60 54 0:29 / /run/lxcfs/controllers/net_cls rw,relatime - cgroup net_cls rw,net_cls
61 54 0:28 / /run/lxcfs/controllers/freezer rw,relatime - cgroup freezer rw,freezer
62 54 0:27 / /run/lxcfs/controllers/devices rw,relatime - cgroup devices rw,devices
63 54 0:26 / /run/lxcfs/controllers/memory rw,relatime - cgroup memory rw,memory
64 54 0:25 / /run/lxcfs/controllers/blkio rw,relatime - cgroup blkio rw,blkio
65 54 0:24 / /run/lxcfs/controllers/cpuacct rw,relatime - cgroup cpuacct rw,cpuacct
66 54 0:23 / /run/lxcfs/controllers/cpu rw,relatime - cgroup cpu rw,cpu
67 54 0:22 / /run/lxcfs/controllers/cpuset rw,relatime - cgroup cpuset rw,cpuset
68 23 0:43 / /var/lib/lxcfs rw,nosuid,nodev,relatime - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other
69 48 252:3 /lxd/shmounts /var/lib/lxd/shmounts rw,noatime shared:1 - ext4 /dev/mapper/nsrc-data rw,data=ordered
nsrc@brian:~$

I've also attached the debug.

debug.log.txt


@ktsaou ktsaou commented Jul 12, 2016

Unfortunately it does not have the info I need.

I have added cgroups devices and also the debug info I need in my private fork. Could you please clone and install https://github.com/ktsaou/netdata ?

I need the debug info again.


@ktsaou ktsaou commented Jul 12, 2016

I think I know what is happening. User netdata does not have access rights to read /run/lxcfs/controllers/XXXX.

Can you check its permissions?


@candlerb candlerb commented Jul 12, 2016

Ah yes!

nsrc@brian:~/netdata$ ls -ld /run/lxcfs
drwx------ 3 root root 60 Jul 11 05:31 /run/lxcfs
nsrc@brian:~/netdata$ mount | grep lxcfs
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,allow_other)

Also debug after building your private fork:
debug.log.txt


@ktsaou ktsaou commented Jul 12, 2016

Is there a configuration in lxd to create these directories with read permission to others?


@ktsaou ktsaou commented Jul 12, 2016

BTW, I added error logging to have such errors logged in error.log.


@candlerb candlerb commented Jul 12, 2016

But I can't see how /run/lxcfs gets populated...

nsrc@brian:~/netdata$ mount | grep /run
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw)

nsrc@brian:~/netdata$ ps auxwww | grep lxcfs
nsrc      4306  0.0  0.0  11752  2260 pts/0    S+   13:57   0:00 grep --color=auto lxcfs
root      8188  1.2  0.0 382472  1756 ?        Ssl  Jul11  23:23 /usr/bin/lxcfs /var/lib/lxcfs

nsrc@brian:~/netdata$ sudo grep -R /run/lxcfs /etc
/etc/init.d/lxcfs:PIDFILE=/var/run/lxcfs.pid

And the contents are different:

root@brian:~/netdata# ls /run/lxcfs
controllers

root@brian:~/netdata# ls /var/lib/lxcfs/
cgroup  proc

I will dig further for lxd config.


@ktsaou ktsaou commented Jul 12, 2016

Check your /proc/self/mountinfo. You have /sys/fs/cgroup too!
Since these are controlled by the kernel, they may have the proper permissions. If this is the case, switching the variables in netdata.conf to these paths may fix the problem.


@ktsaou ktsaou commented Jul 12, 2016

Merged the error-logging addition and the examination of the devices subdir for containers: #674


@candlerb candlerb commented Jul 12, 2016

I can't find the lxd magic which sets up this directory. I could raise an issue on lxc/lxcfs or lxc/lxd.

However I don't understand: should netdata be looking at /var/lib/lxcfs or /run/lxcfs? How does it choose which to use?

nsrc@brian:~$ ls /var/lib/lxcfs/cgroup/cpuacct/
cgroup.clone_children  cgroup.sane_behavior  cpuacct.usage         lxc                release_agent  user
cgroup.procs           cpuacct.stat          cpuacct.usage_percpu  notify_on_release  tasks

nsrc@brian:~$ sudo ls /run/lxcfs/controllers/cpuacct
cgroup.clone_children  cgroup.sane_behavior  cpuacct.usage     lxc            release_agent  user
cgroup.procs           cpuacct.stat      cpuacct.usage_percpu  notify_on_release  tasks

Access to the former seems permitted to normal users, so I've tried configuring this explicitly:

[plugin:cgroups]
        path to /sys/fs/cgroup/cpuacct = /var/lib/lxcfs/cgroup/cpuacct
        path to /sys/fs/cgroup/blkio = /var/lib/lxcfs/cgroup/blkio
        path to /sys/fs/cgroup/memory = /var/lib/lxcfs/cgroup/memory

but still there's no cgroups/containers section in the web UI. New debug:

debug.log.txt


@candlerb candlerb commented Jul 12, 2016

I openly admit to having no clue about how cgroups work.

nsrc@brian:~/netdata$ ls /sys/fs/cgroup/devices
cgroup.clone_children  cgroup.sane_behavior  devices.deny  lxc                release_agent  user
cgroup.procs           devices.allow         devices.list  notify_on_release  tasks

nsrc@brian:~/netdata$ ls /var/lib/lxcfs/cgroup/devices
ls: cannot access /var/lib/lxcfs/cgroup/devices/devices.allow: Permission denied
ls: cannot access /var/lib/lxcfs/cgroup/devices/devices.deny: Permission denied
cgroup.clone_children  cgroup.sane_behavior  devices.deny  lxc                release_agent  user
cgroup.procs           devices.allow         devices.list  notify_on_release  tasks

nsrc@brian:~/netdata$ cat /sys/fs/cgroup/devices/devices.list
a *:* rwm

nsrc@brian:~/netdata$ cat /var/lib/lxcfs/cgroup/devices.list
cat: /var/lib/lxcfs/cgroup/devices.list: Input/output error

nsrc@brian:~/netdata$ sudo cat /var/lib/lxcfs/cgroup/devices.list
cat: /var/lib/lxcfs/cgroup/devices.list: Input/output error

nsrc@brian:~/netdata$ sudo ls /run/lxcfs/controllers/devices
cgroup.clone_children  cgroup.sane_behavior  devices.deny  lxc            release_agent  user
cgroup.procs           devices.allow         devices.list  notify_on_release  tasks

nsrc@brian:~/netdata$ sudo cat /run/lxcfs/controllers/devices/devices.list
a *:* rwm

@candlerb candlerb commented Jul 12, 2016

Yay! it works with:

        path to /sys/fs/cgroup/cpuacct = /sys/fs/cgroup/cpuacct
        path to /sys/fs/cgroup/blkio = /sys/fs/cgroup/blkio
        path to /sys/fs/cgroup/memory = /sys/fs/cgroup/memory
        path to /sys/fs/cgroup/devices = /sys/fs/cgroup/devices

(Aside: for the first container I looked at, "CPU usage per core" is peaking above 250,000%, but that's a separate issue)


@ktsaou ktsaou commented Jul 12, 2016

cgroups are very simple (ridiculously simple is probably a better term), but the variations among container managers, kernel versions and inits (e.g. systemd) make it really chaotic. Every system is different.

netdata finds the cgroups mount points by examining /proc/self/mountinfo.

Now I see it parses the contents of the directories, but there is no pc36 there. If you examine your running netdata.conf now, you will see it found /, but nothing more.

To understand how netdata works, this is the command netdata emulates:

find DIRECTORY -type f -a \( -name cpuacct.stat -o -name cpuacct.usage_percpu -o -name memory.stat -o -name blkio.io_service_bytes -o -name blkio.io_serviced -o -name blkio.throttle.io_service_bytes -o -name blkio.throttle.io_serviced -o -name blkio.io_merged -o -name blkio.io_queued \)

It does this in the 4 directories mentioned above. These 4 directories are detected from /proc/self/mountinfo.

For each file found, it tries to find the relative path in the cgroups hierarchy. Then it has some heuristics, based on my findings, on which are expected to be containers and which are not. These heuristics only enable or disable the given cgroup in netdata (you can override this decision in netdata.conf).
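The mount-point detection described above can be sketched as follows. This is an illustrative Python re-implementation, not netdata's actual C code; the sample mountinfo lines are taken from this thread, and the last-mount-wins behavior is an assumption of the sketch:

```python
# Illustrative sketch: discovering cgroup mount points from
# /proc/self/mountinfo, as netdata does via its own parser.

def cgroup_mounts(mountinfo_text):
    """Map each cgroup controller option to the mount point providing it."""
    mounts = {}
    for line in mountinfo_text.splitlines():
        # Format: id parent major:minor root mount-point opts ... - fstype source super-opts
        pre, sep, post = line.partition(" - ")
        fields = pre.split()
        post_fields = post.split()
        if not sep or len(fields) < 5 or len(post_fields) < 3:
            continue
        if post_fields[0] != "cgroup":
            continue  # not a cgroup filesystem
        mount_point = fields[4]
        for opt in post_fields[2].split(","):
            if opt not in ("rw", "ro", "relatime"):
                mounts[opt] = mount_point  # sketch choice: later mounts win
    return mounts

# Sample lines from the mountinfo posted earlier in this thread.
sample = """\
31 24 0:24 / /sys/fs/cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct
33 24 0:26 / /sys/fs/cgroup/memory rw,relatime - cgroup cgroup rw,memory
65 54 0:24 / /run/lxcfs/controllers/cpuacct rw,relatime - cgroup cpuacct rw,cpuacct"""

print(cgroup_mounts(sample))
```

Note that cpuacct is mounted twice here; under last-mount-wins the root-only /run/lxcfs mount shadows /sys/fs/cgroup/cpuacct, which is exactly the permission trap hit in this thread.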


@candlerb candlerb commented Jul 12, 2016

Here's mountinfo:

nsrc@brian:~/netdata$ cat /proc/self/mountinfo | grep cgroup
24 18 0:19 / /sys/fs/cgroup rw,relatime - tmpfs none rw,size=4k,mode=755
29 24 0:22 / /sys/fs/cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset
30 24 0:23 / /sys/fs/cgroup/cpu rw,relatime - cgroup cgroup rw,cpu
31 24 0:24 / /sys/fs/cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct
32 24 0:25 / /sys/fs/cgroup/blkio rw,relatime - cgroup cgroup rw,blkio
33 24 0:26 / /sys/fs/cgroup/memory rw,relatime - cgroup cgroup rw,memory
34 24 0:27 / /sys/fs/cgroup/devices rw,relatime - cgroup cgroup rw,devices
35 24 0:28 / /sys/fs/cgroup/freezer rw,relatime - cgroup cgroup rw,freezer
36 24 0:29 / /sys/fs/cgroup/net_cls rw,relatime - cgroup cgroup rw,net_cls
37 24 0:30 / /sys/fs/cgroup/perf_event rw,relatime - cgroup cgroup rw,perf_event
38 24 0:31 / /sys/fs/cgroup/net_prio rw,relatime - cgroup cgroup rw,net_prio
39 24 0:32 / /sys/fs/cgroup/hugetlb rw,relatime - cgroup cgroup rw,hugetlb
40 24 0:33 / /sys/fs/cgroup/pids rw,relatime - cgroup cgroup rw,pids
49 24 0:38 / /sys/fs/cgroup/systemd rw,relatime - cgroup name=systemd rw,name=systemd
55 54 0:38 / /run/lxcfs/controllers/name=systemd rw,relatime - cgroup name=systemd rw,name=systemd
56 54 0:33 / /run/lxcfs/controllers/pids rw,relatime - cgroup pids rw,pids
57 54 0:32 / /run/lxcfs/controllers/hugetlb rw,relatime - cgroup hugetlb rw,hugetlb
58 54 0:31 / /run/lxcfs/controllers/net_prio rw,relatime - cgroup net_prio rw,net_prio
59 54 0:30 / /run/lxcfs/controllers/perf_event rw,relatime - cgroup perf_event rw,perf_event
60 54 0:29 / /run/lxcfs/controllers/net_cls rw,relatime - cgroup net_cls rw,net_cls
61 54 0:28 / /run/lxcfs/controllers/freezer rw,relatime - cgroup freezer rw,freezer
62 54 0:27 / /run/lxcfs/controllers/devices rw,relatime - cgroup devices rw,devices
63 54 0:26 / /run/lxcfs/controllers/memory rw,relatime - cgroup memory rw,memory
64 54 0:25 / /run/lxcfs/controllers/blkio rw,relatime - cgroup blkio rw,blkio
65 54 0:24 / /run/lxcfs/controllers/cpuacct rw,relatime - cgroup cpuacct rw,cpuacct
66 54 0:23 / /run/lxcfs/controllers/cpu rw,relatime - cgroup cpu rw,cpu
67 54 0:22 / /run/lxcfs/controllers/cpuset rw,relatime - cgroup cpuset rw,cpuset
nsrc@brian:~/netdata$

Using your find command on /sys/fs/cgroup:

nsrc@brian:~/netdata$ find /sys/fs/cgroup -type f -a \( -name cpuacct.stat -o -name cpuacct.usage_percpu -o -name memory.stat -o -name blkio.io_service_bytes -o -name blkio.io_serviced -o -name blkio.throttle.io_service_bytes -o -name blkio.throttle.io_serviced -o -name blkio.io_merged -o -name blkio.io_queued \) | grep pc36
/sys/fs/cgroup/memory/lxc/pc36/memory.stat
/sys/fs/cgroup/blkio/lxc/pc36/blkio.throttle.io_serviced
/sys/fs/cgroup/blkio/lxc/pc36/blkio.io_serviced
/sys/fs/cgroup/blkio/lxc/pc36/blkio.throttle.io_service_bytes
/sys/fs/cgroup/blkio/lxc/pc36/blkio.io_service_bytes
/sys/fs/cgroup/blkio/lxc/pc36/blkio.io_merged
/sys/fs/cgroup/blkio/lxc/pc36/blkio.io_queued
/sys/fs/cgroup/cpuacct/lxc/pc36/cpuacct.usage_percpu
/sys/fs/cgroup/cpuacct/lxc/pc36/cpuacct.stat

If I understand correctly, lxcfs is to give a simulated cgroup hierarchy inside a container, but on the host itself I presume talking directly to /sys/fs/cgroup is fine?


@ktsaou ktsaou commented Jul 12, 2016

If I understand correctly, lxcfs is to give a simulated cgroup hierarchy inside a container, but on the host itself I presume talking directly to /sys/fs/cgroup is fine?

it seems so...


@ktsaou ktsaou commented Jul 12, 2016

Have you tried it?

@ktsaou ktsaou added the question label Jul 12, 2016

@candlerb candlerb commented Jul 12, 2016

Yes it is working, thank you.

Maybe netdata's search could prefer /sys/fs/cgroup over the other alternatives? But otherwise, configuring it manually is not a major problem. Or I can raise the question with lxd as to why /run/lxcfs is root-only.


@candlerb candlerb commented Jul 12, 2016

[screenshot]


@ktsaou ktsaou commented Jul 12, 2016

wow! a lot of containers!

I guess the per-core CPU should be divided by 1000.
Let me check how this is calculated...


@ktsaou ktsaou commented Jul 12, 2016

My bad. I had divided by 1000000 (microseconds) when it should be 1000000000 (nanoseconds). Will merge in a sec.
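The factor-of-1000 inflation follows directly from the units: cpuacct.usage reports cumulative CPU time in nanoseconds, so converting an interval delta to a percentage with a microsecond divisor overstates it by exactly 1000x. A quick sketch with illustrative numbers (not netdata's code):

```python
# cpuacct.usage is cumulative CPU time in nanoseconds. Converting an
# interval delta to a percentage with the wrong divisor (1e6, microseconds)
# inflates the result by exactly 1000x. Numbers are illustrative only.

NANOSECONDS_PER_SECOND = 1_000_000_000

def cpu_percent(delta_usage_ns, interval_s, divisor=NANOSECONDS_PER_SECOND):
    # CPU-seconds consumed during the interval, expressed as a percentage
    return delta_usage_ns / divisor / interval_s * 100

delta_ns = 2_500_000_000  # 2.5 CPU-seconds consumed in a 1-second interval
print(cpu_percent(delta_ns, 1.0))                     # correct: 250.0 (%)
print(cpu_percent(delta_ns, 1.0, divisor=1_000_000))  # buggy: 250000.0 (%)
```

The buggy value matches the ~250,000% per-core readings reported above.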

@ktsaou ktsaou added the bug label Jul 12, 2016
@ktsaou ktsaou added fixed and removed question labels Jul 12, 2016

@ktsaou ktsaou commented Jul 12, 2016

merged


@pokui pokui commented Jul 12, 2016

Yes. We are running a training workshop, so lots of containers, one for each participant.

Thanks for tracking the bug.


@ktsaou ktsaou commented Jul 12, 2016

ok. I am closing this. If you need more help, just post.

@ktsaou ktsaou closed this Jul 12, 2016

@candlerb candlerb commented Jul 12, 2016

Thank you so much for your help, and for writing such a fantastic tool.

As it happens, the training workshop is on network monitoring and management. Netdata will certainly get a special mention 👍


@ktsaou ktsaou commented Jul 12, 2016

nice! thanks!
