Nvme disk activity missing from resource monitor applet #290
Comments
@flexiondotorg @clefebvre @raveit65 @lukefromdc @sc0w
Yep, I do.
I do not, but as far as I can see @flexiondotorg does.
Ah, so NVMe drives even have a different device name... this could be the cause of it.
So what does it take to fix this? Just add /dev/nvme* named devices to the mix? What about software RAID devices (/dev/md*)? What about hardware fake-RAID devices like dmraid? What about dmsetup devices like /dev/dm-? What about /dev/mapper/ devices? What about LVM, LVM2, volume groups, etc.? Let's fix it all at once if they are all related.
This is a long-standing issue in libgtop (13 years and counting: https://gitlab.gnome.org/GNOME/libgtop/issues/3). I currently experience the issue with an NVMe drive, but in the past I also had it on an md RAID device. I used this patch to make it work at the time. It will NOT fix the NVMe issue, but it gives an idea of which part of libgtop needs fixing. The old patch is here: fsusage-symlink-md.patch.txt
I can confirm that multiload-applet doesn't monitor any disk activity of my Samsung SSD 960 EVO NVMe drive.
I sometimes use
I guess we might use /proc/diskstats instead of relying on libgtop; it does not matter whether a device is mounted or not, because we only want to see whether there is IO activity. Right now I miss activity on MD/LVM devices. This can easily be accomplished by monitoring real HW devices ("sd[a-z]" in my case). Support for NVMe devices should be as simple as adding the corresponding pattern to the regex. I do not have such a drive, but it might be something like "nvme\d+\s" (I am not sure how the devices are partitioned; my intention is to monitor the "root" device only). Can anyone check the pattern? Finally, if there is no /proc/diskstats, libgtop is used as a fallback. Is this approach acceptable?

```diff
diff --git a/multiload/linux-proc.c b/multiload/linux-proc.c
index fcd48bff..35a194c2 100644
--- a/multiload/linux-proc.c
+++ b/multiload/linux-proc.c
@@ -100,14 +100,12 @@ GetDiskLoad (int Maximum, int data [3], LoadGraph *g)
     static guint64 lastread = 0, lastwrite = 0;
     static AutoScaler scaler;
 
-    glibtop_mountlist mountlist;
-    glibtop_mountentry *mountentries;
     guint i;
     int max;
     guint64 read, write;
     guint64 readdiff, writediff;
-
+    FILE *fdr;
 
     if(first_call)
     {
@@ -116,31 +114,62 @@ GetDiskLoad (int Maximum, int data [3], LoadGraph *g)
     read = write = 0;
 
-    mountentries = glibtop_get_mountlist (&mountlist, FALSE);
+    if (fdr = fopen("/proc/diskstats", "r"))
+    {
+        char line[255];
+        guint64 s_read, s_write;
+
+        while (fgets(line, 255, fdr))
+        {
+            if (!g_regex_match_simple("\\s(sd[a-z]|sr\\d+)\\s", line, 0, 0))
+            {
+                continue;
+            }
+
+            /*
+              6 - sectors read
+             10 - sectors written
+            */
+            if (sscanf(line, "%*d %*d %*s %*d %*d %ld %*d %*d %*d %ld", &s_read, &s_write) == 2)
+            {
+                read += 512 * s_read;
+                write += 512 * s_write;
+            }
+        }
 
-    for (i = 0; i < mountlist.number; i++)
+        fclose(fdr);
+    }
+    else
     {
-        struct statvfs statresult;
-        glibtop_fsusage fsusage;
+        glibtop_mountlist mountlist;
+        glibtop_mountentry *mountentries;
 
-        if (strcmp(mountentries[i].type, "smbfs") == 0
-            || strcmp(mountentries[i].type, "nfs") == 0
-            || strcmp(mountentries[i].type, "cifs") == 0)
-            continue;
+        mountentries = glibtop_get_mountlist (&mountlist, FALSE);
 
-        if (statvfs (mountentries[i].mountdir, &statresult) < 0)
+        for (i = 0; i < mountlist.number; i++)
         {
-            g_debug ("Failed to get statistics for mount entry: %s. Reason: %s. Skipping entry.",
-                     mountentries[i].mountdir, strerror(errno));
-            continue;
+            struct statvfs statresult;
+            glibtop_fsusage fsusage;
+
+            if (strcmp(mountentries[i].type, "smbfs") == 0
+                || strcmp(mountentries[i].type, "nfs") == 0
+                || strcmp(mountentries[i].type, "cifs") == 0)
+                continue;
+
+            if (statvfs (mountentries[i].mountdir, &statresult) < 0)
+            {
+                g_debug ("Failed to get statistics for mount entry: %s. Reason: %s. Skipping entry.",
+                         mountentries[i].mountdir, strerror(errno));
+                continue;
+            }
+
+            glibtop_get_fsusage(&fsusage, mountentries[i].mountdir);
+            read += fsusage.read; write += fsusage.write;
         }
-        glibtop_get_fsusage(&fsusage, mountentries[i].mountdir);
-        read += fsusage.read; write += fsusage.write;
+        g_free(mountentries);
     }
-    g_free(mountentries);
-
     readdiff = read - lastread;
     writediff = write - lastwrite;
```
Thought I had posted this somewhere; I guess not here. Anyway, I fixed this on my PC a couple of months ago with this:
@tofurky In my case, I'm using Samsung's OPAL feature (1 TB EVO 960 NVMe), which gives me hardware encryption. That said, I'm not using md*, dm*, LVM/LVM2 and so forth. The OS only uses a plain disk device, since the hardware encryption is transparent to the OS. Could you please confirm that https://gitlab.gnome.org/GNOME/libgtop/issues/3 also affects a plain NVMe device with nothing around it? In my case, the disk in question is named /dev/nvme0n1. Hint: from the perspective of the OS, this is not a partition. Partitions are named e.g. nvme0n1p1, nvme0n1p2 and so forth: n identifies the namespace and p identifies the partition. Just for the record: as of today, I experience the same issue with Ubuntu 18.04, kernel 4.15.0-54, gnome-applets 3.28.0 (from the official Ubuntu repository).
Yes, it does. The libgtop patch in my previous post fixes it for my rootfs, which is
@tofurky Have you sent your patch as a pull request somewhere at https://gitlab.gnome.org/GNOME/libgtop? It would be great if such a patch made it upstream into libgtop. Maybe open a new issue or PR just for the NVMe aspect of it, and explain the relevance when doing the PR:
I filed a new issue at libgtop. @tofurky, could you please submit your patch as a PR at libgtop, referencing this issue? Added tracking bug: https://gitlab.gnome.org/GNOME/gnome-applets/issues/12
Expected behaviour
System Monitor shows all disk activity, including NVMe disk devices.
Actual behaviour
No disk activity is shown for NVMe disk devices.
Steps to reproduce the behaviour
Add the System Monitor applet to a MATE desktop panel.
Right-click it, open Preferences, and enable hard disk monitoring.
Use the disk and notice that its activity is always missing.
MATE general version
Tested versions 1.14.x through 1.18.x; disk activity is missing in all of them.
Package version
1.18.4
Linux Distribution
Debian
Link to downstream report of your Distribution
I don't see anything in Debian downstream, since this appears to be an upstream issue. Ubuntu has some users discussing it:
https://ubuntu-mate.community/t/system-monitor-panel-applet/7841