Misreporting disk io (values inflated) #1305

Closed
kportertx opened this issue Jul 19, 2018 · 24 comments

@kportertx

Note: This could be an issue with glances.

Using glances to observe a system under test (Ubuntu 18.04), I noticed that the disk throughput on 8 NVMe devices was 8x larger than expected when compared with the values from iostat. We then noticed that another instance running Ubuntu 16.04 was reporting the correct values. The glances versions differed between the two instances (2.11.1 vs 2.3), as did the psutil versions (5.4.6 vs 3.4.2). Downgrading glances didn't fix the problem, but downgrading psutil did.

FYI, this only affected the NVMe disks; the regular drives reported the expected values. Also, these are GCE instances.

I'm not sure which API glances is using; the following seems like a reasonable guess:

With psutil 3.4.2:

psutil.disk_io_counters(perdisk=True)
Out[5]: 
{'loop0': sdiskio(read_count=4971, write_count=0, read_bytes=6156288, write_bytes=0, read_time=168, write_time=0),
 'loop1': sdiskio(read_count=60, write_count=0, read_bytes=392192, write_bytes=0, read_time=0, write_time=0),
 'nvme0n1': sdiskio(read_count=12330268, write_count=71920, read_bytes=125630955520, write_bytes=75413585920, read_time=1328276, write_time=31400),
 'nvme0n2': sdiskio(read_count=12337189, write_count=71920, read_bytes=125658554368, write_bytes=75413585920, read_time=1338748, write_time=32188),
 'nvme0n3': sdiskio(read_count=12333970, write_count=71921, read_bytes=125648957440, write_bytes=75414634496, read_time=1340996, write_time=32440),
 'nvme0n4': sdiskio(read_count=12337859, write_count=71921, read_bytes=125662195712, write_bytes=75414634496, read_time=1349044, write_time=32272),
 'nvme0n5': sdiskio(read_count=12335489, write_count=71921, read_bytes=125655179264, write_bytes=75414634496, read_time=1355292, write_time=31796),
 'nvme0n6': sdiskio(read_count=12332568, write_count=71921, read_bytes=125640523776, write_bytes=75414634496, read_time=1354424, write_time=29684),
 'sda1': sdiskio(read_count=9915, write_count=38226, read_bytes=392524288, write_bytes=1677148160, read_time=11464, write_time=101900),
 'sda14': sdiskio(read_count=184, write_count=0, read_bytes=746496, write_bytes=0, read_time=84, write_time=0),
 'sda15': sdiskio(read_count=1794, write_count=2, read_bytes=4707328, write_bytes=1024, read_time=6584, write_time=0)}

With psutil 5.4.6:

psutil.disk_io_counters(perdisk=True)
Out[5]: 
{'loop0': sdiskio(read_count=4971, write_count=0, read_bytes=6156288, write_bytes=0, read_time=168, write_time=0),
 'loop1': sdiskio(read_count=60, write_count=0, read_bytes=392192, write_bytes=0, read_time=0, write_time=0),
 'nvme0n1': sdiskio(read_count=12330268, write_count=71920, read_bytes=125630955520, write_bytes=75413585920, read_time=1328276, write_time=31400),
 'nvme0n2': sdiskio(read_count=12337189, write_count=71920, read_bytes=125658554368, write_bytes=75413585920, read_time=1338748, write_time=32188),
 'nvme0n3': sdiskio(read_count=12333970, write_count=71921, read_bytes=125648957440, write_bytes=75414634496, read_time=1340996, write_time=32440),
 'nvme0n4': sdiskio(read_count=12337859, write_count=71921, read_bytes=125662195712, write_bytes=75414634496, read_time=1349044, write_time=32272),
 'nvme0n5': sdiskio(read_count=12335489, write_count=71921, read_bytes=125655179264, write_bytes=75414634496, read_time=1355292, write_time=31796),
 'nvme0n6': sdiskio(read_count=12332568, write_count=71921, read_bytes=125640523776, write_bytes=75414634496, read_time=1354424, write_time=29684),
 'sda1': sdiskio(read_count=9915, write_count=38226, read_bytes=392524288, write_bytes=1677148160, read_time=11464, write_time=101900),
 'sda14': sdiskio(read_count=184, write_count=0, read_bytes=746496, write_bytes=0, read_time=84, write_time=0),
 'sda15': sdiskio(read_count=1794, write_count=2, read_bytes=4707328, write_bytes=1024, read_time=6584, write_time=0)}
@kportertx
Author

kportertx commented Jul 19, 2018

Here is a screenshot from glances: https://imgur.com/a/El1w5St

In this test I created partitions on each NVMe drive and wrote to the partitions. You can see that the partitions' throughputs are much lower than what glances reports for the disks.

Here are the I/O counters for this instance:

psutil.disk_io_counters(perdisk=True)
{'loop0': sdiskio(read_count=8530, write_count=0, read_bytes=9819136, write_bytes=0, read_time=240, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=20),
 'loop1': sdiskio(read_count=59, write_count=0, read_bytes=388096, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'nvme0n1': sdiskio(read_count=54637824, write_count=1919886, read_bytes=4452350197760, write_bytes=10793396830208, read_time=6710428, write_time=9016132, read_merged_count=0, write_merged_count=9, busy_time=3493888),
 'nvme0n1p1': sdiskio(read_count=900781, write_count=1606394, read_bytes=9178898432, write_bytes=215369252864, read_time=134596, write_time=8117308, read_merged_count=0, write_merged_count=0, busy_time=1070588),
 'nvme0n2': sdiskio(read_count=54643678, write_count=1915472, read_bytes=4452504895488, write_bytes=7567725395968, read_time=10748396, write_time=12501876, read_merged_count=0, write_merged_count=9, busy_time=4382420),
 'nvme0n2p1': sdiskio(read_count=901222, write_count=1602074, read_bytes=9182654464, write_bytes=214803021824, read_time=314568, write_time=8104096, read_merged_count=0, write_merged_count=0, busy_time=1088976),
 'nvme0n3': sdiskio(read_count=54629862, write_count=1911547, read_bytes=4452043653120, write_bytes=7563609735168, read_time=10772720, write_time=12448844, read_merged_count=0, write_merged_count=9, busy_time=4401864),
 'nvme0n3p1': sdiskio(read_count=899897, write_count=1598149, read_bytes=9175277568, write_bytes=214288564224, read_time=311004, write_time=8095564, read_merged_count=0, write_merged_count=0, busy_time=1088720),
 'nvme0n4': sdiskio(read_count=54624505, write_count=1907548, read_bytes=4451883712512, write_bytes=7559416479744, read_time=10760884, write_time=12481156, read_merged_count=0, write_merged_count=9, busy_time=4420580),
 'nvme0n4p1': sdiskio(read_count=899586, write_count=1594150, read_bytes=9174003712, write_bytes=213764407296, read_time=310768, write_time=8082360, read_merged_count=0, write_merged_count=0, busy_time=1086468),
 'nvme0n5': sdiskio(read_count=54639787, write_count=1903810, read_bytes=4452391583744, write_bytes=7555504242688, read_time=10752992, write_time=12469108, read_merged_count=0, write_merged_count=9, busy_time=4448944),
 'nvme0n5p1': sdiskio(read_count=900548, write_count=1590412, read_bytes=9178988544, write_bytes=213275377664, read_time=311396, write_time=8073924, read_merged_count=0, write_merged_count=0, busy_time=1086088),
 'nvme0n6': sdiskio(read_count=54631693, write_count=1900793, read_bytes=4452180623360, write_bytes=7552340688896, read_time=10755088, write_time=12465432, read_merged_count=0, write_merged_count=9, busy_time=4477848),
 'nvme0n6p1': sdiskio(read_count=899269, write_count=1587395, read_bytes=9173749760, write_bytes=212879933440, read_time=310860, write_time=8065060, read_merged_count=0, write_merged_count=0, busy_time=1085004),
 'nvme0n7': sdiskio(read_count=54627834, write_count=1898153, read_bytes=4452039819264, write_bytes=7549572448256, read_time=10752184, write_time=12451000, read_merged_count=0, write_merged_count=9, busy_time=4498608),
 'nvme0n7p1': sdiskio(read_count=900410, write_count=1584755, read_bytes=9184272384, write_bytes=212533903360, read_time=308808, write_time=8055992, read_merged_count=0, write_merged_count=0, busy_time=1084472),
 'nvme0n8': sdiskio(read_count=54619655, write_count=1895791, read_bytes=4451840327680, write_bytes=7547095711744, read_time=10721076, write_time=12431596, read_merged_count=0, write_merged_count=9, busy_time=4494984),
 'nvme0n8p1': sdiskio(read_count=900297, write_count=1582393, read_bytes=9185759232, write_bytes=212224311296, read_time=307136, write_time=8043476, read_merged_count=0, write_merged_count=0, busy_time=1083080),
 'sda1': sdiskio(read_count=10066, write_count=30488, read_bytes=391778816, write_bytes=1334566912, read_time=14476, write_time=115840, read_merged_count=123, write_merged_count=39726, busy_time=7692),
 'sda14': sdiskio(read_count=184, write_count=0, read_bytes=746496, write_bytes=0, read_time=116, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'sda15': sdiskio(read_count=1794, write_count=2, read_bytes=4707328, write_bytes=1024, read_time=9044, write_time=4, read_merged_count=0, write_merged_count=0, busy_time=84),
 'sdb1': sdiskio(read_count=90, write_count=0, read_bytes=4260864, write_bytes=0, read_time=20, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=4),
 'sdb2': sdiskio(read_count=86, write_count=0, read_bytes=4244480, write_bytes=0, read_time=8, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'sdb3': sdiskio(read_count=86, write_count=0, read_bytes=4244480, write_bytes=0, read_time=16, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'sdb4': sdiskio(read_count=86, write_count=0, read_bytes=4244480, write_bytes=0, read_time=28, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=4),
 'sdb5': sdiskio(read_count=86, write_count=0, read_bytes=4244480, write_bytes=0, read_time=8, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'sdb6': sdiskio(read_count=86, write_count=0, read_bytes=4244480, write_bytes=0, read_time=12, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'sdb7': sdiskio(read_count=86, write_count=0, read_bytes=4244480, write_bytes=0, read_time=8, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'sdb8': sdiskio(read_count=86, write_count=0, read_bytes=4244480, write_bytes=0, read_time=8, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0)}

@giampaolo
Owner

Specifically what values are higher than expected? read_bytes and write_bytes? The disk_io_counters() implementation changed considerably between psutil 3.x and 5.x, so yeah, we may have a problem there.

5.x

def disk_io_counters():
    """Return disk I/O statistics for every disk installed on the
    system as a dict of raw tuples.
    """
    # determine partitions we want to look for
    def get_partitions():
        partitions = []
        with open_text("%s/partitions" % get_procfs_path()) as f:
            lines = f.readlines()[2:]
        for line in reversed(lines):
            _, _, _, name = line.split()
            if name[-1].isdigit():
                # we're dealing with a partition (e.g. 'sda1'); 'sda' will
                # also be around but we want to omit it
                partitions.append(name)
            else:
                if not partitions or not partitions[-1].startswith(name):
                    # we're dealing with a disk entity for which no
                    # partitions have been defined (e.g. 'sda' but
                    # 'sda1' was not around), see:
                    # https://github.com/giampaolo/psutil/issues/338
                    partitions.append(name)
        return partitions

    retdict = {}
    partitions = get_partitions()
    with open_text("%s/diskstats" % get_procfs_path()) as f:
        lines = f.readlines()
    for line in lines:
        # OK, this is a bit confusing. The format of /proc/diskstats can
        # have 3 variations.
        # On Linux 2.4 each line has always 15 fields, e.g.:
        # "3     0   8 hda 8 8 8 8 8 8 8 8 8 8 8"
        # On Linux 2.6+ each line *usually* has 14 fields, and the disk
        # name is in another position, like this:
        # "3    0   hda 8 8 8 8 8 8 8 8 8 8 8"
        # ...unless (Linux 2.6) the line refers to a partition instead
        # of a disk, in which case the line has less fields (7):
        # "3    1   hda1 8 8 8 8"
        # See:
        # https://www.kernel.org/doc/Documentation/iostats.txt
        # https://www.kernel.org/doc/Documentation/ABI/testing/procfs-diskstats
        fields = line.split()
        fields_len = len(fields)
        if fields_len == 15:
            # Linux 2.4
            name = fields[3]
            reads = int(fields[2])
            (reads_merged, rbytes, rtime, writes, writes_merged,
                wbytes, wtime, _, busy_time, _) = map(int, fields[4:14])
        elif fields_len == 14:
            # Linux 2.6+, line referring to a disk
            name = fields[2]
            (reads, reads_merged, rbytes, rtime, writes, writes_merged,
                wbytes, wtime, _, busy_time, _) = map(int, fields[3:14])
        elif fields_len == 7:
            # Linux 2.6+, line referring to a partition
            name = fields[2]
            reads, rbytes, writes, wbytes = map(int, fields[3:])
            rtime = wtime = reads_merged = writes_merged = busy_time = 0
        else:
            raise ValueError("not sure how to interpret line %r" % line)

        if name in partitions:
            ssize = get_sector_size(name)
            rbytes *= ssize
            wbytes *= ssize
            retdict[name] = (reads, writes, rbytes, wbytes, rtime, wtime,
                             reads_merged, writes_merged, busy_time)
    return retdict

3.x:

def disk_io_counters():
    """Return disk I/O statistics for every disk installed on the
    system as a dict of raw tuples.
    """
    # man iostat states that sectors are equivalent with blocks and
    # have a size of 512 bytes since 2.4 kernels. This value is
    # needed to calculate the amount of disk I/O in bytes.
    SECTOR_SIZE = 512

    # determine partitions we want to look for
    partitions = []
    with open_text("%s/partitions" % get_procfs_path()) as f:
        lines = f.readlines()[2:]
    for line in reversed(lines):
        _, _, _, name = line.split()
        if name[-1].isdigit():
            # we're dealing with a partition (e.g. 'sda1'); 'sda' will
            # also be around but we want to omit it
            partitions.append(name)
        else:
            if not partitions or not partitions[-1].startswith(name):
                # we're dealing with a disk entity for which no
                # partitions have been defined (e.g. 'sda' but
                # 'sda1' was not around), see:
                # https://github.com/giampaolo/psutil/issues/338
                partitions.append(name)
    #
    retdict = {}
    with open_text("%s/diskstats" % get_procfs_path()) as f:
        lines = f.readlines()
    for line in lines:
        # http://www.mjmwired.net/kernel/Documentation/iostats.txt
        fields = line.split()
        if len(fields) > 7:
            _, _, name, reads, _, rbytes, rtime, writes, _, wbytes, wtime = \
                fields[:11]
        else:
            # from kernel 2.6.0 to 2.6.25
            _, _, name, reads, rbytes, writes, wbytes = fields
            rtime, wtime = 0, 0
        if name in partitions:
            rbytes = int(rbytes) * SECTOR_SIZE
            wbytes = int(wbytes) * SECTOR_SIZE
            reads = int(reads)
            writes = int(writes)
            rtime = int(rtime)
            wtime = int(wtime)
            retdict[name] = (reads, writes, rbytes, wbytes, rtime, wtime)
    return retdict

@kportertx
Author

(sent from phone - please excuse my fat thumbs :D)

Again, I didn't dig into glances to see which psutil api is being used. Also I'm not sure exactly how counters would be used to establish a throughput (would you need to make two calls and compute dx/dt?).

Specifically what values are higher than expected?

From the screenshot you can see that both the read and write rates are off by a factor of 8.

By inspection of the snippets you provided, I believe nvme0n1 is incorrectly being classified as a partition:

            if name[-1].isdigit():
                # we're dealing with a partition (e.g. 'sda1'); 'sda' will
                # also be around but we want to omit it
                partitions.append(name)

Which means it will execute the final condition, which is probably what's causing the inflation:

        if name in partitions:
            ssize = get_sector_size(name)
            rbytes *= ssize
            wbytes *= ssize
            retdict[name] = (reads, writes, rbytes, wbytes, rtime, wtime,

Both versions share the first issue, but the effect of the final condition has structurally changed - could it also be a functional change? If so, it may explain why the two versions have differing results.
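
To illustrate that heuristic (a quick check in a Python shell; these are just example device names, not psutil internals):

>>> 'sda1'[-1].isdigit()      # a real partition: last character is a digit
True
>>> 'nvme0n1'[-1].isdigit()   # a whole NVMe disk also ends in a digit
True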

@giampaolo
Owner

(sent from phone - please excuse my fat thumbs :D)

I can totally relate :D

Again, I didn't dig into glances to see which psutil api is being used.

It's psutil.disk_io_counters(perdisk=True)

Also I'm not sure exactly how counters would be used to establish a throughput (would you need to make two calls and compute dx/dt?)

Correct.
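
For concreteness, here is a minimal sketch of that dx/dt approach (illustrative only, not glances' actual code; the 1-second interval is arbitrary):

import time
import psutil

INTERVAL = 1.0  # seconds between the two samples

before = psutil.disk_io_counters(perdisk=True)
time.sleep(INTERVAL)
after = psutil.disk_io_counters(perdisk=True)

for name, now in after.items():
    then = before.get(name)
    if then is None:
        continue
    # Throughput = delta of the byte counters divided by the elapsed time.
    read_bps = (now.read_bytes - then.read_bytes) / INTERVAL
    write_bps = (now.write_bytes - then.write_bytes) / INTERVAL
    print("%-12s read %12.0f B/s  write %12.0f B/s" % (name, read_bps, write_bps))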

By inspection of the snippets you provided, I believe nvme0n1 is incorrectly being classified as a partition

That is also correct. There is a PR open for this (#1244). I am not sure why you think that could be the cause of the inflation though (it's probably my fault as I failed to understand your reasoning).

@kportertx
Author

I am not sure why you think that could be the cause of the inflation though

Because non-partitions aren't expected to have their rbytes and wbytes multiplied by ssize:

        if name in partitions:
            ssize = get_sector_size(name)
            rbytes *= ssize
            wbytes *= ssize
            retdict[name] = (reads, writes, rbytes, wbytes, rtime, wtime,

@giampaolo
Owner

So disk_io_counters() also has the problem of returning things which are not partitions (and should not be returned)?

@kportertx
Author

Appears so... FYI, a disk without partitions can still have a read/write load. That is actually common for my daily work - so that 'bug' is a feature :b.

@giampaolo
Owner

Can you please paste the output of cat /proc/partitions and iostat?

@kportertx
Author

I can tomorrow when I have access to the instance.

@kportertx
Author

# cat /proc/partitions && iostat -x && sleep 5 && cat /proc/partitions && iostat -x
major minor  #blocks  name

   7        0      88964 loop0
   7        1      50016 loop1
   7        2      50476 loop2
   8        0   10485760 sda
   8        1   10372079 sda1
   8       14       4096 sda14
   8       15     108544 sda15
   8       16 3145728000 sdb
   8       17  387973120 sdb1
   8       18  387973120 sdb2
   8       19  387973120 sdb3
   8       20  387973120 sdb4
   8       21  387973120 sdb5
   8       22  387973120 sdb6
   8       23  387973120 sdb7
   8       24  387973120 sdb8
 259        0  393216000 nvme0n1
 259       16  314572800 nvme0n1p1
 259        1  393216000 nvme0n2
 259        9  314572800 nvme0n2p1
 259        2  393216000 nvme0n3
 259        8  314572800 nvme0n3p1
 259        3  393216000 nvme0n4
 259       11  314572800 nvme0n4p1
 259        4  393216000 nvme0n5
 259       10  314572800 nvme0n5p1
 259        5  393216000 nvme0n6
 259       13  314572800 nvme0n6p1
 259        6  393216000 nvme0n7
 259       12  314572800 nvme0n7p1
 259        7  393216000 nvme0n8
 259       14  314572800 nvme0n8p1
Linux 4.15.0-1009-gcp (instance-3)      07/20/18        _x86_64_        (64 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.46    0.00    0.77    8.02    0.00   90.75

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
loop0            0.10    0.00      0.12      0.00     0.00     0.00   0.00   0.00    0.03    0.00   0.00     1.12     0.00   0.00   0.00
loop1            0.00    0.00      0.01      0.00     0.00     0.00   0.00   0.00    0.02    0.00   0.00     1.95     0.00   0.01   0.00
loop2            0.00    0.00      0.01      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     1.89     0.00   0.00   0.00
sda              0.26    0.92      5.23     22.02     0.00     0.77   0.56  45.64    1.33    3.43   0.00    19.78    23.93   0.27   0.03
sdb              0.03    0.00      1.05      0.00     0.00     0.00   0.00  72.73    0.29    5.33   0.00    40.33    13.33   0.03   0.00
nvme0n1       3113.00   37.48  30851.30  30509.61     0.00     0.00   0.00   0.00    0.19    3.07   0.46     9.91   813.94   0.07  22.95
nvme0n2       3113.19   37.43  30852.04  25728.00     0.00     0.00   0.00   0.00    0.45    4.25   1.33     9.91   687.36   0.10  31.82
nvme0n3       3112.79   37.38  30850.41  25721.90     0.00     0.00   0.00   0.00    0.46    4.23   1.34     9.91   688.07   0.10  31.88
nvme0n4       3112.57   37.33  30849.53  25715.69     0.00     0.00   0.00   0.00    0.46    4.24   1.34     9.91   688.80   0.10  31.91
nvme0n5       3113.06   37.29  30851.56  25709.89     0.00     0.00   0.00   0.00    0.45    4.24   1.32     9.91   689.48   0.10  32.05
nvme0n6       3112.89   37.25  30850.99  25705.20     0.00     0.00   0.00   0.00    0.45    4.25   1.31     9.91   690.03   0.10  32.21
nvme0n7       3112.58   37.22  30849.67  25701.09     0.00     0.00   0.00   0.00    0.44    4.25   1.29     9.91   690.52   0.10  32.37
nvme0n8       3112.69   37.19  30850.20  25697.44     0.00     0.00   0.00   0.00    0.44    4.22   1.29     9.91   690.95   0.10  32.47

major minor  #blocks  name

   7        0      88964 loop0
   7        1      50016 loop1
   7        2      50476 loop2
   8        0   10485760 sda
   8        1   10372079 sda1
   8       14       4096 sda14
   8       15     108544 sda15
   8       16 3145728000 sdb
   8       17  387973120 sdb1
   8       18  387973120 sdb2
   8       19  387973120 sdb3
   8       20  387973120 sdb4
   8       21  387973120 sdb5
   8       22  387973120 sdb6
   8       23  387973120 sdb7
   8       24  387973120 sdb8
 259        0  393216000 nvme0n1
 259       16  314572800 nvme0n1p1
 259        1  393216000 nvme0n2
 259        9  314572800 nvme0n2p1
 259        2  393216000 nvme0n3
 259        8  314572800 nvme0n3p1
 259        3  393216000 nvme0n4
 259       11  314572800 nvme0n4p1
 259        4  393216000 nvme0n5
 259       10  314572800 nvme0n5p1
 259        5  393216000 nvme0n6
 259       13  314572800 nvme0n6p1
 259        6  393216000 nvme0n7
 259       12  314572800 nvme0n7p1
 259        7  393216000 nvme0n8
 259       14  314572800 nvme0n8p1
Linux 4.15.0-1009-gcp (instance-3)      07/20/18        _x86_64_        (64 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.46    0.00    0.77    8.02    0.00   90.75

Device            r/s     w/s     rkB/s     wkB/s   rrqm/s   wrqm/s  %rrqm  %wrqm r_await w_await aqu-sz rareq-sz wareq-sz  svctm  %util
loop0            0.10    0.00      0.12      0.00     0.00     0.00   0.00   0.00    0.03    0.00   0.00     1.12     0.00   0.00   0.00
loop1            0.00    0.00      0.01      0.00     0.00     0.00   0.00   0.00    0.02    0.00   0.00     1.95     0.00   0.01   0.00
loop2            0.00    0.00      0.01      0.00     0.00     0.00   0.00   0.00    0.00    0.00   0.00     1.89     0.00   0.00   0.00
sda              0.26    0.92      5.23     22.02     0.00     0.77   0.56  45.64    1.33    3.43   0.00    19.78    23.93   0.27   0.03
sdb              0.03    0.00      1.05      0.00     0.00     0.00   0.00  72.73    0.29    5.33   0.00    40.33    13.33   0.03   0.00
nvme0n1       3113.23   37.48  30852.33  30509.00     0.00     0.00   0.00   0.00    0.19    3.07   0.46     9.91   813.94   0.07  22.96
nvme0n2       3113.42   37.43  30853.07  25727.69     0.00     0.00   0.00   0.00    0.45    4.25   1.33     9.91   687.37   0.10  31.82
nvme0n3       3113.02   37.38  30851.43  25721.58     0.00     0.00   0.00   0.00    0.46    4.23   1.34     9.91   688.08   0.10  31.88
nvme0n4       3112.80   37.33  30850.55  25715.37     0.00     0.00   0.00   0.00    0.46    4.24   1.34     9.91   688.81   0.10  31.91
nvme0n5       3113.29   37.29  30852.58  25709.57     0.00     0.00   0.00   0.00    0.45    4.24   1.32     9.91   689.49   0.10  32.05
nvme0n6       3113.11   37.25  30852.01  25704.88     0.00     0.00   0.00   0.00    0.45    4.25   1.31     9.91   690.04   0.10  32.21
nvme0n7       3112.81   37.22  30850.70  25700.78     0.00     0.00   0.00   0.00    0.44    4.25   1.29     9.91   690.53   0.10  32.38
nvme0n8       3112.91   37.19  30851.22  25697.12     0.00     0.00   0.00   0.00    0.44    4.22   1.29     9.91   690.96   0.10  32.47
# fdisk /dev/nvme0n1

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p   

Disk /dev/nvme0n1: 375 GiB, 402653184000 bytes, 98304000 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D64E3D04-7A83-4C46-BAB7-0EC51F515F31
# python -c "import psutil, pprint, time; pprint.pprint(psutil.disk_io_counters(perdisk=True));time.sleep(5);pprint.pprint(psutil.disk_io_counters(perdisk=True));"
{'loop0': sdiskio(read_count=8597, write_count=0, read_bytes=9887744, write_bytes=0, read_time=240, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=20),
 'loop1': sdiskio(read_count=336, write_count=0, read_bytes=671744, write_bytes=0, read_time=8, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=4),
 'loop2': sdiskio(read_count=352, write_count=0, read_bytes=680960, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'nvme0n1': sdiskio(read_count=257653236, write_count=3090555, read_bytes=20886713466880, write_bytes=20613680168960, read_time=48034352, write_time=9472428, read_merged_count=0, write_merged_count=9, busy_time=18998588),
 'nvme0n1p1': sdiskio(read_count=203916187, write_count=2777063, read_bytes=2063474257920, write_bytes=1442904670208, read_time=41458520, write_time=8573604, read_merged_count=0, write_merged_count=0, busy_time=16588764),
 'nvme0n2': sdiskio(read_count=257668560, write_count=3086141, read_bytes=20887185817600, write_bytes=17388008734720, read_time=117008756, write_time=13094696, read_merged_count=0, write_merged_count=9, busy_time=26320504),
 'nvme0n2p1': sdiskio(read_count=203926095, write_count=2772743, read_bytes=2063517696000, write_bytes=1442338439168, read_time=106574928, write_time=8696916, read_merged_count=0, write_merged_count=0, busy_time=23045996),
 'nvme0n3': sdiskio(read_count=257633609, write_count=3082216, read_bytes=20886016327680, write_bytes=17383893073920, read_time=117534956, write_time=13030712, read_merged_count=0, write_merged_count=9, busy_time=26370780),
 'nvme0n3p1': sdiskio(read_count=203903638, write_count=2768818, read_bytes=2063421812736, write_bytes=1441823981568, read_time=107073236, write_time=8677432, read_merged_count=0, write_merged_count=0, busy_time=23072532),
 'nvme0n4': sdiskio(read_count=257615982, write_count=3078217, read_bytes=20885454323712, write_bytes=17379699818496, read_time=117231204, write_time=13038472, read_merged_count=0, write_merged_count=9, busy_time=26396768),
 'nvme0n4p1': sdiskio(read_count=203891057, write_count=2764819, read_bytes=2063370280960, write_bytes=1441299824640, read_time=106781084, write_time=8639676, read_merged_count=0, write_merged_count=0, busy_time=23080440),
 'nvme0n5': sdiskio(read_count=257657375, write_count=3074479, read_bytes=20886849191936, write_bytes=17375787581440, read_time=116258764, write_time=13026604, read_merged_count=0, write_merged_count=9, busy_time=26511720),
 'nvme0n5p1': sdiskio(read_count=203918123, write_count=2761081, read_bytes=2063486087168, write_bytes=1440810795008, read_time=105817164, write_time=8631420, read_merged_count=0, write_merged_count=0, busy_time=23166240),
 'nvme0n6': sdiskio(read_count=257640849, write_count=3071462, read_bytes=20886393323520, write_bytes=17372624027648, read_time=114959684, write_time=13031452, read_merged_count=0, write_merged_count=9, busy_time=26646604),
 'nvme0n6p1': sdiskio(read_count=203908407, write_count=2758064, read_bytes=2063450189824, write_bytes=1440415350784, read_time=104515456, write_time=8631080, read_merged_count=0, write_merged_count=0, busy_time=23271124),
 'nvme0n7': sdiskio(read_count=257615426, write_count=3068822, read_bytes=20885507178496, write_bytes=17369855787008, read_time=113555192, write_time=13017140, read_merged_count=0, write_merged_count=9, busy_time=26782820),
 'nvme0n7p1': sdiskio(read_count=203887993, write_count=2755424, read_bytes=2063367618560, write_bytes=1440069320704, read_time=103111816, write_time=8622132, read_merged_count=0, write_merged_count=0, busy_time=23387216),
 'nvme0n8': sdiskio(read_count=257625620, write_count=3066461, read_bytes=20885909733376, write_bytes=17367387439104, read_time=113264468, write_time=12935068, read_merged_count=0, write_merged_count=9, busy_time=26861044),
 'nvme0n8p1': sdiskio(read_count=203906253, write_count=2753063, read_bytes=2063444361216, write_bytes=1439760777216, read_time=102850528, write_time=8546948, read_merged_count=0, write_merged_count=0, busy_time=23465376),
 'sda1': sdiskio(read_count=19708, write_count=64744, read_bytes=433701376, write_bytes=1858785280, read_time=19636, write_time=260376, read_merged_count=123, write_merged_count=63729, busy_time=26316),
 'sda14': sdiskio(read_count=184, write_count=0, read_bytes=746496, write_bytes=0, read_time=116, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'sda15': sdiskio(read_count=1802, write_count=2, read_bytes=4711424, write_bytes=1024, read_time=9048, write_time=4, read_merged_count=0, write_merged_count=0, busy_time=84),
 'sdb1': sdiskio(read_count=42, write_count=0, read_bytes=2121728, write_bytes=0, read_time=12, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'sdb2': sdiskio(read_count=40, write_count=0, read_bytes=2113536, write_bytes=0, read_time=32, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=8),
 'sdb3': sdiskio(read_count=40, write_count=0, read_bytes=2113536, write_bytes=0, read_time=32, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=8),
 'sdb4': sdiskio(read_count=40, write_count=0, read_bytes=2113536, write_bytes=0, read_time=8, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'sdb5': sdiskio(read_count=40, write_count=0, read_bytes=2113536, write_bytes=0, read_time=32, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=8),
 'sdb6': sdiskio(read_count=40, write_count=0, read_bytes=2113536, write_bytes=0, read_time=24, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=4),
 'sdb7': sdiskio(read_count=40, write_count=0, read_bytes=2113536, write_bytes=0, read_time=36, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=12),
 'sdb8': sdiskio(read_count=40, write_count=0, read_bytes=2113536, write_bytes=0, read_time=44, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=12)}
{'loop0': sdiskio(read_count=8597, write_count=0, read_bytes=9887744, write_bytes=0, read_time=240, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=20),
 'loop1': sdiskio(read_count=336, write_count=0, read_bytes=671744, write_bytes=0, read_time=8, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=4),
 'loop2': sdiskio(read_count=352, write_count=0, read_bytes=680960, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'nvme0n1': sdiskio(read_count=257686894, write_count=3090654, read_bytes=20888643600384, write_bytes=20614510641152, read_time=48041108, write_time=9472484, read_merged_count=0, write_merged_count=9, busy_time=19001140),
 'nvme0n1p1': sdiskio(read_count=203949845, write_count=2777162, read_bytes=2063715524608, write_bytes=1443008479232, read_time=41465276, write_time=8573660, read_merged_count=0, write_merged_count=0, busy_time=16591316),
 'nvme0n2': sdiskio(read_count=257702635, write_count=3086240, read_bytes=20889137971200, write_bytes=17388839206912, read_time=117021012, write_time=13094748, read_merged_count=0, write_merged_count=9, busy_time=26323632),
 'nvme0n2p1': sdiskio(read_count=203960170, write_count=2772842, read_bytes=2063761715200, write_bytes=1442442248192, read_time=106587184, write_time=8696968, read_merged_count=0, write_merged_count=0, busy_time=23049124),
 'nvme0n3': sdiskio(read_count=257668005, write_count=3082315, read_bytes=20887978999808, write_bytes=17384723546112, read_time=117547752, write_time=13030784, read_merged_count=0, write_merged_count=9, busy_time=26373980),
 'nvme0n3p1': sdiskio(read_count=203938034, write_count=2768917, read_bytes=2063667146752, write_bytes=1441927790592, read_time=107086032, write_time=8677504, read_merged_count=0, write_merged_count=0, busy_time=23075736),
 'nvme0n4': sdiskio(read_count=257650135, write_count=3078316, read_bytes=20887409033216, write_bytes=17380530290688, read_time=117244040, write_time=13038516, read_merged_count=0, write_merged_count=9, busy_time=26399948),
 'nvme0n4p1': sdiskio(read_count=203925210, write_count=2764918, read_bytes=2063614619648, write_bytes=1441403633664, read_time=106793920, write_time=8639720, read_merged_count=0, write_merged_count=0, busy_time=23083624),
 'nvme0n5': sdiskio(read_count=257691308, write_count=3074578, read_bytes=20888796692480, write_bytes=17376618053632, read_time=116271516, write_time=13026652, read_merged_count=0, write_merged_count=9, busy_time=26514836),
 'nvme0n5p1': sdiskio(read_count=203952056, write_count=2761180, read_bytes=2063729524736, write_bytes=1440914604032, read_time=105829916, write_time=8631468, read_merged_count=0, write_merged_count=0, busy_time=23169356),
 'nvme0n6': sdiskio(read_count=257675082, write_count=3071561, read_bytes=20888350654464, write_bytes=17373454499840, read_time=114972124, write_time=13031492, read_merged_count=0, write_merged_count=9, busy_time=26649832),
 'nvme0n6p1': sdiskio(read_count=203942640, write_count=2758163, read_bytes=2063694856192, write_bytes=1440519159808, read_time=104527896, write_time=8631120, read_merged_count=0, write_merged_count=0, busy_time=23274348),
 'nvme0n7': sdiskio(read_count=257649723, write_count=3068921, read_bytes=20887458250752, write_bytes=17370686259200, read_time=113567580, write_time=13017196, read_merged_count=0, write_merged_count=9, busy_time=26785960),
 'nvme0n7p1': sdiskio(read_count=203922290, write_count=2755523, read_bytes=2063611502592, write_bytes=1440173129728, read_time=103124204, write_time=8622188, read_merged_count=0, write_merged_count=0, busy_time=23390360),
 'nvme0n8': sdiskio(read_count=257660072, write_count=3066560, read_bytes=20887874240512, write_bytes=17368217911296, read_time=113277120, write_time=12935108, read_merged_count=0, write_merged_count=9, busy_time=26864376),
 'nvme0n8p1': sdiskio(read_count=203940705, write_count=2753162, read_bytes=2063689924608, write_bytes=1439864586240, read_time=102863180, write_time=8546988, read_merged_count=0, write_merged_count=0, busy_time=23468704),
 'sda1': sdiskio(read_count=19708, write_count=64747, read_bytes=433701376, write_bytes=1858838528, read_time=19636, write_time=260376, read_merged_count=123, write_merged_count=63739, busy_time=26316),
 'sda14': sdiskio(read_count=184, write_count=0, read_bytes=746496, write_bytes=0, read_time=116, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'sda15': sdiskio(read_count=1802, write_count=2, read_bytes=4711424, write_bytes=1024, read_time=9048, write_time=4, read_merged_count=0, write_merged_count=0, busy_time=84),
 'sdb1': sdiskio(read_count=42, write_count=0, read_bytes=2121728, write_bytes=0, read_time=12, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'sdb2': sdiskio(read_count=40, write_count=0, read_bytes=2113536, write_bytes=0, read_time=32, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=8),
 'sdb3': sdiskio(read_count=40, write_count=0, read_bytes=2113536, write_bytes=0, read_time=32, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=8),
 'sdb4': sdiskio(read_count=40, write_count=0, read_bytes=2113536, write_bytes=0, read_time=8, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'sdb5': sdiskio(read_count=40, write_count=0, read_bytes=2113536, write_bytes=0, read_time=32, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=8),
 'sdb6': sdiskio(read_count=40, write_count=0, read_bytes=2113536, write_bytes=0, read_time=24, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=4),
 'sdb7': sdiskio(read_count=40, write_count=0, read_bytes=2113536, write_bytes=0, read_time=36, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=12),
 'sdb8': sdiskio(read_count=40, write_count=0, read_bytes=2113536, write_bytes=0, read_time=44, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=12)}

@giampaolo
Owner

giampaolo commented Jul 21, 2018

OK, iostat skips some of those disks (and psutil doesn't). That's what we should do too. I'm trying to skim through the iostat source code in order to understand where the skipping of certain devices occurs:
https://github.com/sysstat/sysstat/blob/master/iostat.c
Any help is welcome. ;)

@giampaolo
Owner

I filed a bug on the iostat project asking for help:
sysstat/sysstat#185

@kportertx
Author

kportertx commented Jul 21, 2018

Btw, if a disk has partitions, I wouldn't expect any IO to the disk that wasn't through a partition.

Earlier I was saying that I often have disks without partitions that are taking I/O. (I realize I was vague before.)

So I believe the current get_partitions() returns partitions of disks as well as disks without partitions, which sounds reasonable to me. One of the issues is that NVMe partitions aren't being matched to their parent disk (which I believe #1244 addresses). The other issue is that disk I/O is inflated by 8x on raw NVMe devices.

I can try to get iostat output and I/O counters for a raw NVMe device (no partitions) on Monday if that would be useful.

Currently not near a computer.

giampaolo added a commit that referenced this issue Jul 28, 2018
Fix #1244, #1305, #1312. disk_io_counters(perdisk=False) on Linux was counting disk device + partitions(s) twice
@giampaolo
Owner

giampaolo commented Jul 28, 2018

This should now be fixed in #1313. Can you please use the latest git version and check if it's OK?

@kportertx
Author

I'm applying the same load to both nvme0n1p1 and nvme0n2; notice that the load to the raw devices is still inflated by 8x. Also, I'm not sure if you were expecting the root device for a partition to still appear in the output:

Taken from glances:

nvme0n1    2.08G  1.25G
nvme0n1p1   266M   160M
nvme0n2    2.08G  1.25G

Here is the result of calling the counters:

$ python -c "import psutil, pprint, time; pprint.pprint(psutil.disk_io_counters(perdisk=True));time.sleep(5);pprint.pprint(psutil.disk_io_counters(perdisk=True));"
{'loop0': sdiskio(read_count=8514, write_count=0, read_bytes=9767936, write_bytes=0, read_time=348, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=24),
 'loop1': sdiskio(read_count=53, write_count=0, read_bytes=371712, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'loop2': sdiskio(read_count=0, write_count=0, read_bytes=0, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'loop3': sdiskio(read_count=0, write_count=0, read_bytes=0, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'loop4': sdiskio(read_count=0, write_count=0, read_bytes=0, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'loop5': sdiskio(read_count=0, write_count=0, read_bytes=0, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'loop6': sdiskio(read_count=0, write_count=0, read_bytes=0, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'loop7': sdiskio(read_count=0, write_count=0, read_bytes=0, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'md0': sdiskio(read_count=0, write_count=0, read_bytes=0, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'nvme0n1': sdiskio(read_count=20412272, write_count=119051, read_bytes=1663729696768, write_bytes=998647365632, read_time=4452052, write_time=435808, read_merged_count=0, write_merged_count=9, busy_time=712420),
 'nvme0n1p1': sdiskio(read_count=20412067, write_count=119048, read_bytes=207960522752, write_bytes=124830875648, read_time=4452016, write_time=435808, read_merged_count=0, write_merged_count=0, busy_time=711348),
 'nvme0n2': sdiskio(read_count=20421643, write_count=119048, read_bytes=1663952322560, write_bytes=998647005184, read_time=4398072, write_time=444648, read_merged_count=0, write_merged_count=0, busy_time=690920),
 'sda': sdiskio(read_count=8885, write_count=46444, read_bytes=312783872, write_bytes=3042857984, read_time=13676, write_time=350056, read_merged_count=55, write_merged_count=80783, busy_time=16836),
 'sda1': sdiskio(read_count=6888, write_count=39422, read_bytes=306333696, write_bytes=3042816000, read_time=7032, write_time=349936, read_merged_count=55, write_merged_count=80775, busy_time=16752),
 'sda14': sdiskio(read_count=160, write_count=0, read_bytes=655360, write_bytes=0, read_time=88, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'sda15': sdiskio(read_count=1786, write_count=2, read_bytes=4688896, write_bytes=1024, read_time=6540, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=64)}
{'loop0': sdiskio(read_count=8514, write_count=0, read_bytes=9767936, write_bytes=0, read_time=348, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=24),
 'loop1': sdiskio(read_count=53, write_count=0, read_bytes=371712, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'loop2': sdiskio(read_count=0, write_count=0, read_bytes=0, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'loop3': sdiskio(read_count=0, write_count=0, read_bytes=0, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'loop4': sdiskio(read_count=0, write_count=0, read_bytes=0, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'loop5': sdiskio(read_count=0, write_count=0, read_bytes=0, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'loop6': sdiskio(read_count=0, write_count=0, read_bytes=0, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'loop7': sdiskio(read_count=0, write_count=0, read_bytes=0, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'md0': sdiskio(read_count=0, write_count=0, read_bytes=0, write_bytes=0, read_time=0, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'nvme0n1': sdiskio(read_count=20548958, write_count=119849, read_bytes=1674876583936, write_bytes=1005341474816, read_time=4496868, write_time=439084, read_merged_count=0, write_merged_count=9, busy_time=717372),
 'nvme0n1p1': sdiskio(read_count=20548753, write_count=119846, read_bytes=209353883648, write_bytes=125667639296, read_time=4496832, write_time=439084, read_merged_count=0, write_merged_count=0, busy_time=716284),
 'nvme0n2': sdiskio(read_count=20558881, write_count=119847, read_bytes=1675117297664, write_bytes=1005349502976, read_time=4443992, write_time=448040, read_merged_count=0, write_merged_count=0, busy_time=695812),
 'sda': sdiskio(read_count=8885, write_count=46445, read_bytes=312783872, write_bytes=3042862080, read_time=13676, write_time=350056, read_merged_count=55, write_merged_count=80783, busy_time=16836),
 'sda1': sdiskio(read_count=6888, write_count=39423, read_bytes=306333696, write_bytes=3042820096, read_time=7032, write_time=349936, read_merged_count=55, write_merged_count=80775, busy_time=16752),
 'sda14': sdiskio(read_count=160, write_count=0, read_bytes=655360, write_bytes=0, read_time=88, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=0),
 'sda15': sdiskio(read_count=1786, write_count=2, read_bytes=4688896, write_bytes=1024, read_time=6540, write_time=0, read_merged_count=0, write_merged_count=0, busy_time=64)}

@giampaolo
Owner

giampaolo commented Aug 1, 2018

I'm applying the same load to both nvme0n1p1 and nvme0n2; notice that the load to the raw devices is still inflated by 8x.

That is supposed to come directly from /proc/diskstats. psutil is merely reporting it. Please try to cat /proc/diskstats and take a look at the numbers. E.g.:

 259       0 nvme0n1 141175 1 5699664 20780 240671 168373 9314156 158048 0 8492 35540

"read_bytes" is the third column after the disk name (5699664). psutil extracts that and multiplies it by 512 (sector size).

Also I'm not sure if you were expecting the root device for a partition to still appear in the output

Yes, that is expected. The difference introduced in #1313 is that when perdisk=False, disk partitions (e.g. nvme0n1p1) are not taken into account; only root devices (e.g. nvme0n1) are.
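
For reference, with perdisk=False the counters come back as a single system-wide sdiskio total rather than a per-device dict (illustrative output, values elided):

>>> psutil.disk_io_counters()
sdiskio(read_count=..., write_count=..., read_bytes=..., write_bytes=..., ...)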

@kportertx
Author

"read_bytes" is the third column after the disk name (5699664). psutil extracts that and multiplies it by 512 (sector size).

The sector size on these disks is 4096; that could be the source of the 8x skew (4096 / 512 = 8).

@kportertx
Author

Interesting read:

https://stackoverflow.com/a/38136179/1364745

Short answer: the sector size is hard-coded to 512 B for legacy reasons. So the sector size diskstats is referring to is the 512 B constant, not what the disk actually uses.

@giampaolo
Owner

The sector size on these disks is 4096; that could be the source of the 8x skew.

How did you determine that? Can you paste the output of:

cat /sys/block/nvme0n1/queue/hw_sector_size
cat /sys/block/nvme0n1p1/queue/hw_sector_size

...and while you're at it also this again: ;-)

cat /proc/diskstats

https://stackoverflow.com/a/38136179/1364745

Interesting read indeed. I am not sure what to take away from it, though. It's still unclear whether it is advised to use 512 or to try to determine the sector size. FWIW, psutil currently tries to determine it:

def get_sector_size(name):
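
(For reference, the helper roughly looks like the sketch below - assuming it reads hw_sector_size from sysfs with a 512-byte fallback; see the linked source for the exact code:)

def get_sector_size(name):
    # Sketch: ask sysfs for the device's hardware sector size and fall
    # back to 512 bytes if the file is missing or unreadable.
    try:
        with open("/sys/block/%s/queue/hw_sector_size" % name) as f:
            return int(f.read().strip())
    except (IOError, OSError, ValueError):
        return 512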

I'm not sure this is a reliable method though.
Note: sudo fdisk -l is able to report sector size. My disks all have 512. How about yours?

@kportertx
Author

I had done an fdisk earlier in this thread:

# fdisk /dev/nvme0n1

Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): p   

Disk /dev/nvme0n1: 375 GiB, 402653184000 bytes, 98304000 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D64E3D04-7A83-4C46-

@kportertx
Author

BTW, I do not have these instances at the moment as the project using them is currently on hold, so I'm not able to run diskstats right now.

Also, as mentioned in the initial description, the amplification issue was not present in 3.4.2, which had SECTOR_SIZE hard-coded to 512: https://github.com/giampaolo/psutil/blob/release-3.4.2/psutil/_pslinux.py#L726

@giampaolo
Owner

OK, I think we finally got the hang of it: we should always assume a sector size of 512 instead of determining it at runtime.
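
In code terms, the fix direction is to replace the per-device get_sector_size(name) lookup with the fixed constant, e.g. (a minimal sketch, not the exact committed patch):

SECTOR_SIZE = 512  # /proc/diskstats reports sectors in fixed 512-byte units

def sectors_to_bytes(nsectors):
    # Always convert with the 512-byte constant, regardless of the
    # device's hardware sector size (e.g. 4096 on these NVMe disks).
    return nsectors * SECTOR_SIZE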

@giampaolo
Owner

Fixed. Thanks for helping debug this. It was tricky.

@kportertx
Author

LGTM

Noticed a typo in the next line in the history file:

1309: [Linux] Process.status() is unable to recognie "idle" and "parked" statuses (returns '?').

"recognize"

nlevitt added a commit to nlevitt/psutil that referenced this issue Apr 9, 2019
* origin/master: (182 commits)