No read throughput shown for the sequential read write workload #152

Open · mrashid2 opened this issue Oct 7, 2021 · 1 comment

mrashid2 commented Oct 7, 2021

I am running the Filebench workload provided below:

define fileset name="testF",entries=100,filesize=16m,prealloc,path="/mnt/hasanfs/tmp1"

define process name="readerP",instances=2 {
  thread name="readerT",instances=4 {
    flowop openfile name="openOP",filesetname="testF"
    flowop writewholefile name="writeOP",iters=4,filesetname="testF"
    flowop readwholefile name="readOP",iters=1,filesetname="testF"
    flowop closefile name="closeOP"
  }
}

create files
system "sync"
system "echo 3 > /proc/sys/vm/drop_caches"

run 60

I am running the workload on a Lustre cluster. When I check the stats on the server side, I see the following:

obdfilter.hasanfs-OST0000.stats=
snapshot_time             1633361244.519122 secs.usecs
read_bytes                1 samples [bytes] 4096 4096 4096
write_bytes               8479 samples [bytes] 1048576 1048576 8890875904
destroy                   13 samples [reqs]
statfs                    76 samples [reqs]
preprw                    8480 samples [reqs]
commitrw                  8480 samples [reqs]
ping                      57 samples [reqs]

obdfilter.hasanfs-OST0000.brw_stats=
snapshot_time:         1633361244.519588 (secs.usecs)

                         read        |        write
pages per bulk r/w    rpcs  %  cum % |  rpcs    %  cum %
256:                     0  0      0 |  8479  100    100

                         read        |        write
discontiguous pages   rpcs  %  cum % |  rpcs    %  cum %
0:                       0  0      0 |  8479  100    100

                         read        |        write
discontiguous blocks  rpcs  %  cum % |  rpcs    %  cum %
0:                       0  0      0 |  8479  100    100

                         read        |        write
disk fragmented I/Os  ios   %  cum % |  ios     %  cum %
1:                       0  0      0 |  7499   88     88
2:                       0  0      0 |   980   11    100

                         read        |        write
disk I/Os in flight   ios   %  cum % |  ios     %  cum %
1:                       0  0      0 |  7921   83     83
2:                       0  0      0 |  1407   14     98
3:                       0  0      0 |   116    1     99
4:                       0  0      0 |    15    0    100

                         read        |        write
I/O time (1/1000s)    ios   %  cum % |  ios     %  cum %
2:                       0  0      0 |  1963   23     23
4:                       0  0      0 |  5627   66     89
8:                       0  0      0 |   837    9     99
16:                      0  0      0 |    25    0     99
32:                      0  0      0 |    27    0    100

                         read        |        write
disk I/O size         ios   %  cum % |  ios     %  cum %
4K:                      0  0      0 |    62    0      0
8K:                      0  0      0 |   127    1      1
16K:                     0  0      0 |   113    1      3
32K:                     0  0      0 |     0    0      3
64K:                     0  0      0 |     0    0      3
128K:                    0  0      0 |    65    0      3
256K:                    0  0      0 |     0    0      3
512K:                    0  0      0 |  1127   11     15
1M:                      0  0      0 |  7965   84    100
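
For anyone reproducing this, counters like the ones above are read on the OSS through Lustre's lctl interface; a minimal sketch, assuming the same OST name as above (writing "clear" to a stats file is the usual way to reset it before an isolated run):

  lctl set_param obdfilter.hasanfs-OST0000.stats=clear    # reset counters before the run
  lctl get_param obdfilter.hasanfs-OST0000.stats          # per-OST read_bytes/write_bytes samples
  lctl get_param obdfilter.hasanfs-OST0000.brw_stats      # bulk I/O histograms as shown above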

Can anyone please explain to me why I am not seeing any read operations in the stats?

sectorsize512 (Member) commented
Hi, I'm not familiar with the monitoring tool you use, but you might not see any disk reads because all the data is being served from the page cache. Your dataset is pretty small and is created during the benchmark run, so it probably still resides in memory by the time the reads are executed.
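
One way to test that hypothesis: the dataset is 100 files × 16 MB ≈ 1.6 GB, which fits comfortably in client RAM, and each loop rewrites a file right before reading it, so readOP is served from cache. Filebench flowops take a directio attribute that requests O_DIRECT-style I/O bypassing the client page cache; a minimal sketch against the workload above (behavior of O_DIRECT on Lustre clients may vary):

  # force reads past the client page cache so they must reach the OSTs
  flowop readwholefile name="readOP",iters=1,filesetname="testF",directio

Alternatively, growing the fileset well beyond client memory would also push reads out to the servers.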
