zed 0.7.0 - Increased number of "Missed events" #6425

Closed
SenH opened this issue Jul 28, 2017 · 8 comments · Fixed by #6440
Labels
Component: ZED ZFS Event Daemon

Comments

@SenH (Contributor) commented Jul 28, 2017

System information

Type Version/Name
Distribution Name Ubuntu
Distribution Version 12.04.5 LTS
Linux Kernel 3.13.0-110-generic
Architecture x86_64
ZFS Version zfs-0.7.0-1
SPL Version zfs-0.7.0-1

Describe the problem you're observing

After updating to zfs 0.7.0-1, zed is reporting lots of missed events to syslog. The previously compiled zfs 0.7.0-rc3 did not exhibit these messages. The events are always reported when zfs-auto-snapshot runs (every 15 minutes).

There is a suggestion in the code to increase the zfs_zevent_len_max parameter. I tried 128/256/512 (default = 64), but it did not change anything.

cat /sys/module/zfs/parameters/zfs_zevent_len_max
512
Jul 28 22:15:03 kubrick zed[14779]: Missed 5 events
Jul 28 22:15:04 kubrick zed[14779]: Missed 6 events
Jul 28 22:15:05 kubrick zed[14779]: Missed 12 events
Jul 28 22:15:05 kubrick zed[14779]: Missed 11 events
Jul 28 22:15:06 kubrick zed[14779]: Missed 7 events
Jul 28 22:15:06 kubrick zed[14779]: Missed 11 events
Jul 28 22:15:07 kubrick zed[14779]: Missed 7 events
Jul 28 22:15:07 kubrick zed[14779]: Missed 12 events
Jul 28 22:15:08 kubrick zed[14779]: Missed 6 events
Jul 28 22:15:08 kubrick zed[14779]: Missed 12 events
Jul 28 22:15:08 kubrick zed[14779]: Missed 12 events
Jul 28 22:15:09 kubrick zed[14779]: Missed 6 events
Jul 28 22:17:03 kubrick zed[14779]: Missed 12 events
Jul 28 22:17:04 kubrick zed[14779]: Missed 6 events
Jul 28 22:17:04 kubrick zed[14779]: Missed 12 events
Jul 28 22:17:04 kubrick zed[14779]: Missed 6 events
Jul 28 22:17:05 kubrick zed[14779]: Missed 12 events
Jul 28 22:17:05 kubrick zed[14779]: Missed 8 events
Jul 28 22:17:06 kubrick zed[14779]: Missed 10 events
Jul 28 22:17:06 kubrick zed[14779]: Missed 8 events
Jul 28 22:17:07 kubrick zed[14779]: Missed 10 events
Jul 28 22:17:07 kubrick zed[14779]: Missed 12 events
Jul 28 22:17:08 kubrick zed[14779]: Missed 2 events
Jul 28 22:17:08 kubrick zed[14779]: Missed 4 events
Jul 28 22:17:08 kubrick zed[14779]: Missed 12 events
Jul 28 22:17:08 kubrick zed[14779]: Missed 12 events
Jul 28 22:17:09 kubrick zed[14779]: Missed 2 events
Jul 28 22:17:09 kubrick zed[14779]: Missed 4 events
Jul 28 22:30:04 kubrick zed[14779]: Missed 12 events
Jul 28 22:30:05 kubrick zed[14779]: Missed 6 events
Jul 28 22:30:05 kubrick zed[14779]: Missed 12 events
Jul 28 22:30:05 kubrick zed[14779]: Missed 12 events
Jul 28 22:30:06 kubrick zed[14779]: Missed 6 events
Jul 28 22:30:07 kubrick zed[14779]: Missed 12 events
Jul 28 22:30:07 kubrick zed[14779]: Missed 6 events
Jul 28 22:30:07 kubrick zed[14779]: Missed 12 events
Jul 28 22:30:08 kubrick zed[14779]: Missed 6 events
Jul 28 22:30:08 kubrick zed[14779]: Missed 12 events
Jul 28 22:30:09 kubrick zed[14779]: Missed 12 events
Jul 28 22:30:09 kubrick zed[14779]: Missed 6 events
@behlendorf (Contributor)

The warnings mean that the ZFS kernel modules are generating events faster than the ZED is able to consume and process them. This isn't harmful in itself, but it could result in more important events, such as IO/checksum errors, getting missed.

The 0.7.0 release does generate additional events which were not present in 0.6.5.x, such as the command history events. Depending on your workload, you might be generating a lot of these and overwhelming the ZED, which still processes events sequentially.

What are the vast majority of events you're seeing?

@SenH (Contributor, Author) commented Jul 28, 2017

Indeed, it's sysevent.fs.zfs.history_event; in the verbose events, history_internal_name is one of "snapshot" | "set" | "destroy".

sudo zpool events | wc -l
514
sudo zpool events | cut -d' ' -f5 | sort -u

sysevent.fs.zfs.history_event
sudo zpool events
TIME                           CLASS
Jul 28 2017 21:15:05.761004066 sysevent.fs.zfs.history_event
Jul 28 2017 21:15:05.765003871 sysevent.fs.zfs.history_event
Jul 28 2017 21:15:05.765003871 sysevent.fs.zfs.history_event
[…]
Jul 28 2017 23:00:08.320408919 sysevent.fs.zfs.history_event
Jul 28 2017 23:00:08.580396800 sysevent.fs.zfs.history_event
Jul 28 2017 23:00:08.580396800 sysevent.fs.zfs.history_event
Jul 28 2017 23:00:08.820385608 sysevent.fs.zfs.history_event
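
To break down which internal operations dominate, a tally along these lines should work (a sketch; it assumes the verbose output contains history_internal_name lines as described above):

sudo zpool events -v | grep history_internal_name | sort | uniq -c | sort -rn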

@behlendorf (Contributor)

If the generation of these events is bursty, then increasing /sys/module/zfs/parameters/zfs_zevent_len_max to something large, say 10,000, might give the ZED enough time to process the backlog and be an OK workaround for now. This will only cost you a small amount of memory.
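
For reference, a minimal sketch of that workaround (the modprobe.d path is an assumption; adjust for your distribution):

# runtime change, takes effect immediately but is lost on module reload
echo 10000 | sudo tee /sys/module/zfs/parameters/zfs_zevent_len_max
# persist across reboots/module reloads (assumed config path)
echo "options zfs zfs_zevent_len_max=10000" | sudo tee -a /etc/modprobe.d/zfs.conf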

In the past we've also talked about allowing the ZED to process events concurrently; it might be time to implement that.

If you're OK with a quick fix and rebuilding ZFS, you can disable the new history events by commenting out the code which posts them.

diff --git a/module/zfs/spa_history.c b/module/zfs/spa_history.c
index 73571c0..bc3ae0c 100644
--- a/module/zfs/spa_history.c
+++ b/module/zfs/spa_history.c
@@ -336,7 +336,9 @@ spa_history_log_sync(void *arg, dmu_tx_t *tx)
                 * full command line arguments, requiring the consumer to know
                 * how to parse and understand zfs(1M) command invocations.
                 */
+#if 0
                spa_history_log_notify(spa, nvl);
+#endif
        } else if (nvlist_exists(nvl, ZPOOL_HIST_IOCTL)) {
                zfs_dbgmsg("ioctl %s",
                    fnvlist_lookup_string(nvl, ZPOOL_HIST_IOCTL));

@behlendorf added the Component: ZED ZFS Event Daemon label on Jul 28, 2017
@SenH (Contributor, Author) commented Jul 28, 2017

@behlendorf I will try out your suggestions. Thank you so much for providing assistance!

tonyhutter added a commit to tonyhutter/zfs that referenced this issue Aug 1, 2017
While investigating openzfs#6425 I
noticed that ioctl ZIOs were not setting zio->io_delay correctly.  They
would set the start time in zio_vdev_io_start(), but never set the end
time in zio_vdev_io_done(), since ioctls skip it and go straight to
zio_done().  This was causing spurious "delayed IO" events to appear,
which would eventually get rate-limited and displayed as
"Missed events" messages in zed.

To get around the problem, this patch only sets zio->io_delay for read
and write ZIOs, since that's all we care about anyway.

Signed-off-by: Tony Hutter <hutter2@llnl.gov>
@tonyhutter (Contributor)

Fix out for review: #6440

@behlendorf (Contributor)

@SenH, if it wouldn't be too disruptive, it would be great if you could verify @tonyhutter's proposed fix in #6440. You can pretty easily check whether you're hitting the same issue by looking for a large number of delay events in the zpool events history.
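
For example, a rough check along these lines (a sketch; the delay event class is assumed to be ereport.fs.zfs.delay):

sudo zpool events -H | grep -c 'ereport.fs.zfs.delay'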

@SenH (Contributor, Author) commented Aug 2, 2017

@behlendorf After #6440 I no longer have Missed events in syslog, but I'm not seeing delay events either.

sudo zpool events -H | cut -d' ' -f6 | sort | uniq -c
      2 sysevent.fs.zfs.config_sync
   1205 sysevent.fs.zfs.history_event
      1 sysevent.fs.zfs.pool_import

@behlendorf (Contributor)

Great. That's exactly what we'd expect, thanks.

behlendorf pushed a commit that referenced this issue Aug 2, 2017
While investigating #6425 I
noticed that ioctl ZIOs were not setting zio->io_delay correctly.  They
would set the start time in zio_vdev_io_start(), but never set the end
time in zio_vdev_io_done(), since ioctls skip it and go straight to
zio_done().  This was causing spurious "delayed IO" events to appear,
which would eventually get rate-limited and displayed as
"Missed events" messages in zed.

To get around the problem, this patch only sets zio->io_delay for read
and write ZIOs, since that's all we care about anyway.

Reviewed-by: Giuseppe Di Natale <dinatale2@llnl.gov>
Reviewed-by: George Melikov <mail@gmelikov.ru>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Tony Hutter <hutter2@llnl.gov>
Closes #6425 
Closes #6440