Linux AIO Support
nfsd uses do_readv_writev() to implement fops->read and fops->write.
do_readv_writev() will attempt to read/write using fops->aio_read and
fops->aio_write, but it will fall back to fops->read and fops->write when
AIO is not available. However, the fallback performs a call for each
individual data page. Since our default recordsize is 128KB, sequential
operations over NFS generate 32 DMU transactions where only 1
transaction was needed. That is unnecessary overhead, so we implement
fops->aio_read and fops->aio_write to eliminate it.

ZFS originated in OpenSolaris, where the AIO API is implemented entirely
in userland's libc by intelligently mapping the operations to VOP_READ,
VOP_WRITE and VOP_FSYNC.  Linux implements AIO inside the kernel itself.
Linux filesystems therefore must implement their own AIO logic, and
nearly all of them implement fops->aio_write synchronously. Consequently,
they do not implement aio_fsync(). However, since the ZPL works by
mapping Linux's VFS calls to the functions implementing Illumos' VFS
operations, we instead implement AIO in the kernel by mapping the
operations to the VOP_READ, VOP_WRITE and VOP_FSYNC equivalents. We
therefore also implement fops->aio_fsync.

One might be inclined to make our fops->aio_write implementation
synchronous to make software that expects this behavior safe. However,
there are several reasons not to do this:

1. Other platforms do not implement aio_write() synchronously, and since
the majority of userland software using AIO should be cross-platform,
expectations of synchronous behavior should not be a problem.

2. We would hurt the performance of programs that use POSIX interfaces
properly while simultaneously encouraging the creation of more
non-compliant software.

3. The broader community concluded that userland software should be
patched to properly use POSIX interfaces instead of implementing hacks
in filesystems to cater to broken software. This concept is best
described as the O_PONIES debate.

4. Making an asynchronous write synchronous is a non sequitur.

Any software dependent on synchronous aio_write() behavior will, in the
event of a kernel panic / system failure, lose at most the last
zfs_txg_timeout seconds of writes on ZFSOnLinux; the default is 5
seconds. This seems like a reasonable consequence of using non-compliant
software.

It should be noted that this is also a problem in the kernel itself,
where nfsd does not pass O_SYNC on the files it opens and instead
relies on an open()/write()/close() sequence to enforce synchronous
behavior, even though the flush is only guaranteed on the last close.

Exporting any filesystem that does not implement AIO via NFS risks data
loss in the event of a kernel panic / system failure when something else
is also accessing the file. Exporting any filesystem that implements
AIO the way this patch does bears a similar risk. However, it seems
reasonable to forgo crippling our AIO implementation in favor of
developing patches to fix this problem in Linux's nfsd, for the reasons
stated earlier. In the interim, the risk will remain. Failing to
implement AIO would not change the problem that nfsd created, so there is
no reason for nfsd's mistake to block our implementation of AIO.

It also should be noted that `aio_cancel()` will always return
`AIO_NOTCANCELED` under this implementation. It would be possible to
implement aio_cancel by deferring work to taskqs and using
`kiocb_set_cancel_fn()` to set a callback for cancelling work sent to
taskqs, but the simpler approach is allowed by the specification:

```
Which operations are cancelable is implementation-defined.
```

http://pubs.opengroup.org/onlinepubs/009695399/functions/aio_cancel.html

According to a recursive grep of my system's `/usr/src/debug`, the only
programs on my system capable of using `aio_cancel()` are QEMU, beecrypt
and fio. That suggests that `aio_cancel()` users are rare. Implementing
aio_cancel() is left to a future date, when it is clear that there are
consumers whose benefit would justify the work.

Lastly, it is important to know that the handling of iovec updates
differs between Illumos and Linux in the implementation of read/write. On
Linux, it is the VFS' responsibility, while on Illumos it is the
filesystem's. We take the intermediate solution of copying the iovec
so that the ZFS code can update it as on Solaris while leaving the
originals alone. This imposes some overhead. We could always revisit
this should profiling show that the allocations are a problem.

Signed-off-by: Richard Yao <ryao@gentoo.org>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #223
Closes #2373
ryao authored and behlendorf committed Sep 5, 2014
1 parent 1ca56e6 commit cd3939c
Showing 4 changed files with 126 additions and 39 deletions.
7 changes: 5 additions & 2 deletions include/sys/zpl.h
@@ -33,6 +33,7 @@
 #include <linux/writeback.h>
 #include <linux/falloc.h>
 #include <linux/task_io_accounting_ops.h>
+#include <linux/aio.h>
 
 /* zpl_inode.c */
 extern void zpl_vap_init(vattr_t *vap, struct inode *dir,
@@ -46,9 +47,11 @@ extern dentry_operations_t zpl_dentry_operations;
 
 /* zpl_file.c */
 extern ssize_t zpl_read_common(struct inode *ip, const char *buf,
-    size_t len, loff_t pos, uio_seg_t segment, int flags, cred_t *cr);
+    size_t len, loff_t *ppos, uio_seg_t segment, int flags,
+    cred_t *cr);
 extern ssize_t zpl_write_common(struct inode *ip, const char *buf,
-    size_t len, loff_t pos, uio_seg_t segment, int flags, cred_t *cr);
+    size_t len, loff_t *ppos, uio_seg_t segment, int flags,
+    cred_t *cr);
 extern long zpl_fallocate_common(struct inode *ip, int mode,
     loff_t offset, loff_t len);
2 changes: 1 addition & 1 deletion module/zfs/zfs_replay.c
@@ -673,7 +673,7 @@ zfs_replay_write(zfs_sb_t *zsb, lr_write_t *lr, boolean_t byteswap)
 		zsb->z_replay_eof = eod;
 	}
 
-	written = zpl_write_common(ZTOI(zp), data, length, offset,
+	written = zpl_write_common(ZTOI(zp), data, length, &offset,
 	    UIO_SYSSPACE, 0, kcred);
 	if (written < 0)
 		error = -written;
150 changes: 116 additions & 34 deletions module/zfs/zpl_file.c
@@ -115,6 +115,12 @@ zpl_fsync(struct file *filp, struct dentry *dentry, int datasync)
 	return (error);
 }
 
+static int
+zpl_aio_fsync(struct kiocb *kiocb, int datasync)
+{
+	struct file *filp = kiocb->ki_filp;
+	return (zpl_fsync(filp, filp->f_path.dentry, datasync));
+}
 #elif defined(HAVE_FSYNC_WITHOUT_DENTRY)
 /*
  * Linux 2.6.35 - 3.0 API,
@@ -137,6 +143,11 @@ zpl_fsync(struct file *filp, int datasync)
 	return (error);
 }
 
+static int
+zpl_aio_fsync(struct kiocb *kiocb, int datasync)
+{
+	return (zpl_fsync(kiocb->ki_filp, datasync));
+}
 #elif defined(HAVE_FSYNC_RANGE)
 /*
  * Linux 3.1 - 3.x API,
@@ -163,85 +174,133 @@ zpl_fsync(struct file *filp, loff_t start, loff_t end, int datasync)
 
 	return (error);
 }
+
+static int
+zpl_aio_fsync(struct kiocb *kiocb, int datasync)
+{
+	return (zpl_fsync(kiocb->ki_filp, kiocb->ki_pos,
+	    kiocb->ki_pos + kiocb->ki_nbytes, datasync));
+}
 #else
 #error "Unsupported fops->fsync() implementation"
 #endif
 
-ssize_t
-zpl_read_common(struct inode *ip, const char *buf, size_t len, loff_t pos,
-    uio_seg_t segment, int flags, cred_t *cr)
+static inline ssize_t
+zpl_read_common_iovec(struct inode *ip, const struct iovec *iovp, size_t count,
+    unsigned long nr_segs, loff_t *ppos, uio_seg_t segment,
+    int flags, cred_t *cr)
 {
-	int error;
 	ssize_t read;
-	struct iovec iov;
 	uio_t uio;
+	int error;
 
-	iov.iov_base = (void *)buf;
-	iov.iov_len = len;
-
-	uio.uio_iov = &iov;
-	uio.uio_resid = len;
-	uio.uio_iovcnt = 1;
-	uio.uio_loffset = pos;
+	uio.uio_iov = (struct iovec *)iovp;
+	uio.uio_resid = count;
+	uio.uio_iovcnt = nr_segs;
+	uio.uio_loffset = *ppos;
 	uio.uio_limit = MAXOFFSET_T;
 	uio.uio_segflg = segment;
 
 	error = -zfs_read(ip, &uio, flags, cr);
 	if (error < 0)
 		return (error);
 
-	read = len - uio.uio_resid;
+	read = count - uio.uio_resid;
+	*ppos += read;
 	task_io_account_read(read);
 
 	return (read);
 }

+inline ssize_t
+zpl_read_common(struct inode *ip, const char *buf, size_t len, loff_t *ppos,
+    uio_seg_t segment, int flags, cred_t *cr)
+{
+	struct iovec iov;
+
+	iov.iov_base = (void *)buf;
+	iov.iov_len = len;
+
+	return (zpl_read_common_iovec(ip, &iov, len, 1, ppos, segment,
+	    flags, cr));
+}

 static ssize_t
 zpl_read(struct file *filp, char __user *buf, size_t len, loff_t *ppos)
 {
 	cred_t *cr = CRED();
 	ssize_t read;
 
 	crhold(cr);
-	read = zpl_read_common(filp->f_mapping->host, buf, len, *ppos,
+	read = zpl_read_common(filp->f_mapping->host, buf, len, ppos,
 	    UIO_USERSPACE, filp->f_flags, cr);
 	crfree(cr);
 
-	if (read < 0)
-		return (read);
-
-	*ppos += read;
-
 	return (read);
 }
 
+static ssize_t
+zpl_aio_read(struct kiocb *kiocb, const struct iovec *iovp,
+    unsigned long nr_segs, loff_t pos)
+{
+	cred_t *cr = CRED();
+	struct file *filp = kiocb->ki_filp;
+	size_t count = kiocb->ki_nbytes;
+	ssize_t read;
+	size_t alloc_size = sizeof (struct iovec) * nr_segs;
+	struct iovec *iov_tmp = kmem_alloc(alloc_size, KM_SLEEP);
+	bcopy(iovp, iov_tmp, alloc_size);
+
+	ASSERT(iovp);
+
+	crhold(cr);
+	read = zpl_read_common_iovec(filp->f_mapping->host, iov_tmp, count,
+	    nr_segs, &kiocb->ki_pos, UIO_USERSPACE, filp->f_flags, cr);
+	crfree(cr);
+
+	kmem_free(iov_tmp, alloc_size);
+
+	return (read);
+}

-ssize_t
-zpl_write_common(struct inode *ip, const char *buf, size_t len, loff_t pos,
-    uio_seg_t segment, int flags, cred_t *cr)
+static inline ssize_t
+zpl_write_common_iovec(struct inode *ip, const struct iovec *iovp, size_t count,
+    unsigned long nr_segs, loff_t *ppos, uio_seg_t segment,
+    int flags, cred_t *cr)
 {
-	int error;
 	ssize_t wrote;
-	struct iovec iov;
 	uio_t uio;
+	int error;
 
-	iov.iov_base = (void *)buf;
-	iov.iov_len = len;
-
-	uio.uio_iov = &iov;
-	uio.uio_resid = len,
-	uio.uio_iovcnt = 1;
-	uio.uio_loffset = pos;
+	uio.uio_iov = (struct iovec *)iovp;
+	uio.uio_resid = count;
+	uio.uio_iovcnt = nr_segs;
+	uio.uio_loffset = *ppos;
 	uio.uio_limit = MAXOFFSET_T;
 	uio.uio_segflg = segment;
 
 	error = -zfs_write(ip, &uio, flags, cr);
 	if (error < 0)
 		return (error);
 
-	wrote = len - uio.uio_resid;
+	wrote = count - uio.uio_resid;
+	*ppos += wrote;
 	task_io_account_write(wrote);
 
 	return (wrote);
 }
+
+inline ssize_t
+zpl_write_common(struct inode *ip, const char *buf, size_t len, loff_t *ppos,
+    uio_seg_t segment, int flags, cred_t *cr)
+{
+	struct iovec iov;
+
+	iov.iov_base = (void *)buf;
+	iov.iov_len = len;
+
+	return (zpl_write_common_iovec(ip, &iov, len, 1, ppos, segment,
+	    flags, cr));
+}
 
 static ssize_t
 zpl_write(struct file *filp, const char __user *buf, size_t len, loff_t *ppos)
@@ -250,14 +309,34 @@ zpl_write(struct file *filp, const char __user *buf, size_t len, loff_t *ppos)
 	ssize_t wrote;
 
 	crhold(cr);
-	wrote = zpl_write_common(filp->f_mapping->host, buf, len, *ppos,
+	wrote = zpl_write_common(filp->f_mapping->host, buf, len, ppos,
 	    UIO_USERSPACE, filp->f_flags, cr);
 	crfree(cr);
 
-	if (wrote < 0)
-		return (wrote);
-
 	return (wrote);
 }

+static ssize_t
+zpl_aio_write(struct kiocb *kiocb, const struct iovec *iovp,
+    unsigned long nr_segs, loff_t pos)
+{
+	cred_t *cr = CRED();
+	struct file *filp = kiocb->ki_filp;
+	size_t count = kiocb->ki_nbytes;
+	ssize_t wrote;
+	size_t alloc_size = sizeof (struct iovec) * nr_segs;
+	struct iovec *iov_tmp = kmem_alloc(alloc_size, KM_SLEEP);
+	bcopy(iovp, iov_tmp, alloc_size);
+
+	ASSERT(iovp);
+
+	crhold(cr);
+	wrote = zpl_write_common_iovec(filp->f_mapping->host, iov_tmp, count,
+	    nr_segs, &kiocb->ki_pos, UIO_USERSPACE, filp->f_flags, cr);
+	crfree(cr);
+
+	kmem_free(iov_tmp, alloc_size);
+
-	*ppos += wrote;
 	return (wrote);
 }

@@ -646,8 +725,11 @@ const struct file_operations zpl_file_operations = {
 	.llseek		= zpl_llseek,
 	.read		= zpl_read,
 	.write		= zpl_write,
+	.aio_read	= zpl_aio_read,
+	.aio_write	= zpl_aio_write,
 	.mmap		= zpl_mmap,
 	.fsync		= zpl_fsync,
+	.aio_fsync	= zpl_aio_fsync,
 #ifdef HAVE_FILE_FALLOCATE
 	.fallocate	= zpl_fallocate,
 #endif /* HAVE_FILE_FALLOCATE */
6 changes: 4 additions & 2 deletions module/zfs/zpl_xattr.c
@@ -239,6 +239,7 @@ zpl_xattr_get_dir(struct inode *ip, const char *name, void *value,
 {
 	struct inode *dxip = NULL;
 	struct inode *xip = NULL;
+	loff_t pos = 0;
 	int error;
 
 	/* Lookup the xattr directory */
@@ -261,7 +262,7 @@
 		goto out;
 	}
 
-	error = zpl_read_common(xip, value, size, 0, UIO_SYSSPACE, 0, cr);
+	error = zpl_read_common(xip, value, size, &pos, UIO_SYSSPACE, 0, cr);
 out:
 	if (xip)
 		iput(xip);
@@ -357,6 +358,7 @@ zpl_xattr_set_dir(struct inode *ip, const char *name, const void *value,
 	ssize_t wrote;
 	int lookup_flags, error;
 	const int xattr_mode = S_IFREG | 0644;
+	loff_t pos = 0;
 
 	/*
 	 * Lookup the xattr directory. When we're adding an entry pass
@@ -407,7 +409,7 @@ zpl_xattr_set_dir(struct inode *ip, const char *name, const void *value,
 	if (error)
 		goto out;
 
-	wrote = zpl_write_common(xip, value, size, 0, UIO_SYSSPACE, 0, cr);
+	wrote = zpl_write_common(xip, value, size, &pos, UIO_SYSSPACE, 0, cr);
 	if (wrote < 0)
 		error = wrote;

5 comments on commit cd3939c

@ilovezfs (Contributor)
Should this commit make it possible to drop innodb_use_native_aio=0 from /etc/mysql/my.cnf? I tried removing it, but mysql would not work unless I put it back.

@ryao (Contributor, Author) commented on cd3939c Sep 8, 2014
It should.

@behlendorf (Contributor)
Yes, it should assuming innodb only requires AIO support. If it also requires something like Direct IO which isn't yet supported then it won't.

@ryao (Contributor, Author) commented on cd3939c Sep 8, 2014
@ilovezfs It turns out that MySQL's innodb_use_native_aio also uses Direct IO, as @behlendorf had suspected. This is discussed in issue #224.

@sayap commented on cd3939c Jan 12, 2021

On Percona Xtradb Cluster (PXC) 5.6, the performance is pretty bad with innodb_use_native_aio = ON.

With innodb_use_native_aio = ON, there will be 2 background threads that will call zpl_iter_write via the io_submit syscall, i.e. buf_flush_lru_manager_thread and buf_flush_page_cleaner_thread. Due to sync reads from read-modify-write or metadata cache miss, these 2 threads most likely won't be able to saturate the I/O, and so dirty pages flushing may start to fall behind, which then leads to the dreaded "InnoDB: Warning: difficult to find free blocks".

With innodb_use_native_aio = OFF, there can be an arbitrary number of io_handler_thread instances that will call zpl_iter_write via the pwrite64 syscall, as per the innodb_write_io_threads parameter. To saturate the I/O, we can just add more I/O threads.

This was tested with PXC 5.6 on ZFS 2.0.0, but I think the behavior would be similar on other flavors/versions of MySQL, as the architecture of InnoDB would remain roughly the same.

EDIT: After doing a bunch more testing with sysbench-tpcc, innodb_use_native_aio = ON appears to have slightly better overall performance compared to innodb_use_native_aio = OFF, with both ZFS 2.0.0 and 2.1.0. While innodb_use_native_aio = OFF might be able to flush dirty pages 5~10% faster under certain configurations, having all the extra I/O threads also takes away CPU time, I think.
