darwin: optimize uv_fs_write? #686

bnoordhuis opened this Issue Jan 17, 2013 · 3 comments




The core of uv_fs_write() currently looks like this:

static ssize_t uv__fs_write(uv_fs_t* req) {
  ssize_t r;

  /* Serialize writes on OS X, concurrent write() and pwrite() calls result in
   * data loss. We can't use a per-file descriptor lock, the descriptor may be
   * a dup().
   */
#if defined(__APPLE__)
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  pthread_mutex_lock(&lock);
#endif

  if (req->off < 0)
    r = write(req->file, req->buf, req->len);
  else
    r = pwrite(req->file, req->buf, req->len, req->off);

#if defined(__APPLE__)
  pthread_mutex_unlock(&lock);
#endif

  return r;
}

libuv uses a global write lock because the XNU versions of write() and pwrite() are not thread-safe.

Global locks are bad for throughput, obviously. Suggested alternative approach:

  1. Have a hash table of locks.
  2. Look up the inode with fstat().
  3. Use the inode as a key in the aforementioned hash table (i.e. granular locking).

The right level of granularity would have to be established empirically; benchmarks should show whether it actually has a positive impact.

But is it worth the time? I hope and assume no one uses OS X for serious workloads.

@bnoordhuis bnoordhuis was assigned Jan 17, 2013
indutny commented Jan 17, 2013

I believe you're right, a hash table in libuv... meh.

Though, if you think it's really needed - I can implement one.


bnoordhuis commented Jan 17, 2013

The hash table could be as simple as pthread_mutex_t locks[32], with lookup done as lock = &locks[inode % ARRAY_SIZE(locks)].

But don't waste time on it, it's a nice-to-have. Node.js has had the Big Lock since forever and no one's complained about it (much).

saghul commented Nov 26, 2014

Closing old stalled issues, please reopen at https://github.com/libuv/libuv if still needed.

@saghul saghul closed this Nov 26, 2014