
Arc+l2arc #1612

Closed
wants to merge 3 commits

Conversation

behlendorf
Contributor

A stack of patches designed to improve the behavior of the ARC and L2ARC under memory pressure. These patches significantly improve the behavior for the workloads I've tested, but I'd like to see additional testing done with real workloads.

There are some patches mixed in here which should probably have their own pull requests, but I wanted to put everything out there for some initial review. Comments for each patch should clearly describe the intent and reasoning behind the change.

@ryao
Contributor

ryao commented Jul 27, 2013

I have not studied L2ARC in enough depth to comment on the L2ARC changes, but everything else looks fine to me.

It might be a good idea to send some of these patches to Illumos. With that said, I opened pull requests with related patches that I have not sent to the Illumos developers either, so that advice applies as much to me as it does to you.

@DeHackEd
Contributor

# zpool iostat -v 2
                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
whoopass1    5.73T  4.27T    373     18  4.11M   638K
  Array      5.73T  4.27T    373     18  4.11M   638K
cache            -      -      -      -      -      -
  SSD-part1  16.0E   222G    153      3  2.57M   167K
-----------  -----  -----  -----  -----  -----  -----
                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
whoopass1    5.73T  4.27T    776     81  3.14M  2.51M
  Array      5.73T  4.27T    776     81  3.14M  2.51M
cache            -      -      -      -      -      -
  SSD-part1  16.0E   222G    480      8  7.50M   296K
-----------  -----  -----  -----  -----  -----  -----
# zpool list -v
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
whoopass1    10T  5.73T  4.27T    57%  1.00x  ONLINE  -
  Array    10T  5.73T  4.27T         -
cache      -      -      -      -      -      -
  SSD-part1   111G  16.0E   222G         -

That's a new one. SSD is only 120 GB.

In (possibly) related news, I did modify zfs_arc_max while the system was running and the ARC was full. It didn't immediately take, so I also ran echo 3 > /proc/sys/vm/drop_caches, which seemed to do the trick. That's the only odd thing I've done to this system since bootup.
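For reference, the runtime change amounts to writing the module parameter and then dropping caches. A minimal sketch, assuming the writable-module-options patch from this branch; the 8G figure below is only illustrative, not the value actually used:

# echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
# echo 3 > /proc/sys/vm/drop_caches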

The SPL also has the patch from spl#268 applied.

@edillmann mentioned this pull request Jul 31, 2013
@behlendorf
Contributor Author

The safest commits on this branch were merged while the rest continue to undergo testing:

bce45ec Make arc+l2arc module options writable
c93504f Change l2arc_norw default to zero
6e1d727 Fix inaccurate arcstat_l2_hdr_size calculations

@DeHackEd Thanks for noticing that; it looks like we've perhaps rolled the counter negative. It should be harmless, but I'll get it fixed in the next refresh of the branch.
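For context on the 16.0E figure: a 64-bit byte counter that wraps just below zero ends up near 2^64 bytes, and zpool prints sizes in binary units, so a wrapped counter renders as roughly 16 EiB:

# echo '2^64 / 2^60' | bc
16

In other words, the bogus 16.0E alloc value is exactly what a small negative number looks like when printed as an unsigned 64-bit byte count.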

@behlendorf
Contributor Author

Merged from previous patch stack.

78d7a5d Write dirty inodes on close

Decrease the minimum ARC size from 1/32 of total system memory
(or 64MB) to a much smaller 4MB.

1) Large systems with over 1TB of memory are being deployed
   and reserving 1/32 of this memory (32GB) as the minimum
   requirement is overkill.

2) Tiny systems like the raspberry pi may only have 256MB of
   memory in which case 64MB is far too large.

The ARC should be reclaimable if the VFS determines it needs
the memory for some other purpose.  If you want to ensure the
ARC is never completely reclaimed due to memory pressure you
may still set a larger value with zfs_arc_min.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
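As a sketch of that last point: zfs_arc_min can be set persistently as a module option, and with the writable-module-options change in this stack (bce45ec) it can also be raised at runtime. The 512M value below is only an example:

# echo "options zfs zfs_arc_min=536870912" >> /etc/modprobe.d/zfs.conf
# echo $((512 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_min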
Currently all l2arc headers count against the meta limit and
the only way to release them is to cleanup an l2arc device.
This means that given a large enough l2arc device the headers
can accumulate until they push out all other meta data.

While we absolutely don't want to commonly discard these headers,
because it invalidates any blocks written to the l2arc device,
it should be possible to reclaim them.  If memory is desperately
needed by an application it's preferable to reclaim this memory
and pay the performance penalty.

To do this an l2arc_evict_headers() function has been added.  It
will attempt to reclaim N bytes of l2arc headers by repeatedly
advancing the write hand and evicting blocks from the l2arc.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
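The pressure described above is easy to watch from the kstats: comparing the l2arc header footprint against the meta limit shows how much of the metadata budget the headers are consuming. Assuming the usual /proc/spl/kstat interface:

# grep -E 'l2_hdr_size|arc_meta' /proc/spl/kstat/zfs/arcstats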
A GPF may occur if an l2arc buffer is evicted before the write
completes for the l2arc buffer.  The l2arc_write_done() function
unconditionally starts walking the list at the write head so we
must verify that l2arc_evict() has not removed it from the list.

This long-standing issue was exposed by allowing arc_adjust() to
evict entries from the l2arc due to memory pressure.  Prior to
this change buffers would only be evicted immediately in front
of the write hand or during l2arc cleanup.  In neither case could
we wrap around and remove the write head as part of l2arc_evict().

PID: 3182   TASK: ffff88020df28080  CPU: 0   COMMAND: "z_null_int/0"
  [exception RIP: l2arc_write_done+128]
 6 [ffff880210e97c58] zio_done at ffffffffa0597e0c [zfs]
 7 [ffff880210e97cd8] zio_done at ffffffffa05982a7 [zfs]
 8 [ffff880210e97d58] zio_done at ffffffffa05982a7 [zfs]
 9 [ffff880210e97dd8] zio_execute at ffffffffa05948a3 [zfs]
10 [ffff880210e97e18] taskq_thread at ffffffffa041d8d8 [spl]
11 [ffff880210e97ee8] kthread at ffffffff81090d26
12 [ffff880210e97f48] kernel_thread at ffffffff8100c14a

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
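For reference, a backtrace in the format shown above comes from the crash(8) utility run against the captured vmcore; a sketch, with illustrative paths and using the PID from the trace:

# crash /usr/lib/debug/lib/modules/$(uname -r)/vmlinux /var/crash/vmcore
crash> bt 3182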
@behlendorf
Contributor Author

Closing in favor of #1967.
