[LogFS] Split large truncates into smaller chunks
Truncate would do an almost limitless amount of work without invoking
the garbage collector in between.  Split it up into more manageable,
though still large, chunks.

Signed-off-by: Joern Engel <joern@logfs.org>
Joern Engel committed Apr 20, 2010
1 parent b863907 commit b6349ac
1 changed file: fs/logfs/readwrite.c (26 additions, 8 deletions)
@@ -1837,19 +1837,37 @@ static int __logfs_truncate(struct inode *inode, u64 size)
 	return logfs_truncate_direct(inode, size);
 }
 
-int logfs_truncate(struct inode *inode, u64 size)
+/*
+ * Truncate, by changing the segment file, can consume a fair amount
+ * of resources.  So back off from time to time and do some GC.
+ * 8 or 2048 blocks should be well within safety limits even if
+ * every single block resided in a different segment.
+ */
+#define TRUNCATE_STEP	(8 * 1024 * 1024)
+int logfs_truncate(struct inode *inode, u64 target)
 {
 	struct super_block *sb = inode->i_sb;
-	int err;
+	u64 size = i_size_read(inode);
+	int err = 0;
 
-	logfs_get_wblocks(sb, NULL, 1);
-	err = __logfs_truncate(inode, size);
-	if (!err)
-		err = __logfs_write_inode(inode, 0);
-	logfs_put_wblocks(sb, NULL, 1);
+	size = ALIGN(size, TRUNCATE_STEP);
+	while (size > target) {
+		if (size > TRUNCATE_STEP)
+			size -= TRUNCATE_STEP;
+		else
+			size = 0;
+		if (size < target)
+			size = target;
+
+		logfs_get_wblocks(sb, NULL, 1);
+		err = __logfs_truncate(inode, size);
+		if (!err)
+			err = __logfs_write_inode(inode, 0);
+		logfs_put_wblocks(sb, NULL, 1);
+	}
 
 	if (!err)
-		err = vmtruncate(inode, size);
+		err = vmtruncate(inode, target);
 
 	/* I don't trust error recovery yet. */
 	WARN_ON(err);
