Cannot persist large files to ext2 disk correctly #14

Closed
@rick-masters

Description

There is a bug that prevents writing large files to an ext2 disk correctly.

This can be demonstrated with the following commands:

dd if=/dev/random of=bigfile bs=1024 count=150000
sha256sum bigfile
sync
reboot
# After login:
sha256sum bigfile

Result: you will either get an I/O error or a different checksum.
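
For context, a quick sketch of why 150000 blocks is enough to reach triple indirection (assuming the default 1 KiB ext2 block size, so each indirect block holds 1024 / 4 = 256 entries; the thresholds below are my own arithmetic, not taken from the Fiwix sources):

#include <stdio.h>

int main(void)
{
	unsigned long per_ind = 1024 / 4;	/* 256 entries per 1 KiB indirect block */
	unsigned long direct = 12;		/* EXT2_NDIR_BLOCKS */
	unsigned long sind_end = direct + per_ind;		/* 268 */
	unsigned long dind_end = sind_end + per_ind * per_ind;	/* 65804 */

	printf("triple indirection starts at file block %lu\n", dind_end);
	printf("a 150000-block file ends at block 149999, well inside it\n");
	return 0;
}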

The problem is in the calculation of a block index for a triple-indirect block in fs/ext2/inode.c:

Fiwix/fs/ext2/inode.c, lines 326 to 333 at 6e036aa:

if(level == EXT2_TIND_BLOCK) {
	if(!(buf3 = bread(i->dev, indblock[block], blksize))) {
		printk("%s(): returning -EIO\n", __FUNCTION__);
		brelse(buf);
		return -EIO;
	}
	tindblock = (__blk_t *)buf3->data;
	block = tindblock[tblock / BLOCKS_PER_IND_BLOCK(i->sb)];

Here, tblock has not been adjusted to account for the number of blocks skipped by the previous indirection level of the traversal.

I believe this is the appropriate code to adjust tblock before calculating the block index:

	tindblock = (__blk_t *)buf3->data;
	tblock -= BLOCKS_PER_DIND_BLOCK(i->sb) * block;
	block = tindblock[tblock / BLOCKS_PER_IND_BLOCK(i->sb)];

Without this adjustment, tblock / BLOCKS_PER_IND_BLOCK(i->sb) will exceed the valid bounds (0..255) of tindblock, so the kernel writes into memory beyond the end of the disk block's buffer. The block numbers stored at these out-of-range indices may still be readable while the buffer is in memory, but they cannot survive a reboot because the on-disk block only holds 256 entries. So, after rebooting and reloading the block from disk, indexing beyond 255 produces an invalid block number.
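
To make the overflow concrete, here is a self-contained sketch of the index arithmetic for a block in the affected range (the constants again assume a 1 KiB block size; BLOCKS_PER_DIND_BLOCK stands in for the squared value the Fiwix macro would yield):

#include <stdio.h>

#define EXT2_NDIR_BLOCKS	12
#define BLOCKS_PER_IND_BLOCK	256UL			/* 1 KiB block / 4-byte entries */
#define BLOCKS_PER_DIND_BLOCK	(256UL * 256UL)		/* 65536 */

int main(void)
{
	/* Logical file block 140000 of the 150000-block test file; it lies
	 * in the second double-indirect chunk of the triple-indirect area. */
	unsigned long tblock = 140000UL - EXT2_NDIR_BLOCKS
		- BLOCKS_PER_IND_BLOCK - BLOCKS_PER_DIND_BLOCK;	/* 74196 */

	/* Index into the triple-indirect block, computed earlier in the
	 * traversal (before the snippet quoted above). */
	unsigned long block = tblock / BLOCKS_PER_DIND_BLOCK;	/* 1 */

	/* Buggy: index the next level without adjusting tblock first. */
	printf("buggy index: %lu (valid range is 0..255)\n",
		tblock / BLOCKS_PER_IND_BLOCK);			/* 289 */

	/* Fixed: first subtract the blocks skipped at this level. */
	tblock -= BLOCKS_PER_DIND_BLOCK * block;		/* 8660 */
	printf("fixed index: %lu\n",
		tblock / BLOCKS_PER_IND_BLOCK);			/* 33 */
	return 0;
}

Note that the out-of-bounds index first appears once the file grows past block 12 + 256 + 2 * 65536 = 131340, which is why the 150000-block test file reproduces the corruption reliably.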
