slowpeek/unmkinitramfs-turbo

About

This repo provides improved versions of unmkinitramfs from Debian/Ubuntu:

- unmkinitramfs-classic: the original tool with the -s, --scan option added
- unmkinitramfs-classic-turbo: the improved tool with minimal changes; runs on bash, requires xxd
- unmkinitramfs-turbo: the improved tool with many changes; runs on bash, requires xxd and file

Scan mode

The -s, --scan option makes the tools unpack nothing and instead print the calculated offset and size of each embedded cpio archive. For the last entry only the offset is printed. This calculation step is the only non-trivial part of unpacking an initrd. Sample run:

> unmkinitramfs-classic -s initrd/ubuntu-22.04.4
0 77312
77312 7208960
7286272

Given more than one -s option (-ss for short), unmkinitramfs-turbo additionally prints each chunk's format:

> unmkinitramfs-turbo -ss initrd/ubuntu-22.04.4
cpio   0 77312
cpio   77312 7208960
zstd   7286272
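
For illustration, those numbers are all one needs to take the initrd apart by hand. A minimal sketch, assuming the ubuntu-22.04.4 layout printed above and GNU coreutils, cpio and zstd available; the output directory names are made up:

initrd=initrd/ubuntu-22.04.4

unpack_chunk() {
    # unpack_chunk <dir>: extract a cpio stream from stdin into <dir>
    mkdir -p "$1"
    ( cd "$1" && cpio -id --no-absolute-filenames --quiet )
}

# uncompressed cpio at offset 0, size 77312
head -c 77312 "$initrd" | unpack_chunk early1

# uncompressed cpio at offset 77312, size 7208960
tail -c +$(( 77312 + 1 )) "$initrd" | head -c 7208960 | unpack_chunk early2

# zstd-compressed cpio from offset 7286272 to the end of the file
tail -c +$(( 7286272 + 1 )) "$initrd" | zstd -dc | unpack_chunk main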

Performance boost

The benchmarks are courtesy of hyperfine.
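
The exact invocations are not recorded here; a comparison like the tables below can be reproduced with something along these lines (paths as in the tables):

> hyperfine --warmup 1 'unmkinitramfs-classic -s ubuntu-22.04.4' 'unmkinitramfs-turbo -s ubuntu-22.04.4'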

A generic initrd contains a few uncompressed cpio archives followed by a final compressed one. Since cpio has no archive-level size header, the scripts have to walk the header of every file in the uncompressed archives to find where each archive ends.
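
To give an idea of the work involved: in the newc format used by initramfs, each member starts with a 110-byte ASCII header followed by the file name and the file data, both padded to 4-byte boundaries, and the archive ends with a member named TRAILER!!!. A sketch of the per-member arithmetic (offsets relative to the start of the archive; field names per cpio(5)):

# next_member_offset <start> <namesize> <filesize>
# <start> is the member's offset, <namesize>/<filesize> come from its header
next_member_offset() {
    local start=$1 namesize=$2 filesize=$3
    local hdr=110                                    # fixed newc header length
    local name_end=$(( start + hdr + namesize ))
    local data_start=$(( (name_end + 3) / 4 * 4 ))   # name is padded to 4 bytes
    local data_end=$(( data_start + filesize ))
    echo $(( (data_end + 3) / 4 * 4 ))               # data is padded to 4 bytes
}

This has to be repeated for every member until the trailer is reached, which is why the number of files matters so much.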

The original tool works well when the uncompressed archives contain only a few files, but it gets really slow when there are many. For example, the initrd of Ubuntu 22.04.4 contains two uncompressed cpio archives, each holding a single file (AMD and Intel microcode). Both tools are quick:

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|:---|---:|---:|---:|---:|
| `unmkinitramfs-classic -s ubuntu-22.04.4` | 227.0 ± 0.9 | 225.7 | 228.4 | 2.62 ± 0.12 |
| `unmkinitramfs-turbo -s ubuntu-22.04.4` | 86.6 ± 4.0 | 78.9 | 99.2 | 1.00 |

In the case of Ubuntu 23.10.1 there is one extra uncompressed archive, containing kernel modules and firmware files, 1985 files overall. This time the story is different: the turbo tool turns out to be an order of magnitude faster:

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| `unmkinitramfs-classic -s ubuntu-23.10.1` | 15.617 ± 0.085 | 15.553 | 15.845 | 10.39 ± 0.43 |
| `unmkinitramfs-turbo -s ubuntu-23.10.1` | 1.503 ± 0.061 | 1.407 | 1.600 | 1.00 |

Performance boost explained

The original tool is limited to external commands for parsing binary input data.

The turbo tools instead parse a hex dump of the input, using bash's read -N to process the data in chunks, all in a single pass.
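
A minimal sketch of the idea (not the actual tools' code, which also handles padding, the trailer and the compressed tail): every input byte becomes two characters in xxd -p output, so reading 2*N characters with read -N consumes N bytes. Here the cpio magic and the 13 newc header fields of the first member are pulled straight out of the dump; the initrd path is an assumption:

initrd=initrd/ubuntu-22.04.4

{
    # 6-byte magic ("070701" or "070702") -> 12 characters in the dump
    read -rN 12 magic_hex
    printf -v magic "$(sed 's/../\\x&/g' <<< "$magic_hex")"
    echo "cpio magic: $magic"

    # 13 header fields, 8 ASCII hex chars each -> 16 dump characters apiece
    for field in ino mode uid gid nlink mtime filesize devmajor devminor \
                 rdevmajor rdevminor namesize check; do
        read -rN 16 value_hex
        printf -v "$field" "$(sed 's/../\\x&/g' <<< "$value_hex")"
    done

    echo "first member: namesize=$(( 16#$namesize )) filesize=$(( 16#$filesize ))"
} < <(xxd -p "$initrd" | tr -d '\n')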

Why xxd, not od from coreutils

In the following, we ignore any whitespace in the tools' output, e.g. "a b\nc" and "abc" are considered equal.

First and foremost: od is slow.

To dump in hex with od one uses od -txN, where N can be 1, 2, 4 or 8. N greatly affects the speed; there is no reason to use any value below 8:

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| `xxd -p ubuntu-22.04.4` | 1.301 ± 0.003 | 1.298 | 1.307 | 1.00 |
| `od -vAn -tx8 ubuntu-22.04.4` | 3.553 ± 0.033 | 3.509 | 3.608 | 2.73 ± 0.03 |
| `od -vAn -tx4 ubuntu-22.04.4` | 6.394 ± 0.049 | 6.316 | 6.470 | 4.92 ± 0.04 |
| `od -vAn -tx2 ubuntu-22.04.4` | 12.035 ± 0.114 | 11.835 | 12.179 | 9.25 ± 0.09 |
| `od -vAn -tx1 ubuntu-22.04.4` | 23.053 ± 0.239 | 22.677 | 23.379 | 17.72 ± 0.19 |

The problem with -txN for N>1 is that od treats each sequence of N bytes as a single word and prints it in native byte order. Example on an amd64 (little-endian) machine:

> echo -n 01234567 | od -vAn -tx4
 33323130 37363534

The correct order can be forced with the --endian=big option:

> echo -n 01234567 | od -vAn -tx4 --endian=big
 30313233 34353637

But the option is relatively new, introduced only in 2014; there is no such option in the still-supported Ubuntu 14.04. On older little-endian systems one can still get the -tx2 speed boost with the help of dd:

> echo -n 0123 | dd conv=swab 2>/dev/null | od -vAn -tx2
 3031 3233

Back to the best case, -tx8 --endian=big. It is still not a drop-in replacement for xxd -p: when the data size is not a multiple of N, -txN with N>1 pads the data with zeroes:

> echo -n 01235 | od -vAn -tx4 --endian=big
 30313233 35000000

So to produce a correct dump one must know the data size ahead of time and take it into account. The following function wraps it up:

function xxdp_like_od() {
    local size residue

    size=$(stat -c%s "$1")
    # (( )) returns nonzero when the result is 0; || true keeps the exit
    # status clean in that case
    (( residue = size % 8 )) || true

    {
        # Dump the leading 'residue' bytes one by one, so that what
        # remains is an exact multiple of 8 bytes ...
        if (( residue )); then
            od -vAn -tx1 -N"$residue"
        fi

        # ... and dump the rest with the fast 8-byte words. Both od
        # calls share the brace group's stdin, so the second one
        # continues where the first one stopped.
        if (( size > residue )); then
            od -vAn -tx8 --endian=big
        fi
    } <"$1"
}

Sample run:

> head -c 27 /dev/zero >sample
> xxdp_like_od sample
00 00 00
  0000000000000000 0000000000000000
 0000000000000000

So, compared to xxd, od is slow and picky.
