
Support squashfs as compression algorithm #6273

Open
stgraber opened this issue Oct 3, 2019 · 2 comments

Comments

@stgraber (Member)

commented Oct 3, 2019

LXD already supports unpacking squashfs in a number of situations, but it doesn't support using mksquashfs to generate squashfs-based images or backups.

The existing compression logic for images and backups should be updated so that one can set the compression_algorithm to squashfs.

Consuming squashfs images should already pretty much just work; consuming squashfs backups will need a few code updates too.

@stgraber stgraber added this to the later milestone Oct 3, 2019
@ulziibuyan

commented Oct 4, 2019

I would like to work on this

@stgraber (Member, Author)

commented Oct 4, 2019

So I forgot just how dependent we unfortunately are on tarballs...
Tarballs are very convenient internally because we can build them and alter file attributes without ever having to write anything to the actual filesystem, so I don't expect us to move away from them as the main building block for images and backups.

What we effectively want in this case is a way to use squashfs as a sort of compressor for a standard tar stream as it comes out of Export(). Looking around, our best bet is to support squashfs-tools-ng, found at https://github.com/AgentD/squashfs-tools-ng/. That project has a very convenient tar2sqfs tool, which appears to do exactly what we want. We'll deal separately with adding this binary to our snap package.

We'll want to stream the tarball output of Export() through tar2sqfs --compressor xz --no-skip FILENAME.
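
To make the piping idea concrete, here is a minimal sketch (not LXD's actual code) of that compression path, assuming a hypothetical exportTo callback that stands in for the instance Export() call and an imageFile path for the output:

```go
// Sketch: pipe a tar stream produced by an exporter function into tar2sqfs,
// which writes the squashfs image to imageFile. exportTo is hypothetical.
package sketch

import (
	"io"
	"os"
	"os/exec"
)

func compressToSquashfs(exportTo func(w io.Writer) error, imageFile string) error {
	// tar2sqfs reads a tar stream on stdin and writes a squashfs filesystem.
	cmd := exec.Command("tar2sqfs", "--compressor", "xz", "--no-skip", imageFile)
	stdin, err := cmd.StdinPipe()
	if err != nil {
		return err
	}
	cmd.Stderr = os.Stderr

	if err := cmd.Start(); err != nil {
		return err
	}

	// Stream the tarball straight into tar2sqfs, never touching the filesystem.
	exportErr := exportTo(stdin)
	stdin.Close()

	if err := cmd.Wait(); err != nil {
		return err
	}

	return exportErr
}
```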

For this, you'll want to:

  • Add an API extension called compression_squashfs to shared/version/api.go and doc/api-extensions.md
  • Update validateCompression in lxd/cluster/config.go to look for tar2sqfs if squashfs is passed as the value.
  • Update the logic in imgPostContInfo in lxd/images.go to recognize squashfs and set up an I/O pipe from a Writer handed over to Export() into tar2sqfs, writing to a temporary image file (imageFile). This will need a bit of reshuffling as that's not exactly the way the other compressors are handled.
  • Add decompression logic for squashfs (sqfs2tar) for getImageMetadata in lxd/images.go (see the sketch after this list).
  • Add compression logic similar to the image case for backupCreateTarball in lxd/backup.go.
  • Add decompression logic for squashfs (sqfs2tar) for backupGetInfo in lxd/backup.go.
  • Add decompression logic for squashfs (sqfs2tar) for containerCreateFromBackup in lxd/container.go.
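
For the decompression side, here is a minimal sketch (not the actual getImageMetadata change) of running sqfs2tar on a squashfs image and scanning the resulting tar stream for metadata.yaml; the readMetadata helper name is hypothetical:

```go
// Sketch: convert a squashfs image back into a tar stream with sqfs2tar and
// pull metadata.yaml out of it. readMetadata is a hypothetical helper.
package sketch

import (
	"archive/tar"
	"fmt"
	"io"
	"os/exec"
)

func readMetadata(imagePath string) ([]byte, error) {
	// sqfs2tar writes the image contents as a tar stream to stdout.
	cmd := exec.Command("sqfs2tar", imagePath)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return nil, err
	}
	if err := cmd.Start(); err != nil {
		return nil, err
	}

	var metadata []byte
	tr := tar.NewReader(stdout)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		if hdr.Name == "metadata.yaml" {
			metadata, err = io.ReadAll(tr)
			if err != nil {
				return nil, err
			}
			break
		}
	}

	// Drain the rest of the stream so sqfs2tar can exit cleanly.
	io.Copy(io.Discard, stdout)
	if err := cmd.Wait(); err != nil {
		return nil, err
	}

	if metadata == nil {
		return nil, fmt.Errorf("metadata.yaml not found in %s", imagePath)
	}
	return metadata, nil
}
```

The same pattern would apply to backupGetInfo and containerCreateFromBackup, with the tar stream handed on to the existing tarball handling instead of being scanned for a single file.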

With all that done, you should be able to:

  • lxc config set images.compression_algorithm squashfs
  • lxc config set backups.compression_algorithm squashfs
  • lxc publish test-container --alias test-image
  • lxc export test-image
  • lxc image delete test-image
  • lxc image import test-image.squashfs --alias test-image (name may be different)
  • lxc launch test-image test-container1
  • lxc export test-container
  • lxc delete test-container
  • lxc import test-container.squashfs (name may be different)

This would effectively exercise all the code paths above and should result in two working containers.
