Multiple bugs in libarchive sandboxing code #743
Caught this post elsewhere.
There seem to be three libarchive issues described in this document. (Actually four, but the first listed "Vulnerability #1" was already fixed.) The key first step, of course, is to formulate and commit appropriate tests for each of these. Then we can work through the best way to solve them.
Here are a few thoughts based on a cursory skim:
This describes a bad interaction between the symlink tests and the deep directory handling. I could be talked into dumping the deep-directory handling as the author suggests, though I would like to at least try to find a better solution.
This describes another problem with the symlink checks: I suspect this is a simple fix, ensuring that the matching prefix terminates at a path component boundary. I recall someone else mentioning this issue, but I can't remember whether it was already fixed.
This claims that hard links with data payloads can escape the sandbox. The first question I would need to research is whether this capability in archive_write_disk is used by any non-tar format. In particular, I recall that newc cpio format handles hard links backwards from traditional tar, which may rely on this code. If newc relies on this, then the author's advice to just remove this capability isn't really feasible. (Even apart from newc, I'm a bit reluctant to drop a POSIX-mandated feature of this sort.)
Here via John Leyden's tweet.
I don't have the time to test the portsnap attacks, but I can confirm that the libarchive/tar and bspatch attacks work on our 10.x machines, and I'm happy to test any libarchive/tar fixes.
Judging especially by the painstaking work put into the bspatch exploit, I think it's highly unlikely that the creator lacks the means to deploy it via MITM. Otherwise, I've never seen anything like this in terms of apparent work/reward. It would be comical if it weren't so horrifying. Think of all those locked-down fbsd machines that have no external-facing daemons/services and that perform only updates. Our telecommunications floor alone has several dozen.
Someone needs to alert the fbsd mailing lists (-current, -security?) pronto. I'd rather not mail them myself from work. And we should also get more details on the linux distributions.
…ames Because check_symlinks is handled separately from the deep-directory support, very long pathnames cause problems. Previously, the code ignored most failures to lstat() a path component. In particular, this led to check_symlinks always passing for very long paths, which in turn provided a way to evade the symlink checks in the sandboxing code. We now fail on unrecognized lstat() failures, which plugs this hole at the cost of disabling deep directory support when the user requests sandboxing.

TODO: This probably cannot be completely fixed without entirely reimplementing the deep directory support to integrate the symlink checks. I want to reimplement the deep directory handling someday anyway; openat() and related system calls now provide a much cleaner way to handle deep directories than the chdir approach used by this code.