
Proposal for a static way to "prove" an AppImage has capabilities such as sandboxes #839

Elv13 opened this issue Aug 14, 2018 · 23 comments

@Elv13

Elv13 commented Aug 14, 2018

This issue documents a discussion with @TheAssassin about how to securely
prove, before execution, that an AppImage runs in a Firejail sandbox, both
with and without AppImageLauncher.

What is the issue

Flatpak and Snap both enforce sandboxing, but AppImage doesn't. It is hard to
sell this technology if users perceive it as less secure. On the other hand,
Flatpak and Snap are much larger, more integrated stacks that try to fix too
many problems at once, while AppImage tries to solve bundling and nothing else.

That doesn't mean AppImage cannot be secure. In fact, many AppImages already
use external sandbox projects to provide some level of extra security. One
major downside of such AppImages is that there is little way to know ahead of
time:

  • whether there is sandboxing
  • whether the sandbox is actually used
  • what the profile is
  • whether the sandbox has been altered to render it a no-op (or worse)
  • whether it will be equally secure if I don't have Firejail installed

Where can it be solved

A solution to this problem can be provided by a combination of improvements
to appimagetool to ensure a sandbox is used. This solution has to surface in
several places:

  • When the user downloads the AppImage, he/she/it has to be told whether or
    not there is guaranteed sandboxing.
  • If AppImageLauncher is installed, it should display this information.
  • If the Dolphin file manager is used, it should add a mark to sandboxed
    AppImages.
  • If the KDE Store were to distribute AppImages, it should also provide this
    information.

What this solution does not attempt to solve

In the Unix philosophy, we have small tools that each solve one problem and
only one problem. This solution doesn't attempt to prove that:

  • Firejail is secure
  • the profile is actually restrictive enough
  • the application is free of malware or cryptocurrency miners
  • appimagetool, the runtime, etc. are perfectly bug-free

Enough talking, what's your damn idea?

When creating "SandboxedAppImageTool"

  1. Create a signed AppRun or other kind of entry point known and proven to
    start Firejail (the runtime is probably more suitable for that)
  2. Provide a FALLBACK signed Firejail binary, maybe from a trusted
    distribution (a verification sketch follows after this list)
  3. Sign that Firejail binary as a known, unmodified version so the AppImage
    creator cannot weaken it
  4. Ship that in "SandboxedAppImageTool" and enforce that a profile has to be
    provided (first we need a standardized way to distribute Firejail profiles
    within AppImages)
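
As a purely illustrative sketch of the verification mentioned in step 2, such a tool could refuse to bundle a fallback Firejail binary unless a detached signature from a trusted source (e.g. a distribution's package signing key, already imported into the local GPG keyring) verifies against it. File names and the overall flow are assumptions, not an existing tool:

```python
#!/usr/bin/env python3
"""Sketch only: refuse to bundle a fallback Firejail binary unless a
detached GPG signature from a trusted source verifies against it."""
import subprocess
import sys

def verify_detached_signature(binary: str, signature: str) -> bool:
    """Return True if `gpg --verify` accepts the detached signature.

    The signing key (e.g. a Debian or Alpine package signing key) must
    already be imported into the local keyring.
    """
    result = subprocess.run(["gpg", "--verify", signature, binary])
    return result.returncode == 0

if __name__ == "__main__":
    firejail_binary, detached_sig = sys.argv[1], sys.argv[2]
    if not verify_detached_signature(firejail_binary, detached_sig):
        sys.exit("fallback firejail signature did not verify; refusing to bundle")
    print("fallback firejail accepted for bundling")
```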

When creating an AppImage

  1. Provide a profile with a standardized name in a standardized place
  2. Provide a program entry point within the jail; entry points outside the
    jail are prohibited (a pre-packaging check is sketched below)
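
A minimal pre-packaging check along these lines might look as follows. The AppRun name is the usual AppImage convention, but the profile path usr/share/firejail/appimage.profile is hypothetical, since no standardized location has been agreed on yet:

```python
#!/usr/bin/env python3
"""Sketch only: sanity-check an AppDir before packaging it as a
sandboxed AppImage. The profile path is illustrative, not a standard."""
import sys
from pathlib import Path

def check_appdir(appdir: str) -> list:
    root = Path(appdir)
    problems = []
    if not (root / "AppRun").exists():
        problems.append("missing AppRun entry point")
    # Hypothetical standardized profile location inside the bundle.
    if not (root / "usr/share/firejail/appimage.profile").is_file():
        problems.append("missing Firejail profile at the standardized path")
    return problems

if __name__ == "__main__":
    issues = check_appdir(sys.argv[1])
    for issue in issues:
        print("ERROR:", issue)
    sys.exit(1 if issues else 0)
```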

When uploading an AppImage to a "store"

  1. Inspect the AppImage (mount it, extract it, or just read the embedded
    filesystem image)
  2. Check that the AppRun (or equivalent) and Firejail signatures match the
    known ones (a store-side check is sketched below)
  3. Extract the profile and parse it
  4. Decide, based on that, whether the AppImage has sandboxing
    4.1) Warn about AppImages shipping an old Firejail with known CVEs
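
A rough sketch of such a store-side check, assuming the uploaded file is executable and is a type-2 AppImage whose runtime supports --appimage-extract (which unpacks into ./squashfs-root). The known-good digests, the firejail location, and the profile path are placeholders:

```python
#!/usr/bin/env python3
"""Sketch only: static store-side inspection of an uploaded AppImage."""
import hashlib
import subprocess
import sys
from pathlib import Path

# Placeholder allow-lists of SHA-256 digests published as "known good".
KNOWN_GOOD = {
    "AppRun": {"<sha256 of the signed sandbox-aware entry point>"},
    "usr/bin/firejail": {"<sha256 of the trusted fallback firejail>"},
}
PROFILE_PATH = "usr/share/firejail/appimage.profile"  # hypothetical standard path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def inspect(appimage: str) -> bool:
    # Unpack the embedded filesystem image into ./squashfs-root.
    subprocess.run([str(Path(appimage).resolve()), "--appimage-extract"], check=True)
    root = Path("squashfs-root")
    ok = True

    for rel, allowed in KNOWN_GOOD.items():
        target = root / rel
        if not target.is_file():
            print(f"{rel}: missing from the bundle")
            ok = False
        elif sha256(target) not in allowed:
            print(f"{rel}: digest is not on the known-good list")
            ok = False

    profile = root / PROFILE_PATH
    if profile.is_file():
        print("profile found; contents for review/parsing:")
        print(profile.read_text())
    else:
        print("no profile found at the standardized path")
        ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if inspect(sys.argv[1]) else 1)
```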

When a user without AppImageLauncher and/or without Firejail gets an AppImage

  1. Download it from a source you trust
  2. See that this source says the AppImage provides sandboxing and read the
    profile (or trust the source to have validated that it's secure)
  3. Run it, with the fallback Firejail if none is installed (see the launch
    sketch below)
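
A minimal sketch of step 3, preferring the system Firejail and otherwise using a bundled fallback binary; all paths are illustrative:

```python
#!/usr/bin/env python3
"""Sketch only: run an AppImage under the system Firejail if one is
installed, otherwise under a bundled fallback binary."""
import shutil
import subprocess
import sys

def run_sandboxed(appimage: str, profile: str, fallback_firejail: str) -> int:
    firejail = shutil.which("firejail") or fallback_firejail
    # `firejail --profile=FILE <command>` applies a specific profile.
    return subprocess.run([firejail, f"--profile={profile}", appimage]).returncode

if __name__ == "__main__":
    # usage: run_sandboxed.py ./Some.AppImage ./appimage.profile ./bundled/firejail
    sys.exit(run_sandboxed(sys.argv[1], sys.argv[2], sys.argv[3]))
```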

When you have AppImageLauncher

  1. Get an AppImage from a wider range of sources
  2. See in Dolphin that it has the jail watermark
  3. See the AppImageLauncher popup saying that there is sandboxing with an option
    to see the profile.
  4. Decide if you want to use it

What are the problems with this

Someone will need to obtain and maintain a certificate to sign the "trusted"
runtime and Firejail binary. The AppImage team is unsure whether it can
provide that.

It solves only a small part of the wider sandboxing problem. It doesn't
provide a D-Bus proxy, doesn't manage overlays, and has no X11 firewall. Such
capabilities could be implemented in a similar way, but they aren't part of
this suggestion.

Why do you still think it's necessary

Because "linting" and "static analysis" are among the only ways to add
features without bloating the AppImage project with more things. It should
still concentrate on bundling and let other services handle their part. You
still get a semi-centralized way to prove that such services are enabled, and
that is what I propose.

@probonopd
Member

Thanks @Elv13. Does it need to be so complex? When a new AppImage is opened, the signature could be checked against an (ideally peer-produced) index of "known good" keys, and if it has one, be executed as a fully trusted application without sandboxing; if it doesn't, then run in untrusted = sandboxed mode.

Beginnings of this are already coded in the optional appimaged daemon, but someone who is interested and skilled in sandboxing (= not me) would have to take it up from there.

What I don't want to do is maintain a central list of "known good" keys, or have it maintained by one person or entity. Instead, we should think about how we can best peer produce a web of trust.
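
For illustration only, the decision described earlier in this comment (known-good key: run fully trusted; otherwise: run sandboxed) could be as small as the sketch below. It assumes some other component, e.g. appimaged, has already verified the embedded signature and extracted the signing key's fingerprint, and the peer-produced index of trusted fingerprints is hypothetical:

```python
#!/usr/bin/env python3
"""Sketch only: trusted key -> run unrestricted, unknown or missing
key -> run sandboxed. Signature verification itself happens elsewhere."""
import shutil
import subprocess
import sys

# Hypothetical peer-produced index of "known good" key fingerprints.
TRUSTED_FINGERPRINTS = {
    "0123456789ABCDEF0123456789ABCDEF01234567",  # placeholder
}

def launch(appimage: str, fingerprint: str) -> int:
    if fingerprint in TRUSTED_FINGERPRINTS:
        cmd = [appimage]                 # trusted: run without sandboxing
    else:
        if shutil.which("firejail") is None:
            sys.exit("untrusted AppImage and no sandbox available; not running it")
        cmd = ["firejail", appimage]     # untrusted: run sandboxed
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    # usage: launch.py ./Some.AppImage <key-fingerprint-or-empty-string>
    sys.exit(launch(sys.argv[1], sys.argv[2]))
```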

@TheAssassin
Member

@probonopd you got it wrong, it seems. It took me a while to understand this as well. I would offer to explain it to you off GitHub this week. Just let me know when you're free.

@probonopd
Member

That's my point - it is long and complicated but I like short and easy. :-) No offense

@TheAssassin
Member

It's actually less complicated than one might think at first. In the end, one of the "disadvantages" users see when looking at AppImage is that it's said to be "less secure" because of the lack of sandboxing. There are multiple problems that need to be solved in order to make sandboxing compatible with our decentralized approach, but this proposal could solve one of these.

As I explained to @Elv13 already, this is not interesting until we have a secure way to distribute Firejail profiles within AppImages (which is a big problem we have no solution for yet), but once we can do that, this proposal will become relevant.

@probonopd
Member

If you distribute the profiles inside the AppImage, then a) you give preference to one certain sandbox and b) a "bad" app could simply ship a "relaxed" profile. I think the level of isolation needs to be determined by something on the system (e.g., appimaged or a replacement) or the user, not by the app or something in the AppImage. Because we can trust the system, but not the app.

@Elv13
Author

Elv13 commented Aug 15, 2018

Thanks @Elv13. Does it need to be so complex?

I think that yes, it does. I am sorry it has to be, but there is little to do about it. Computer security is often a very complex research topic, and handling its problems is a very imperative and restrictive process.

What is done here is called a chain of trust. This is a very well documented topic with a lot of good reading available on the Internet (note that it is not an x.509 certificate chain of trust; that is one application of the concept, but not the one proposed here). I will not try to summarize it in too much detail because there is a lot of existing documentation on the topic. That being said, you can view it this way: when a payload goes from A to B, each step it takes has to carry "trust" from its predecessor to its successor in a way that is cryptographically provable. This chain has to be carefully studied to ensure it has no missing links; if it has any, the whole chain is worthless. This is why long documentation is mandatory in this field of study. There are official forms for this paperwork, but I will spare you them because they are mind-bogglingly verbose and boring.

In the case of this proposal, the original trusted objects are:

  • A known good Firejail binary. We could consider taking it from Debian or Alpine Linux because they provide signed packages. That moves the duty of creating correct binaries to them, and they are trusted, well-organized projects the user already trusts.
  • A way to start this Firejail or the system one
  • The AppImage project's list of trusted Firejail binaries and ways to start them
  • A proof that a specific profile is used by default (but it may be overridden by, let's say, appimaged with a stricter profile)

Those trusted elements have to get as close to the user as possible. They have to get there before the user executes anything. I use bold for that sentence because it is key. This whole scheme exists to allow both users with AppImageLauncher/appimaged and users without it to get these trusted elements.

As I explained to @Elv13 already, this is not interesting until we have a secure way to distribute Firejail profiles within AppImages (which is a big problem we have no solution for yet), but once we can do that, this proposal will become relevant.

This position may be worth reconsidering. While generally true, this may be out of scope for such a system. What I mean here is that the "software store" might want to parse the .profile and judge by itself whether it is secure or not. In a fully distributed environment, the AppImage project might not want to be the gatekeeper of good vs. bad profiles. This is something that can be delegated to a third-party service, or you can just make a very nice GUI where the user can visualize it him/her/itself. This is only as good as that third-party service, or as the user's technical ability to understand the visualization. But to respect the Unix philosophy, this burden should not be centralized in the AppImage project. It's "nice to have", but otherwise out of your project's primary scope (software bundling and deployment tools). Having such a solution built in would also carry a maintenance and liability cost (which you want to avoid).

If you distribute the profiles inside the AppImage, then a) you give preference to one certain sandbox and b) a "bad" app could simply ship a "relaxed" profile. I think the level of isolation needs to be determined by something on the system (e.g., appimaged or a replacement) or the user, not by the app or something in the AppImage. Because we can trust the system, but not the app.

Again, this is generally true but, as with the previous question, out of scope. appimaged may want to enforce "better" sandboxes, and that is a good thing. However, if the profile can be extracted and statically analyzed before the AppImage is executed, this burden is moved to the static analysis tool. Also, if the user doesn't have appimaged but downloaded the AppImage from a source that provided that static analysis (and the user trusts that software store/provider), then he/she/it knows the minimum amount of sandboxing provided by said image. Again, stricter profiles can be chosen if AppImageLauncher/appimaged is installed and provides such an option.

This proposal is intended to statically prove a minimum set of sandboxing capabilities without executing anything. "Minimum" is the important part. If the user's system has tooling to provide a better experience, then great, that user will have more secure sandboxing. But the fact is that, as of now, very few distributions ship such AppImage-related runtimes by default, so providing this capability in the middleman is more important in the short term. Plus, it fits well with the decentralized nature of the AppImage concept, since your organization only has to provide some elements of the "chain of trust" rather than carry it all.

@TheAssassin
Member

The way I see it, the only viable way to distribute profiles for applications is to put them into the AppImages. There is no chance of downloading profiles for applications automatically from the Internet, because how would you securely match an application to its profile? There are no "one size fits all" profiles; profiles will always be application specific. That's how e.g. AppArmor works, too: the profiles are shipped in the application packages.

Until profile distribution has been solved, we don't need any sort of Firejail distribution system, because without any profiles, distributing Firejail is pointless.

If someone has suggestions how we (or some external project) could set up a secure infrastructure for distributing the profiles, please provide a detailed description of that.

Re. things like "better sandboxes": again, there are no "one size fits all" profiles. You can't just use a random profile to sandbox an application. The stricter it is, the more likely it is to break something; the more open it is, the less secure it will be. It is impossible to design some "general" profile.

@probonopd
Member

I don't get the point of the AppImage defining its own profiles. Isn't this like letting a potential criminal define his own laws?

@TheAssassin
Member

Of course you need to check whether the profile is "good" (read: secure, trustworthy, etc.). That's the part we need to solve.

Firejail currently tries to use some "generic" profile, but it's not secure at all, and doesn't really restrict anything. This just doesn't work. We need AppImage specific profiles, and the easiest way to accomplish this is to ship the profiles as part of the bundle.

The only realistic way to check whether the profile is trustworthy is maintaining a public-key infrastructure (similar to how TLS certificates are managed), but this has huge drawbacks, too. And that only proves the profile was created by a trustworthy origin, not that it's secure.

As @Elv13 described, it's really complex to get this "chain of trust" right, and many aspects need to be considered.

@probonopd
Member

Firejail currently tries to use some "generic" profile, but it's not secure at all, and doesn't really restrict anything. This just doesn't work.

Correct. It is essentially a placeholder.

We need AppImage specific profiles, and the easiest way to accomplish this is to ship the profiles as part of the bundle.

I am trying to think simple here. Trusted apps should run unrestricted, whereas untrusted apps should run highly restricted (e.g., no write rights in $HOME).

@RalphPfeiffer

What are the differences between AppImage and snap?
By the way, is there an AppImage store like the snap one?

@probonopd
Member

What are the differences between AppImage and snap?

Please see https://github.com/AppImage/AppImageKit/wiki/Similar-projects. In a nutshell, with AppImage, "one app = one file", no runtime needs to be installed on the system first.

By the way, is there an AppImage store like the snap one?

There is https://appimage.github.io/apps/ as an overview, but we recommend that you download applications directly from the author's website.

@zot

zot commented Jan 16, 2019

Is there a data area in AppImages that users can edit (or could there be)? If so, tools could use that to store / extract profiles for things like Firejail.

@probonopd
Member

AppImages are read-only by design, so that one can e.g., easily calculate checksums, verify signatures, and do delta updates. So, the profile would either have to be supplied as part of the AppImage by its creator, or would have to be stored externally outside of the AppImage.

@shoogle

shoogle commented Jan 17, 2019

This may not be a problem that AppImage has to solve. There seems to be a movement away from blanket permissions that must be granted at install time, towards more finely grained permissions that can be given or withheld at runtime on an as-needed basis. This has the advantage that users are not pressured into pressing an "accept all" button at the outset but can actually pick and choose which permissions to give based on the features they actually want to use.

For example, when installing certain Android apps it used to say "this app needs permission to access files and folders on your device", and you would either grant that permission or you wouldn't be able to install the app. However, these days apps install and run without requiring any permissions until you actually try to use a feature that actually needs permission, so you can use the Dropbox app to browse your online storage, but as soon as you try to access the local storage you are prompted to give permission.

Anyway, the point is that full sandboxing will probably become the default, and it will be up to apps to request services on an as-needed basis and to handle withheld permissions gracefully. If this happens there would be no need for AppImages to define security profiles, as that would be something that the application itself would negotiate directly with the system after it was already running.

@TheAssassin
Member

@shoogle this is something we plan for AppImageLauncher -- Android-style permission requesting & granting. Authors might optionally include a little meta file requesting a few permissions, otherwise we might pick sane defaults (full network access but no personal data access or something like that).
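
As an illustration of what that could look like, here is a minimal sketch that reads a hypothetical permissions file from the bundle and falls back to the "sane defaults" mentioned above; neither the file name nor the keys are part of any existing AppImage spec:

```python
#!/usr/bin/env python3
"""Sketch only: merge a hypothetical per-app permissions file with
default permissions (network allowed, personal data not)."""
import json
import sys
from pathlib import Path

DEFAULTS = {"network": True, "home-files": False, "devices": False}

def requested_permissions(appdir: str) -> dict:
    meta = Path(appdir) / "permissions.json"  # hypothetical file name
    perms = dict(DEFAULTS)
    if meta.is_file():
        perms.update(json.loads(meta.read_text()))
    return perms

if __name__ == "__main__":
    print(requested_permissions(sys.argv[1]))
```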

@shoogle

shoogle commented Jan 17, 2019

@TheAssassin, sounds awesome! I'd have thought that runtime permissions would need to be implemented at a lower level (i.e. in the system libraries) but if you can bolt it on afterwards then that's pretty impressive! I suppose if you can find a way to intercept, and potentially reject, standard filesystem requests then the application will just handle it using its existing exception mechanism. There's a risk the application might display a "permission denied" message that indicates the user doesn't have permission to access the file, when in reality it is the application that doesn't have permission, but apps would eventually be updated to take this into account.

@Elv13
Author

Elv13 commented Jan 17, 2019

I'd have thought that runtime permissions would need to be implemented at a lower level (i.e. in the system libraries)

That's not really possible in POSIX. Attack vectors like ROP, or even gobject-introspection / QMetaType scripts, would make it easy to build privilege escalation "exploits". Even dlopen is a problem if you are doing library-level permission filtering. You need sandboxing with some IPC, like D-Bus or an equivalent, for the sandboxed apps to interact with the outside. Those things cannot just be ELF symbols with permission attributes/tokens in the function definition. The closest you can get is the SELinux-style mandatory access control (MAC) we had 15 years ago. It has proven to be a dead end: it's useful, but doesn't solve much.

@probonopd
Member

@shoogle this is something we plan for AppImageLauncher

Actually let's plan this for libappimage.

@TheAssassin
Member

This cannot be "planned for libappimage", this is waaaaaay out of scope...

The "plan" here is to combine something like firejail with AppImageLauncher. AppImageLauncher can enforce (to some extent) the execution through the sandbox, and it can generate a profile for every app by asking the user a set of simple questions ("allow network access", "allow access to your files", ...).
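
For illustration, turning those answers into a profile could start as simply as the sketch below; the questions and the generated directives are only examples, and real profiles need far more care:

```python
#!/usr/bin/env python3
"""Sketch only: map two yes/no answers to a minimal Firejail profile."""

def build_profile(allow_network: bool, allow_home_files: bool) -> str:
    lines = ["# generated from the user's answers"]
    if not allow_network:
        lines.append("net none")   # Firejail directive: disable networking
    if not allow_home_files:
        lines.append("private")    # Firejail directive: use a throwaway home
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    # e.g. the user allowed network access but denied access to their files
    print(build_profile(allow_network=True, allow_home_files=False), end="")
```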

@probonopd
Member

probonopd commented Jan 17, 2019

The beginnings of it have been in appimaged for years, waiting for someone to pick the whole sandboxing topic up. The most basic implementation would be to deny unsigned AppImages certain rights. But I hope we can discuss and design a UX for it project-wide.

@TheAssassin
Member

It's not like anyone can generate a PGP key and replace the signatures in any AppImage...

@probonopd
Member

Hence we'd need a list of trusted PGP keys. This list could either be centrally stored, or, as is my personal preference, somehow built from the user's "social graph".
