
Atom needs a better security model #1763

Closed
nathansobo opened this issue Mar 17, 2014 · 51 comments

Comments

@nathansobo nathansobo commented Mar 17, 2014

If we're going to add GitHub integration to Atom, we need to cache credentials inside the app. We had previously implemented GitHub auth for our Gists package, but we removed both the package and GitHub authentication from the beta release because we were afraid of providing GitHub credentials to every random Atom package people installed. We can't avoid this forever, though. To realize Atom's full potential, we need to solve the security issue in a robust way.

What would the ideal security model look like?

  • Every package is loaded in its own context.
  • Every package's metadata includes a whitelist of the following (sketched after this list):
    • Security-critical properties of the atom global requested by the package.
    • Security-critical built-in modules (such as fs) requested by the package.
    • Whether the package is allowed to load custom native modules.
    • GitHub permissions the package needs access to and any other credentials.
  • When installing a package, the requested permissions are clearly presented.
  • If a package attempts to reference or require a resource that isn't white-listed, the user will be explicitly prompted to grant the package that permission.
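
For illustration only, a permissions whitelist along these lines in package.json might look like the following. The "permissions" key and every field inside it are hypothetical, not an existing Atom convention:

```json
{
  "name": "my-github-package",
  "main": "./lib/main",
  "permissions": {
    "atom": ["workspace", "notifications"],
    "builtins": ["fs", "path"],
    "nativeModules": false,
    "github": ["notifications", "gist"]
  }
}
```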

How can we implement this?

There's a fairly mature project called Google Caja, whose goal is to allow multiple mutually-suspicious JavaScript modules to run on the same page securely. To support older versions of JavaScript, they employed a complex whole-program transformation to achieve this. But ES5 incorporates several standard features that allow a context to be secured by running a small JavaScript library called SES (short for Secure ECMAScript).

My understanding is still evolving, but it seems like we should be able to run the SES script in a fresh context to tamper-proof any globals that we inject into it and secure them from malicious modification or any other kind of privilege escalation. To do this, we'd basically need to provide our own alternative module loading mechanism, which I started experimenting with this weekend with a fork of Node's built-in module.js.
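
As a very rough sketch of that direction (not a working implementation), each package could be evaluated in its own contextified sandbox, with an SES-style shim run first and a replacement require that only resolves whitelisted modules. The shim source, whitelist format, and injected globals below are all assumptions:

```js
const fs = require('fs');
const vm = require('vm');

function loadPackageSandboxed(mainPath, sesShimSource, whitelist, injectedGlobals) {
  // Each package gets its own context so it cannot touch another package's globals.
  const sandbox = vm.createContext(Object.assign({}, injectedGlobals));

  // Run the SES-style shim first so Object.prototype, Array.prototype, etc. are
  // frozen before any package code executes in this context.
  vm.runInContext(sesShimSource, sandbox, 'ses-shim.js');

  // Replacement for Node's module loader: only whitelisted modules resolve.
  sandbox.require = function secureRequire(name) {
    if (whitelist.indexOf(name) === -1) {
      throw new Error('Package requested a non-whitelisted module: ' + name);
    }
    return require(name);
  };

  const source = fs.readFileSync(mainPath, 'utf8');
  return vm.runInContext(source, sandbox, mainPath);
}
```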

What if we miss something?

Security is tough because you only have to screw up once to get exploited. We probably will make a few mistakes. But based on my reading about ES5 and the Caja project, security should be possible in theory. If we implement a security model that leaks, we can fix the problems as we discover them. But if we don't even try then things will never be secure. I think it's worth trying, then subjecting our best effort to an internal and possibly an external audit.

What haven't I thought of?

This is all new territory for me. Let me know what I'm missing.

/cc @github/security @zcbenz @ptoomey3

@zephraph zephraph commented Jan 30, 2015

Can we revive this conversation, or at least provide some news on any progress made for developers?

I'm planning a series of modules so I can manage my GitHub workflow, starting with github-notifications. I'm not finished with the package yet, but I don't have a secure way to deal with the tokens. For this first package it may be fine just to ask the user to restrict it to everything but notifications and plop it in... but ultimately, that's not going to cut it.

Eventually people are going to start developing more packages that use GitHub's API or the API of another service, with or without the core team. It'd be much better if we could head this off and establish either a secure way of storing tokens or a "best practices" document for how to go about doing this on one's own.

Granted, a lot of work could already be done on this that I don't know about, but that's why I'd like to hear an updated status on the state of API security inside Atom.

@anaisbetts anaisbetts commented Jan 30, 2015

Any module has full access to the filesystem; any attempt to sandbox plugins without restricting fs access is, at very best, a speed bump; and full access to the machine from plugins is part of the value proposition of Atom.

@zephraph zephraph commented Jan 30, 2015

Surely there's something we can do though.

@anaisbetts anaisbetts commented Jan 30, 2015

@zephraph In my opinion, the best thing to do is to start running static / sandboxed execution analysis of the packages uploaded to atom.io and flag them for manual code review if they fail, and possibly reject / remove malicious packages.

@zephraph zephraph commented Jan 30, 2015

That should probably be done regardless. I guess this kinda ties into #1013.

@nathansobo nathansobo commented Jan 30, 2015

My main idea for a better security model is to load each package in its own JS context and use Secure ECMAScript or something similar to lock down the native prototypes and implement an object-capabilities security model.

In that environment, a package would only have access to globals that are explicitly injected into its context and wouldn't be able to perform nefarious actions by modifying global prototypes. We would also need to enhance require to account for the security model. What I'm thinking is that the package.json would contain some sort of manifest of permissions requested by the package, and the user would be explicitly prompted to grant them.

Obviously, the surface area of Atom's entire API is pretty tough to secure perfectly. Even if we disallowed file system access via require("fs"), attackers could potentially use Atom's buffer API to access the file system in ways that are hard to defend against. Fully securing the API would be pretty challenging in that it would require us to implement secure shims for various aspects of Atom's API so that certain packages couldn't do things like call TextBuffer::save. Membranes are an interesting idea that would make something like this easier to implement, but they require Harmony Proxies, and I don't think the underlying technologies are ready for prime time in V8.

All that said, I think loading packages in their own context with Secure ECMAScript could allow us to implement a secure credentials store that kept all credentials in the system keychain accessible from the main context. I'm not a security expert, but I think this would prevent attackers from using file system access to circumvent the sandboxing because the contents of the system keychain would be encrypted on disk.
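
A minimal sketch of that credentials-store idea, assuming the trusted main context owns keytar and hands out only what a package was granted. The grants format, service name, and method names are invented, and newer keytar versions return a promise here:

```js
const keytar = require('keytar');

function createCredentialBroker(grants) {
  // grants: e.g. { 'github-notifications': ['github-token'] }
  return {
    getCredential: function (packageName, credentialName) {
      const allowed = grants[packageName] || [];
      if (allowed.indexOf(credentialName) === -1) {
        throw new Error(packageName + ' was not granted access to ' + credentialName);
      }
      // The raw keychain is only ever touched from the trusted context.
      return keytar.getPassword('atom', credentialName);
    }
  };
}
```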

The key point here is that there has been some solid research into JS security models, and the technologies to make object-capabilities possible are in place today. We just have to start implementing it. If someone wants to start exploring in this direction I would welcome more input.

@zephraph zephraph commented Jan 30, 2015

This might be completely useless, but I did just run across jailed. I'm no security expert either so I won't be of much help, but I do want to see this conversation progress.

@nathansobo nathansobo commented Jan 30, 2015

@zephraph Jailed looks interesting. I'll take a look. Security isn't on our 1.0 roadmap currently but it's something we can start thinking about / experimenting with for sure.

@ptoomey3 ptoomey3 commented Jan 30, 2015

I haven't looked into JS jailing in depth (i.e. I'm definitely no expert). So, can someone who has looked at this in more depth comment on a purely object-capabilities solution (like Caja, Secure ECMAScript, etc.) vs. using atom-shell/chromium native contexts, a la https://developers.google.com/v8/embed#contexts? I don't know for sure, but I would imagine this is the basis for Chrome's extension model. If so, we would have some pretty solid ground to stand on. I have no sense of the relative implementation difficulty of a pure JS solution vs. leveraging "browser features", but I thought it worth adding to the discussion.

@mark-hahn mark-hahn commented Jan 30, 2015

FWIW, none of my 17 packages could run in a sandbox/jail.


@zephraph zephraph commented Jan 31, 2015

@ptoomey3, after reading that link I think contexts are definitely what we should be investigating. That's pretty awesome!

@mark-hahn, the idea certainly isn't to break all of your packages. Still, taking the approach where security is an afterthought is really, really unhealthy.

@mark-hahn mark-hahn commented Jan 31, 2015

taking the approach where security is an afterthought is really, really unhealthy.

I don't understand. I can hide code in my package to erase someone's disk if I want to. This isn't the web. I have no idea how security could work in node.


@zephraph zephraph commented Jan 31, 2015

Which is why open source software is so great. You and I might not have any good ideas to solve it, but someone else might. Regardless, I'm just looking for a way to keep my tokens safe from other packages. That's really all I want.

@mark-hahn mark-hahn commented Jan 31, 2015

I worked on the package bug-report with @lee-dohm. He demanded that any credentials not be stored in the .atom folder because that folder is sometimes backed up publicly. So I put them directly in ~. This is about the only thing in Atom I can think of that improves security.

White-listing things like fs won't help. A large percentage of apps need it. Once someone has fs all bets are off. You can obfuscate the credentials all you want and all I have to do is copy your code to read them.

The only way to trust an app is to read the source carefully before installing or upgrading. That is up to the user. I certainly can't do it.

BTW, this problem is not unique to Atom. ST and every other editor with plug-ins has the same problem. It would just take more work to get the credentials. Actually, every single app you install on a desktop can do this.

I am normally a positive person but trying to add security to Atom is tilting at windmills.

@nathansobo nathansobo commented Jan 31, 2015

We'll see. We wouldn't implement anything that made it suck to hack on Atom but I'm keeping an open mind. I want to err on the side of power but I'm optimistic we can have security and power by applying some creative thinking.

@zephraph zephraph commented Feb 3, 2015

As a temporary stop-gap solution, would it be possible to make a package, atom-shell add-on, or something that logs all http requests/external traffic and alerts/informs the user? The trick is making it so another package can't temporarily disable it to do its dirty business invisibly. I assume this means running Atom with an optional flag like --monitor-http or something.
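
A rough sketch of what that stop-gap could look like, wrapping http/https and surfacing every outgoing host. It has exactly the weakness mentioned above: another package could simply re-patch these functions, so this is visibility rather than enforcement. The notify callback is a placeholder:

```js
const http = require('http');
const https = require('https');

function monitorOutboundTraffic(notify) {
  [http, https].forEach(function (mod) {
    const originalRequest = mod.request;
    mod.request = function (options, callback) {
      const host = typeof options === 'string' ? options : (options.hostname || options.host);
      notify('Outbound request to ' + host); // e.g. show an Atom notification
      return originalRequest.call(mod, options, callback);
    };
  });
}
```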

@mark-hahn mark-hahn commented Feb 3, 2015

Having visibility is good. Very few packages talk to the internet so that might be feasible. It never occurred to me that an evil package would have to go to the net to collect the credentials. Or maybe it wouldn't have to?

I would recommend it be a package -- just enable/disable the package to turn it on and off.

The warning should have a button to whitelist the package responsible. After a while you should get no false alarms. False alarms would destroy the usefulness. You would start to just click OK on everything.

Whitelisting them when they happen is as good as requiring packages to get permission when installed. It's even better because you would know if the activity is happening when it should, i.e. when you perform some activity to trigger the action.


@mark-hahn mark-hahn commented Feb 3, 2015

It occurs to me that the evil package could corrupt the policing package. If it went to the trouble to find the credentials this wouldn't be much more work. Even if it was a C++ add-on the evil package could intercept the UI.

I don't want to be a broken record, but in general this can't be solved.


@nathansobo nathansobo commented Feb 3, 2015

@mark-hahn Isn't the application keychain on OS X designed to protect against access to credentials via the file system? I think it uses IPC or something to ensure the information can only be accessed inside specific applications. In that case, the only way to get the credentials would be to go through an official API.

@benogle benogle commented Feb 3, 2015

the only way to get the credentials would be to go through an official API

They could just require keytar, and request creds. Because it's loaded into the atom context, it'll ask the user if atom can access the keychain. If they've already granted access to atom at some point, then any requests to the keychain (from any random package!) would be silent.
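
To make the concern concrete, under the current model a couple of lines are enough for any package in the Atom context to read a token some other package stored. The service and account names here are invented, and newer keytar versions return a promise:

```js
const keytar = require('keytar');
// Nothing scopes this lookup to the package that originally stored the token.
const token = keytar.getPassword('github', 'atom-github-token');
```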

@mark-hahn mark-hahn commented Feb 3, 2015

Isn't the application keychain on OS X designed to protect against access to credentials

If the package that needs the credentials can get them then any package can.


@nathansobo nathansobo commented Feb 3, 2015

They could just require keytar, and request creds.

We could disallow them from requiring keytar or any other non-whitelisted native library.

If the package that needs the credentials can get them then any package can.

It seems like you didn't read my ideas about using an object-capabilities model, which would actually allow us to make this statement untrue. Only the package that needed the credentials would have access.

@zephraph zephraph commented Feb 3, 2015

@mark-hahn, it doesn't have to go to the internet to get your credentials, but if a package you're not expecting to access an external network starts doing so, that may prompt you to take a closer look at it.

Package creators can theoretically do all sorts of nasty things, but the larger and more complicated the exploit the more likely someone will see it and report them.

@anaisbetts anaisbetts commented Feb 3, 2015

We could disallow them from requiring keytar or any other non-whitelisted native library.

If you try to go down an iOS-style sandbox route, then you've hamstrung Atom packages quite a bit for a gain that is not particularly valuable imho. The likelihood that you'll make a bulletproof sandbox given the platform is very low (i.e. someone truly malicious will still have their way, Atom is too big to sanely sandbox), but you will have made it much, much harder for legitimate package authors.

Plugins are code running on someone's computer, just like any other program I download off the Internet. That's okay. People curl bash scripts and run them under sudo, for fuck's sake. We don't need to try to sandbox software that users explicitly chose to install on their computers.

What we need to do is reduce the discoverability of code that doesn't do the Right Thing by kicking it off the package registry before it causes problems for users. If people install code from a random GitHub repo that does shady shit, that's their business, but they should feel safe that anything they find on atom.io is Legit™.

@lee-dohm lee-dohm commented Feb 3, 2015

I tend to agree with @paulcbetts on this one.

I feel like Emacs has a good model here: there are multiple package repositories run by different groups, some updating faster with fewer hoops to jump through to get your package out there, and others updating much more slowly but consequently being much more cautious and offering a more curated approach. This would give people more options for how safe they want to be, even giving the community opportunities to build a policing infrastructure beyond what GitHub is prepared to provide.

@envygeeks envygeeks commented Feb 5, 2015

The logging idea is fine, but only if it's in the user's face, with a whitelist for certain URLs. It should also take note of any file system access outside the project folder and any folder the package explicitly states it wants access to; this greatly increases the chances of somebody catching something odd going on. For example, if you install a package that highlights colors in scss files and it decides to access ~/.local/share, that should be alerted on, because why does it need data in ~/.local/share? Its configuration should be in ~/.config or ~/.atom/config, or preferably handled through Atom itself, which will decide what is best.

You could also provide AppArmor profiles (which I would be happy to help with) for users who need extreme security, locking Atom down to specific directories. You could also decrease exposure on Linux by having a polkit process that Atom communicates with over D-Bus, but that is mostly for reading and writing and doesn't really do much to reduce exposure.

@zephraph zephraph commented Feb 5, 2015

I'm starting to feel like this is really just a moot point.

I've decided I'm going to write a github-auth package for the stuff I'm working on and use keytar to store the tokens. In big bright letters I'll write ANY PACKAGE CAN ACCESS YOUR TOKENS!... Yeah. It kinda sucks, but I guess it's not that bad. It's made me definitely stop and think before downloading any packages.

Hey guys, do you think we could have a "trusted developer" status or something? I'm not sure how one would go about getting the status, but I think it would help a lot of people. For instance, if someone on the Atom team wrote a package, it's probably safe to use.

@benogle benogle commented Feb 17, 2015

@oreoshake oreoshake commented Feb 17, 2015

Disclaimer: I know very little about Atom's model.

Has a more granular model been discussed? I see a few capabilities listed that may or may not be needed by an application at all. Not that presenting users with permission models is highly effective, but hey.

FWIW, none of my 17 packages could run in a sandbox/jail.

What is the specific issue preventing your package from running in a sandbox? Are these packages something the majority of users would install or are they somewhat of a corner case? Is a more granular model possible or will all plugins need complete access?

using atom-shell/chromium native contexts

This sounds very interesting to me (being ignorant). Is there a reason this model can't be used or is sandboxing entirely out of the question?

I don't understand. I can hide code in my package to erase someone's disk if I want to.

And I don't think that should be allowed either. I'd want to jail this by default too. Again, a more granular model would limit this. Again, I have no idea how feasible this is.

@mark-hahn mark-hahn commented Feb 17, 2015

What is the specific issue preventing your package from running in a sandbox?

Mostly node features. I need access to disk IO, networking, etc.

Are these packages something the majority of users would install

They should install them (grin). See live-archive, which was called amazing in the Atom blog.

And I don't think that should be allowed either.

How can the sandbox prevent erasing a disk while allowing normal disk access?


@benogle benogle commented Feb 18, 2015

After talking to @ptoomey3, there are things we could do to improve trust in packages from atom.io:

  • Package quality metrics. We've talked about this in the past. Showing how many people are actually using the package via uninstalls and disables. cc @thedaniel
  • Some kind of "Registered developer program" where people verify identity
  • Some kind of package approval process for the unverified folks
  • Not installing or loading packages that have been flagged/restricted, periodically checking the restricted list and disabling a package if it shows up on the list (a rough sketch follows this list)
  • Maybe static analysis for native modules/disk access that requires approval to make it into the package directory
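
A rough sketch of the flagged/restricted-list check, for illustration only; the URL and response shape are invented:

```js
const https = require('https');

// Fetch the hypothetical restricted-package list published by atom.io.
function fetchRestrictedList(callback) {
  https.get('https://atom.io/api/restricted-packages', function (res) {
    let body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () { callback(JSON.parse(body)); }); // e.g. ["some-bad-package"]
  });
}

// Consulted before activating a package, and re-checked periodically.
function shouldLoad(packageName, restrictedList) {
  return restrictedList.indexOf(packageName) === -1;
}
```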

@oreoshake oreoshake commented Feb 18, 2015

And this is where I might be boo'd off stage:

How can the sandbox prevent erasing a disk while allowing normal disk access?

chroot'ing when possible, which obviously only handles the case where the extension needs access to a directory, not a specific directory/file.

See live-archive which was called amazing in the Atom blog.

That is pretty 🆒 😎

@ptoomey3 ptoomey3 commented Feb 18, 2015

How can the sandbox prevent erasing a disk while allowing normal disk access?

This is all possible, it just isn't easy (and hence maybe not worth doing). Applications that want to restrict this kind of thing can have a trusted broker that all callers must pass through to perform "privileged actions". So, in the case of disk access you could have a trusted broker that validates that each access abides by the policy. But, retrofitting such a model into an application that was not designed that way from the start is not easy and has definite downsides. For example, applications you download from the Apple App Store have filesystem sandboxing. If you want to grant access to the filesystem a trusted broker file open dialog is used to explicitly authorize access to a specific file/etc. But, this creates issues such as how you grant access to an entire subdirectory, etc. So, while it is possible, I'm not sure the benefits outweigh the downsides in this case.
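
A minimal sketch of that trusted-broker pattern for disk access, where packages never touch fs directly and each path is checked against the directories the user granted. The broker shape and policy format are illustrative only:

```js
const fs = require('fs');
const path = require('path');

function makeFsBroker(allowedRoots) {
  function assertAllowed(target) {
    const resolved = path.resolve(target);
    const ok = allowedRoots.some(function (root) {
      return resolved.indexOf(path.resolve(root) + path.sep) === 0;
    });
    if (!ok) {
      throw new Error('Access to ' + resolved + ' is outside the granted directories');
    }
  }

  return {
    readFile: function (target, encoding) {
      assertAllowed(target);
      return fs.readFileSync(target, encoding);
    },
    writeFile: function (target, data) {
      assertAllowed(target);
      fs.writeFileSync(target, data);
    }
  };
}
```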

@mark-hahn mark-hahn commented Feb 18, 2015

@benogle: Those are great ideas. Security at the reputation level is the only thing that seems possible. I'm still a skeptic about static analysis though. If I knew the algorithm used by static analysis I could easily get around it.

Showing how many people are actually using the package via uninstalls and disables.

This would be awesome even if not related to security. I have to decide what packages to support/improve, and if I knew none of the downloads were being used (which is probably often the case) I wouldn't waste my efforts.


@envygeeks envygeeks commented Feb 18, 2015

Just as you are skeptical of static analysis, I think security by reputation is a joke and a game. Not only is it a game, it's a game I could set up and win rather quickly; on top of that, it's an abusive, dictatorial scheme that lends itself to dickery by douche cookies who get a hair up their ass.

I am not saying that is you; I am expressing my utter disdain for fake security and abusive systems.

To be honest, if you cannot implement full security with sandboxes and triggers, then log everything and alert on it like it's a NOC, because then you put the problem on the user by throwing it in their face and saying "I'm pretty sure this is going wrong; report it or ignore it."

@mark-hahn mark-hahn commented Feb 18, 2015

it's an abusive, dictatorial scheme that lends itself to dickery by douche cookies who get a hair up their ass.

So you have an opinion?


@thedaniel thedaniel commented Feb 18, 2015

Let's keep it civil, folks.

@anaisbetts anaisbetts commented Feb 18, 2015

What do we want?

Let's focus back on the end goal: at the end of the day, people can trust the plugins hosted at atom.io to be Legitimate and not do unexpected things (either malicious or otherwise). This is the kind of goal the Pareto Principle applies to; it doesn't have to be 100% bulletproof, it needs to strike a balance between time investment and packages being useful on one side and security concerns on the other. A few small things will get you a lot of benefit, even if they're not a silver bullet solution.

Every Package Should Have an Owner

In the short term, guaranteeing that every package has an associated GitHub account (in lieu of a repo) would solve the identity problem, especially if it flagged brand new GitHub accounts for review (i.e. you can't automate creating a GH acct then immediately posting a shady package).

Bad packages are Revoked Immediately

If I were to pick a 2nd feature to add, it'd be the package blacklist that @benogle suggested - if a package is reported and verified to be malicious, being able to immediately disable it from loading on anyone's machine would be extremely valuable for mitigating damage.

If Atom even did just these two measures, it would be a huge step towards that end goal above, without a significant time investment (since both of these are pretty straightforward to add to the existing infrastructure).

@zephraph zephraph commented Feb 18, 2015

I'm all for that.

I think the worst thing we could possibly do is nothing. Anything, no matter how small, is better than what we currently have.

@envygeeks envygeeks commented Feb 18, 2015

@mark-hahn Yes, Dear Leader. In all seriousness though, I was not saying that was you; I was simply responding to the idea, never trying to imply that it could be or was you. But if you took it that way (personally), that's fair enough, and there is nothing I can or will do to resolve that, because my point about those systems stands and has been proven time and time again.


@paulcbetts A banning system is great; disabling something on my computer is not. Unless that "disabling crap on my computer" behavior can itself be disabled, I (and I'm sure plenty of others) would have to dissent, because it's my problem once it hits my computer, not Atom's problem. There are many "would you allow somebody into your home without your explicit notice and approval to disable this" scenarios, and they are all "no" for me, because it's my problem and my property. Alert the user and let them take responsibility; it is their computer, and they should be conscious of all decisions because it is their property.

@anaisbetts anaisbetts commented Feb 18, 2015

@envygeeks You do know that every major desktop and mobile OS also has this capability, right?

@envygeeks envygeeks commented Feb 18, 2015

@paulcbetts Except, I use Linux and my Android phone does not use Android Market.

@zephraph zephraph commented Feb 18, 2015

Eh, you could always go for the best of both worlds. Automatically disable it and give a message saying that it was disabled for x reason. Have a new section in the settings-view packages tab called "banned/blocked/malicious/etc" and give the user the ability to add an exception for a certain package with a big warning sign before the package is actually enabled.

Also, do you think we'd need a system or method of appealing bans/blocks/whatever?

@envygeeks envygeeks commented Feb 18, 2015

I would be fine with that, as long as I was aware and have a reason and can reverse it if I see fit (the idea you had about exceptions.)

I personally think it would be a good idea to have an appeal system just to be sure, but that all hinges on the Atom dev team not manually auditing everything before it gets banned. That would put a heavy strain on them, so since I suggested it, I am obligated to volunteer to help if they do, and I am volunteering if they do.

@anaisbetts anaisbetts commented Feb 18, 2015

Also, do you think we'd need a system or method of appealing bans/blocks/whatever?

I think that this will be so rare that it can be ad-hoc; if you're using a ban it's not a "grey area", it's an emergency situation and the violation is probably pretty egregious.

@pandrei pandrei commented Mar 17, 2015

I'm looking to tackle a part of this issue, more specifically this, as part of a Google Summer of Code program. Can anyone help me understand what general direction this should head in? Actually, at this point, any info is useful!

Thank you,
Andrei

@envygeeks envygeeks commented Mar 17, 2015

tbh, we solved this problem entirely on our systems with Docker and IPTables. We even provide docker debs for people who want a similar situation (and I'd be happy to release our build recipe for those if people want them.)

@pandrei pandrei commented Mar 17, 2015

@envygeeks, I might be wrong, but I believe what you said is different from this issue. The idea is to provide this security model within Atom as a text editor (application), while what you describe involves making changes at the operating-system level.

You can guarantee the mentioned security on your Docker and IPTables setup, but that would not work on my local machine running CentOS 6.x, where I control root privileges.

@envygeeks envygeeks commented Mar 17, 2015

@pandrei They are two approaches to solving the same problem. Except instead of waiting, we figured out how to solve it now, because we don't have time to wait until later. Sure, what you are doing is more direct, but what I did works now, without the "later, later, later" that always happens.

@nathansobo nathansobo commented Mar 17, 2015

My concern is that each platform has its own story regarding containerization, and that still only solves the problem at an application-wide level.

For this reason, I'm interested in an application-level solution that allows granular control over package permissions in a cross-platform way. It will probably be quite challenging to get right, but it seems worth attempting to see what kind of obstacles we encounter. To be clear, no matter what solution we implement, everything that is possible in packages now should remain possible, the only difference being that users will explicitly grant packages various permissions at installation time. If you're installing a package that requires no escalated permissions, you don't need to think as hard or trust the author. If you install a package with heightened permissions, you need to think more carefully.

In my mind, a cross-platform solution could be divided into two major parts.

  1. Provide context isolation for Atom packages. This implies loading Atom packages in their own context and controlling what modules they are allowed to require via a whitelist. This may also entail constructing a membrane that can selectively limit access to core APIs (a rough illustration follows this list). At this level, loading untrusted native modules would be an explicit permission. If granted, all other bets would be off from a security perspective.
  2. Provide isolation for native modules. This is more of a 🌝 🚀. The idea would be to extend our granular permissions control even to modules that load native code via a native isolation mechanism such as NaCl. We would also need to keep the build process for native modules straightforward.
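
As a rough illustration of the membrane piece of item 1 (not a design), an ES Proxy could guard individual capabilities on a wrapped core object. The permission names and the guarded property are invented, and a real membrane would also have to wrap functions and preserve object identity:

```js
function wrapWithMembrane(target, grantedPermissions) {
  return new Proxy(target, {
    get: function (obj, prop) {
      if (prop === 'save' && grantedPermissions.indexOf('buffer:save') === -1) {
        throw new Error('This package was not granted buffer:save');
      }
      const value = obj[prop];
      // Wrap nested plain objects so the unprotected target never leaks out.
      if (typeof value === 'object' && value !== null) {
        return wrapWithMembrane(value, grantedPermissions);
      }
      return value;
    }
  });
}
```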

There are a lot of unknowns in this entire endeavor. For example, if a package has DOM access, does that represent an attack vector based on its ability to invoke commands? What other attack vectors are we not thinking about? Is it possible to expose a minimal set of permissions that scales up gracefully while retaining the ease of use of the API?

In the face of a challenging problem, there are always a million reasons why it won't work. I'm interested in exploring how we might elegantly solve our problems to figure out how we can make it work. It's true that no existing text editor has a security model, but it's never been our plan to settle for the limits of what has come before, at least not without trying. I welcome critical feedback on this proposal, but I'd like it to be specific and constructive. If someone can expose an attack vector in a specific way, that moves the conversation forward, because we can then discuss how it might be mitigated.

Again, no one should worry that we plan on dumbing anything down or taking away any of the power we've worked so hard to create for package authors.

@lock lock bot commented Apr 1, 2018

This issue has been automatically locked since there has not been any recent activity after it was closed. If you can still reproduce this issue in Safe Mode then please open a new issue and fill out the entire issue template to ensure that we have enough information to address your issue. Thanks!

@lock lock bot locked and limited conversation to collaborators Apr 1, 2018