Security between apps #44

Open
sandhawke opened this issue Jan 27, 2016 · 34 comments

Comments

@sandhawke

This might split off into a dozen issues, but let's start with one.

In solid today, every application has the same access to your pod. It can read all your data, delete all your data, and write new data wherever it wants in your storage. It can use your contact information to put customized messages in the inbox of each person you know who has an inbox. It can send a copy of all your data to a secret server, with or without telling you. It can blackmail you or ask for ransom.

It's like desktop software, for better or worse.

To expose yourself to this, you have to visit a website with malicious code (and sometimes complete a WebID login). This might happen because you have been tricked into visiting the site ("check out this new solid app!"), or because it's a site you trust which has become infected.

In this sense, solid is no worse than many browser security bugs that have allowed malicious websites to take over the machines of people who visit. Still, I don't think any vendor purposely chooses to give its browser such vulnerabilities, and if one did, I don't think its customers would be at all happy about it. So we should probably figure out how to tighten this up before users do anything important with solid.

I've heard several ideas about how to solve this problem, but it's not clear to me how to put them together in a way that works.

  1. A "trustworthy app" certification program (also called "beneficent apps").
  2. Each app provider or app version needs to obtain your permission before interacting with your pod or someone else's pod in particular ways, like reading your contacts or your financial data, or writing to a friend's inbox. Origin fields in permissions are one way to implement this.
  3. Apps run in some kind of sandbox/jail, so they need to ask permission before making HTTP requests to random servers (things that are not your pod).
  4. Apps are run from your own pod, so you have a copy of the code, in case forensics are needed, and to ensure permissions are given only to a particular version.
@bblfish
Member

bblfish commented Jan 29, 2016

For the sandbox solution (3), one will need to research the WebAppSec WG's "Confinement with Origin Web Labels" (COWL) spec, which grew out of the research at http://cowl.ws/. I have not investigated it enough to know, but it looks like it could help solve some of these problems.

@bblfish
Member

bblfish commented Jan 30, 2016

One could also use WebAccessControl to give access to an origin, as suggested in the WebAccessControl wiki:

[] acl:accessToClass [ acl:regex "https://bblfish.example.io/.*" ];  
   acl:mode acl:Write; 
   acl:agentClass [ acl:origin <https://apps.rww.io/> ] . 

This could lead the server to add the corresponding Access-Control header:

Access-Control-Allow-Origin: https://apps.rww.io

@bblfish
Member

bblfish commented Jan 31, 2016

But one can also protect resources using the acl:origin relation, so that a space can only be written to by code from one origin. This would still allow WebID-TLS authentication, while protecting against other origins overwriting the code.

[] acl:accessToClass [ acl:regex "https://bblfish.solid.example/.*" ];  
   acl:mode acl:Write; 
   acl:agentClass [ acl:origin <https://apps.rww.io/> ] .

@sandhawke
Author

Yes, but then the question is: how does the user use multiple apps with the same data? How does their medical-status tracking app get access to their blood-sugar-history data? Perhaps something like the Android permissions dialog, but with many more options. Alas, I think we know such dialog boxes don't really work -- people end up just saying yes to everything because it's too hard to understand. Maybe asking at usage time would make them tractable.

@bblfish
Member

bblfish commented Jan 31, 2016

I opened an issue on wac:origin.
[updated this text after work on wac:origin Feb 2nd 2016]

So first I'd replace the previous example with:

[] acl:accessToClass [ acl:regex "https://bblfish.solid.example/.*" ];  
   acl:mode acl:Write; 
   acl:agent [ acl:user </card#me>; 
               acl:origin <https://apps.rww.io/> ] .

Then, to answer @sandhawke's latest point, one could create a group of application origins trusted by the user:

<#trustedApps> a foaf:Group;
     foaf:member [ acl:user </card#me>;
                   acl:origin <https://medical-status.apps.rww.io/> ],
                 [ acl:user </card#me>; 
                   acl:origin <https://blood-sugar-history.apps.hi-project.org> ] .

which could then be used in ACLs like this:

[] acl:accessToClass [ acl:regex "https://bblfish.solid.example/medical/.*" ];  
   acl:mode acl:Write; 
   acl:agentClass <#trustedApps> .

That would express that the blood-sugar app and the medical-status app both have access to all the data in the medical folder, even if both apps use WebID-TLS to authenticate, and yet not allow a shopping app to access that information (even when logged in with the same WebID!).

@bblfish
Member

bblfish commented Feb 2, 2016

What is interesting is that once one adds acl:origin-type restrictions for apps, I think ACLs for write access will quickly move to using mostly those, and much less the simple acl:agent relation to a WebID, since the latter gives every application access. For read access there may be less need to restrict to a group of apps. Another place where direct (non-app-mediated) authentication may be useful would be authenticating in order to generate one-time passwords sent to one's e-mail address.
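
For example, a minimal sketch of that read/write split, reusing the <#trustedApps> group from the previous comment (the resource paths are only illustrative):

[] acl:accessToClass [ acl:regex "https://bblfish.solid.example/medical/.*" ];
   acl:mode acl:Read;
   acl:agent </card#me> .   # read: any app acting for the owner may read

[] acl:accessToClass [ acl:regex "https://bblfish.solid.example/medical/.*" ];
   acl:mode acl:Write;
   acl:agentClass <#trustedApps> .   # write: only the trusted app origins used by the owner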

@bblfish bblfish mentioned this issue Feb 2, 2016
@sandhawke
Author

I'm pretty sure people are deeply concerned about read access (aka privacy), as well.

All this ACL discussion is about the details of how one might implement some of the possible approaches.

This may be premature until we have an outline of a viable UX.

@deiu deiu added the discussion label Feb 4, 2016
@timbl timbl changed the title Every App Has Root Origin-based Access Control for Web Apps (was: Every App Has Root) Feb 7, 2016
@timbl
Contributor

timbl commented Feb 7, 2016

The current design is that both the user AND the web app (if any) have to have the required access. It is not true that every application has root. Shall we close this issue as a misunderstanding? Is this not documented properly?

@timbl
Contributor

timbl commented Feb 7, 2016

Changed title to reflect the topic rather than an incorrect summary of the situation

@timbl
Contributor

timbl commented Feb 7, 2016

This is an important issue, as origin-based access control, the fact that BOTH the untrusted app AND the user must have the needed rights, is very important for security.

@sandhawke sandhawke changed the title Origin-based Access Control for Web Apps (was: Every App Has Root) Inter-app security Feb 7, 2016
@sandhawke sandhawke changed the title Inter-app security Security between apps Feb 7, 2016
@sandhawke
Author

Sorry for the provocative phrasing about "root". But origin is only a small piece of a solution to the general issue I meant to record. Hopefully the new title is more appropriate.

@sandhawke
Author

But maybe, as I said at the start of the issue, this should be split. Perhaps:

  • granting access to (origin+user)
  • granting access to (frozen code + user)
  • what are the right permission chunks (granularity), and what is the process for defining them (like "all contacts", "professional contacts", "phone numbers", "home phone numbers of professional contacts", etc.)
  • how do we approach the UI for these permissions
  • mitigating risk/damage from app malice/error. Can users roll back? Can they delete messages sent as them by a malicious/broken app?
  • how do we develop an ecosystem which drives toward more appropriate trust placement and more trust

@bblfish
Member

bblfish commented Feb 7, 2016

There are many new tools coming up to help solve this:

And a whole bunch more at webappsec

For roll-back, one should allow versioning through headers, where each resource can point to the latest version, the previous version, and perhaps a full history. One would need to find a way to roll back using only HTTP.
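
A hedged sketch of what such version navigation could look like over plain HTTP, using the link relations from RFC 5829 (the version URIs are made up for illustration):

HTTP/1.1 200 OK
Link: <https://bblfish.solid.example/medical/readings?version=12>; rel="latest-version"
Link: <https://bblfish.solid.example/medical/readings?version=10>; rel="predecessor-version"
Link: <https://bblfish.solid.example/medical/readings?versions>; rel="version-history"

Rolling back could then be a GET of the predecessor version followed by a PUT of that representation back to the resource, though that is only one possible mechanism.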

@sandhawke
Author

Oh, excellent, I've been looking for things like that, especially the first (COWL), which I didn't know about. It's so hard to follow everything relevant!

@elf-pavlik
Member

I am experimenting in the direction of every app having its own private key. I notice @bblfish also experiments in a somewhat related direction in #47

This could allow granting permissions based on recognising the app, which either signs the payload using Linked Data Signatures or uses HTTP Signatures

BTW: https://www.w3.org/Social/track/actions/62

See also: video - dotJS 2014 - James Halliday (substack) - Immutable offline webapps & http://hyperboot.org/

Q: how does an approach based on Origin play with native mobile/desktop apps or CLI apps?

@sandhawke
Author

I don't see how client-side apps can have private keys. Anyone can just look at the app code, see the key, and copy it to use themselves.

Assuming we're talking about running apps in unmodified browsers, I think origin is our main tool. We can be a bit more secure if we copy every app to a suborigin of the user's pod (like alice-app37121.example.com) and run it from there. Then we know what bytes are running, which might be useful for forensics or for metadata (like security reviews).

Desktop apps seem like a lost cause, since there's no security between desktop apps as it is, except for the root/user division, which is pretty hard to use for this. Maybe a Linux-containers system could help. It's rather like the modified-client game-cheating scenario. I don't know the state of the art for that; if there's a strong solution, I haven't heard of it.

Mobile apps, ... have security between the apps, but I still don't see a way for the server to know which app code is really talking to it.

@elf-pavlik
Member

I don't see how client-side apps can have private keys. Anyone can just look at the app code, see the key, and copy it to use themselves.

I generate a key pair when I start using a given app, currently using

soon switching to

For now I just store the private key in https://github.com/mozilla/localForage but hopefully further work on the WebCrypto API will provide a better solution for storing private keys

I am also looking at using separate servers for the IdP and the datasets / blobs; the IdP will have higher security requirements and will only manage adding and revoking public keys via a special app (identity manager / keymaster). For now, to get started, a person will need to copy and paste the public key PEM from the app which generated it into the identity manager, which can add it to the IdP.
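
A minimal sketch of that copy-and-paste step, assuming the key pair was created with the WebCrypto API (the function name is made up for illustration):

// Export the public half of a WebCrypto key pair as a PEM string
// that can be pasted into the identity manager.
async function publicKeyToPem(publicKey) {
  const spki = await crypto.subtle.exportKey("spki", publicKey);
  const base64 = btoa(String.fromCharCode(...new Uint8Array(spki)));
  return "-----BEGIN PUBLIC KEY-----\n" +
         base64.match(/.{1,64}/g).join("\n") +
         "\n-----END PUBLIC KEY-----";
}

The private half never needs to leave the browser, which ties in with the extractable discussion below.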

@sandhawke
Author

I don't see the use case. I mean, yes, if security is broken somewhere this might limit the intrusion a bit. But the basic model appears to provide the same security functionality as cookies.

@bblfish
Member

bblfish commented Mar 15, 2016

The private key generated by the WebCrypto API cannot be inspected by JS code if it is generated in the correct manner. Check out the generateKey method. Here is some code being used in KeyStore.scala

@sandhawke
Author

And cookies cannot be inspected by JavaScript from other origins, so what is the new security functionality?

@bblfish
Member

bblfish commented Mar 15, 2016

If generated correctly, the private key cannot be inspected even by JS from the same origin.
For stronger requirements, use TLS client certs. See also https://github.com/w3ctag/client-certificates

@sandhawke
Author

What additional functionality does that restriction provide?

@bblfish
Member

bblfish commented Mar 15, 2016

The generateKey method is defined as

  Promise<any> generateKey(AlgorithmIdentifier algorithm,
                          boolean extractable,
                          sequence<KeyUsage> keyUsages );

The extractable slot is defined in §13.4 CryptoKey interface members

Reflects the [[extractable]] internal slot, which indicates whether or not the raw keying material may be exported by the application.

That means you can't publish the private key or even send it back to the origin. You can store it in IndexedDB though. That's why you can use it with HTTP Signatures.
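
A hedged sketch of generating and using such a non-extractable key with the WebCrypto API (the algorithm choice and the signed string are assumptions for illustration, not anything specified here):

// Generate a key pair whose private key can sign but can never be exported,
// not even by the code that created it (extractable = false).
const keyPair = await crypto.subtle.generateKey(
  { name: "ECDSA", namedCurve: "P-256" },
  false,                    // extractable: the raw private key material stays in the browser
  ["sign", "verify"]
);

// The CryptoKey objects can be persisted in IndexedDB and reused later,
// e.g. to sign the headers of a request for HTTP Signatures.
const signature = await crypto.subtle.sign(
  { name: "ECDSA", hash: "SHA-256" },
  keyPair.privateKey,
  new TextEncoder().encode("(request-target) host date")
);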

@elf-pavlik
Member

Besides the opportunities provided by using HTTP Signatures, I have started signing all the data on the client, before posting it to the server, using Linked Data Signatures. With all the signed documents version-controlled on the server, I see some interesting possibilities for recovering data after a malicious app alters the dataset, based on the distinct key that app used to sign the data. Possibly one could also do something similar with cookies, but I want to have all the data signed anyway, so I didn't think about cookies here.

@sandhawke
Author

@bblfish I don't know how to ask my question more clearly. You keep talking about internals. I'm trying to ask about threat models / attack surfaces.

@elf-pavlik That's closer to answering my question, in that signed content is clearly a new function. But I don't yet see how that really helps with this issue.

@bblfish
Member

bblfish commented Mar 15, 2016

What was the question?

I was responding to this statement you made:

I don't see how client-side apps can have private keys. Anyone can just look at the app code, see the key, and copy it to use themselves.

Well, that's just what the JS WebCrypto API allows: an app from an Origin can make a public/private key pair, and not even the app or that origin will be able to see the private key -- when extractable is set to false.

@elf-pavlik
Member

I'm trying to ask about threat models / attack surfaces.
@elf-pavlik That's closer to answering my question, in that signed content is clearly a new function. But I don't yet see how that really helps with this issue.

As I understand security, we can only make intrusion harder; in theory we can't prevent it. Having some strategy in place to also recover from intrusions seems to me like an improvement in terms of security.

@sandhawke
Author

Yes, that's what I was talking about when I said, "if security is broken somewhere this might limit the intrusion a bit."

I see this issue as being about how we put up doors that can be locked. You're talking about making locks that are hard to pick. In the current solid spec, it doesn't matter how good your crypto is: every app you run can still do whatever it wants to your data.

@sandhawke
Author

@bblfish I meant an app having a key in the sense that the key belongs to a particular entity which people conceptualize as an "app", in the sense that "Angry Birds" or "Instagram" is an app. You're talking, instead, about an installed app having a key, in which case the key actually belongs to the user owning the device on which it's installed, although they may be legally restricted from getting at it. If apps could have keys, that would help with this issue #44, but I don't think installed apps having keys helps, or at least not very much.

@bblfish
Member

bblfish commented Mar 15, 2016

Well, keys are tied to origins, and if you use HTTP Signatures you always get two pieces of information:

  1. the origin
  2. the signature made with a private key

So you get the two pieces of information needed: the origin, and the identity of a particular instance.
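
A hedged sketch of what such a request might look like, following the draft-cavage HTTP Signatures header format (the keyId, paths, and signature value are made up for illustration; the Origin header is supplied by the browser as usual):

POST /medical/readings HTTP/1.1
Host: bblfish.solid.example
Origin: https://apps.rww.io
Date: Tue, 15 Mar 2016 17:41:00 GMT
Authorization: Signature keyId="urn:example:app-instance-key",algorithm="rsa-sha256",headers="(request-target) host date",signature="MEUCIQ...base64..."

The server checks the Origin against its ACLs and verifies the signature against the public key registered for that app instance, giving it both the origin and the instance identity.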

@sandhawke
Author

None of that is relevant to this issue unless you actually are proposing an architecture that makes it so.

@bblfish
Member

bblfish commented Mar 16, 2016

Yes, I do have a proposal for extending Web Access Control rules so that they use both the Origin header and HTTP Signatures, which is detailed above in the comment of January 31. This allows one to specify that apps from origin O, when used by person P, can read or write to resource collection C.

The owner of collection C can make up his own mind as to which apps he will give access to, or he could indeed use a certification service that lists groups of origins that are trustworthy (according to that authority).

What will be a bit more tricky to express in RDF is something along the lines of: given a list of friends F and a set of origins O, we only want our friends in F using apps from O to have access.
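
One naive encoding with the vocabulary used earlier in this thread would be to enumerate every (friend, origin) pair as its own agent description; the WebIDs and origins below are made up for illustration:

<#friendsViaTrustedApps> a foaf:Group;
     foaf:member [ acl:user <https://alice.example/card#me>;
                   acl:origin <https://medical-status.apps.rww.io/> ],
                 [ acl:user <https://alice.example/card#me>;
                   acl:origin <https://blood-sugar-history.apps.hi-project.org> ],
                 [ acl:user <https://bob.example/card#me>;
                   acl:origin <https://medical-status.apps.rww.io/> ],
                 [ acl:user <https://bob.example/card#me>;
                   acl:origin <https://blood-sugar-history.apps.hi-project.org> ] .

That works for small F and O but grows as |F| × |O|; expressing the cross product directly would need something richer, which is presumably the tricky part.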

@sandhawke
Author

But doesn't that return us to the land of lock-in? One of the goals here is to allow people to use whatever apps they want.

@bblfish
Member

bblfish commented Mar 16, 2016

But doesn't that return us to the land of lock-in?

Not really, you can have an open and competitive evaluation system of apps, and people can choose different trust anchors. They could trust themselves, or they could trust an open source community, or ...

Of course this does mean that you need agreement between the people working on a set of resources on which apps they trust...

Hopefully improvements in JS security, with tools such as those I mentioned on Feb 7th, will help.

And perhaps future developments in secure programming paradigms could help even more. This is certainly a research topic. What we have here is something we can work with for the moment, and we can integrate better ideas as they come up.
