Security between apps #44
For the sandbox solution 3) one will need to research the webappsec WG's "Confinement with Origin Web Labels" spec, which stems from the research at http://cowl.ws/ . I have not investigated this enough to know for sure, but it looks like it could help solve some of these problems.
One could also use WebAccessControl to give access to an origin, as suggested in the WebAccessControl wiki:

```turtle
[] acl:accessToClass [ acl:regex "https://bblfish.example.io/.*" ];
   acl:mode acl:Write;
   acl:agentClass [ acl:origin <https://apps.rww.io/> ] .
```

This could add the correct Access Control headers.
But one can also protect resources using rules such as:

```turtle
[] acl:accessToClass [ acl:regex "https://bblfish.solid.example/.*" ];
   acl:mode acl:Write;
   acl:agentClass [ acl:origin <https://apps.rww.io/> ] .
```
Yes, but then the question is: how does the user use multiple apps with the same data? How does their medical-status tracking app get access to their blood-sugar-history data? Perhaps something like the Android permissions dialog, but with many more options. Alas, I think we know such dialog boxes don't really work -- people end up just saying yes to everything because it's too hard to understand. Maybe asking at usage time would make them tractable.
I opened an issue on wac:origin. So first I'd replace the previous example with:

```turtle
[] acl:accessToClass [ acl:regex "https://bblfish.solid.example/.*" ];
   acl:mode acl:Write;
   acl:agent [ acl:user </card#me>;
               acl:origin <https://apps.rww.io/> ] .
```

Then, to answer @sandhawke's latest point, one could create a group of application origins trusted by the user:

```turtle
<#trustedApps> a foaf:Group;
    foaf:member [ acl:user </card#me>;
                  acl:origin <https://medical-status.apps.rww.io/> ],
                [ acl:user </card#me>;
                  acl:origin <https://blood-sugar-history.apps.hi-project.org> ] .
```

which could then be used in ACLs like this:

```turtle
[] acl:accessToClass [ acl:regex "https://bblfish.solid.example/medical/.*" ];
   acl:mode acl:Write;
   acl:agentClass <#trustedApps> .
```

That would allow one to express that the blood-sugar app and the medical-status app both have access to all the data in the medical folder, even if those apps both used WebID-TLS to authenticate, and yet not allow a shopping app to access that information (even when logged in with the same WebID!)
What is interesting is that once one adds
I'm pretty sure people are deeply concerned about read access (a.k.a. privacy) as well. All this ACL discussion is in the details of how one might implement some of the possible approaches. This may be premature until we have an outline of a viable UX.
The current design is that both the user AND the web app (if any) have to have the required access. It is not true that every application has root. Shall we close this issue as a misunderstanding? Is this not documented properly?
Changed the title to reflect the topic rather than an incorrect summary of the situation.
This is an important issue. Origin-based access control, namely the fact that BOTH the untrusted app AND the user must have the needed rights, is very important for security.
Sorry for the provocative phrasing about "root". But origin is only a small piece of a solution to the general issue I meant to record. Hopefully the new title is more appropriate.
But maybe, as I said at the start of the issue, this should be split. Perhaps:
There are many new tools coming up to help solve this:
And a whole bunch more at webappsec. For rollback, one should allow versioning through headers, where each resource can point to the head and the previous version, and perhaps to a full history. One would need to find a way to roll back using only HTTP; a sketch of what that might look like follows.
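A minimal TypeScript sketch of that rollback-over-HTTP idea, assuming the server advertises version links using the RFC 5829 relations `latest-version` and `predecessor-version` (whether a Solid server would use exactly these relations is an open question, and the helper names below are made up for illustration):

```typescript
// Sketch: roll a resource back one version using only HTTP, assuming
// each representation carries RFC 5829 Link relations in its headers.

function parseLinkHeader(header: string): Map<string, string> {
  // Tiny parser: '<url>; rel="x", <url2>; rel="y"' -> rel -> url
  const links = new Map<string, string>();
  for (const part of header.split(",")) {
    const m = part.match(/<([^>]+)>\s*;\s*rel="([^"]+)"/);
    if (m) links.set(m[2], m[1]);
  }
  return links;
}

async function rollback(resource: string): Promise<void> {
  const current = await fetch(resource);
  const links = parseLinkHeader(current.headers.get("Link") ?? "");
  const previous = links.get("predecessor-version");
  if (!previous) throw new Error("no previous version advertised");

  // Fetch the previous representation and PUT it back as the new head.
  const old = await fetch(previous);
  await fetch(resource, {
    method: "PUT",
    headers: { "Content-Type": old.headers.get("Content-Type") ?? "text/turtle" },
    body: await old.text(),
  });
}
```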
Oh, excellent, I've been looking for things like that, especially the first (COWL), and didn't know about it. It's so hard to follow everything relevant!
I'm experimenting in the direction of every app having its own private key. I notice @bblfish also experiments in a somewhat related direction in #47. This could allow granting permissions based on recognising the app, which either signed the payload using Linked Data Signatures or uses HTTP Signatures.

BTW: https://www.w3.org/Social/track/actions/62

See also: video - dotJS 2014 - James Halliday (substack) - Immutable offline webapps & http://hyperboot.org/

Q: how does an approach based on Origin play with native mobile/desktop apps or CLI apps?
I don't see how client-side apps can have private keys. Anyone can just look at the app code, see the key, and copy it to use themselves. Assuming we're talking about running apps in unmodified browsers, I think origin is our main tool. We can be a bit more secure if we copy every app to a suborigin of the user's pod (like alice-app37121.example.com) and run it from there. Then we know what bytes are running, which might be useful in forensics, or for metadata (like security reviews).

Desktop apps seem like a lost cause, since there's no security between desktop apps as it is, except for the root/user division, which is pretty hard to use for this. Maybe a Linux-containers system could help. It's rather like the modified-clients game-cheating scenario. I don't know the state of the art for that; if there's a strong solution, I haven't heard of it.

Mobile apps, ... have security between the apps, but I still don't see a way for the server to know which app code is really talking to it.
I generate a key pair when I start using a given app. For now I just store the private key using https://github.com/mozilla/localForage, but hopefully further work on the WebCrypto API will provide a better solution for storing private keys.

I'm also looking at using separate servers for the IdP and for datasets / blobs; the IdP will have higher security requirements and will only manage adding and revoking public keys via a special app (identity manager / keymaster). For now, to get started, a person will need to copy & paste the public key PEM from the app which generated it into the identity manager, which can add it to the IdP.
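For that copy & paste step, here is a small TypeScript sketch of how an app might export its public key as PEM via the WebCrypto API (the `publicKeyToPem` helper is an illustrative name, not something from this thread):

```typescript
// Sketch: export a WebCrypto public key in SPKI (DER) format and wrap it
// as PEM, so a person can paste it into an identity-manager app.

async function publicKeyToPem(publicKey: CryptoKey): Promise<string> {
  const spki = await crypto.subtle.exportKey("spki", publicKey);
  // Base64-encode the DER bytes and wrap at 64 columns.
  const b64 = btoa(String.fromCharCode(...new Uint8Array(spki)));
  const lines = b64.match(/.{1,64}/g) ?? [];
  return ["-----BEGIN PUBLIC KEY-----", ...lines, "-----END PUBLIC KEY-----"].join("\n");
}
```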
I don't see the use case. I mean, yes, if security is broken somewhere this might limit the intrusion a bit. But the basic model appears to provide the same security functionality as cookies.
The private key generated by the WebCrypto API cannot be inspected by JS code if it is generated in the correct manner. Check out the generateKey method. Here is some code being used in KeyStore.scala.
And cookies cannot be inspected by JavaScript from other origins, so what is the new security functionality?
If generated correctly, the private key cannot be inspected even by code from the same origin.
What additional functionality does that restriction provide?
The generateKey method is defined as:

```webidl
Promise<any> generateKey(AlgorithmIdentifier algorithm,
                         boolean extractable,
                         sequence<KeyUsage> keyUsages);
```

The extractable argument can be set to false. That means you can't publish the private key or even send it back to the origin. You can store it in IndexedDB though. That's why you can use it with HTTP Signatures.
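A minimal TypeScript sketch of that pattern, assuming an RSA signing key; the database and store names ("keystore", "keys") are made up for illustration:

```typescript
// Sketch: generate a non-extractable signing key pair and persist the
// opaque CryptoKey handles in IndexedDB, as described above.

async function createAppKey(): Promise<CryptoKeyPair> {
  // extractable = false: the private key material can never be exported,
  // not even by code running on the same origin.
  return crypto.subtle.generateKey(
    {
      name: "RSASSA-PKCS1-v1_5",
      modulusLength: 2048,
      publicExponent: new Uint8Array([1, 0, 1]),
      hash: "SHA-256",
    },
    false,                 // extractable
    ["sign", "verify"],    // keyUsages
  );
}

function storeKeyPair(pair: CryptoKeyPair): Promise<void> {
  return new Promise((resolve, reject) => {
    const open = indexedDB.open("keystore", 1);
    open.onupgradeneeded = () => open.result.createObjectStore("keys");
    open.onsuccess = () => {
      // CryptoKey objects are structured-cloneable: IndexedDB stores an
      // opaque handle, not the key bytes.
      const tx = open.result.transaction("keys", "readwrite");
      tx.objectStore("keys").put(pair, "app-key");
      tx.oncomplete = () => resolve();
      tx.onerror = () => reject(tx.error);
    };
    open.onerror = () => reject(open.error);
  });
}
```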
Besides the opportunities provided by using HTTP Signatures, I've started signing all the data on the client, before posting it to the server, using Linked Data Signatures. With all the signed documents version controlled on the server, I see some interesting possibilities to recover data after a malicious app alters the dataset, based on the distinct key this app used to sign the data (sketched below). Possibly one could also do something similar with cookies, but I want to have all the data signed anyway, so I didn't think about cookies here.
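A tiny TypeScript sketch of that recovery idea, under the assumption that the server keeps a newest-first version history and records which keyId signed each version (the types and names here are hypothetical):

```typescript
// Sketch: after deciding an app's key is malicious, find the newest
// version of a resource NOT signed by that key, so it can be restored.

interface Version {
  uri: string;           // where this version's representation lives
  signedByKeyId: string; // keyId of the signature on this version
}

function lastTrustedVersion(
  history: Version[],    // ordered newest-first
  revokedKeyId: string,
): Version | undefined {
  return history.find(v => v.signedByKeyId !== revokedKeyId);
}
```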
@bblfish I don't know how to ask my question more clearly. You keep talking about internals. I'm trying to ask about threat models / attack surfaces.

@elf-pavlik That's closer to answering my question, in that signed content is clearly a new function. But I don't yet see how that really helps with this issue.
What was the question? I was responding to this statement you made:
Well, that's just what the JS WebCrypto API allows: an app from an Origin can make a public/private key pair and not even the app or that origin will be able to see the private key -- when extractable is set to false.
As I understand security, we can only make intrusion harder but in theory can't prevent it. Having some strategy in place to also recover from intrusions seems to me like an improvement in terms of security.
Yes, that's what I was talking about when I said, "if security is broken somewhere this might limit the intrusion a bit." I see this issue as being about how we put up doors that can be locked. You're talking about making locks that are hard to pick. In the current Solid spec, it doesn't matter how good your crypto is: every app you run can still do whatever it wants to your data.
@bblfish I meant an app having a key in the sense that the key belongs to a particular entity which people conceptualize as being an "app", in the sense that "Angry Birds" or "Instagram" is an app. You're talking, instead, about an installed app having a key, in which case the key actually belongs to the user owning the device on which it's installed, although they may be legally restricted from getting at it. If apps could have keys, that would help with this issue #44, but I don't think installed apps having keys helps, or at least not very much.
Well, keys are tied to origins, and if you use HTTP Signatures you always get two pieces of information:

- the Origin of the app (since the key pair was created by, and is only usable from, that origin), and
- the keyId, which names a particular key instance.

So you get the two pieces of information needed: the origin, and the name of a particular instance.
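For concreteness, here is a TypeScript sketch of what signing a request with such a key could look like, loosely following the draft-cavage HTTP Signatures scheme; the keyId URL and helper names are assumptions for illustration, not something specified in this thread:

```typescript
// Sketch: sign a GET request with a WebCrypto private key and attach an
// HTTP Signatures header whose keyId reveals both the app's origin and
// the particular key instance.

async function signedFetch(
  url: string,
  privateKey: CryptoKey, // non-extractable signing key, e.g. from the
                         // generateKey sketch earlier in this thread
  keyId: string,         // e.g. "https://medical-status.apps.rww.io/keys#k1"
): Promise<Response> {
  const date = new Date().toUTCString();
  const target = new URL(url);
  const signingString =
    `(request-target): get ${target.pathname}\n` + `x-date: ${date}`;

  const sig = await crypto.subtle.sign(
    "RSASSA-PKCS1-v1_5",
    privateKey,
    new TextEncoder().encode(signingString),
  );
  const sigB64 = btoa(String.fromCharCode(...new Uint8Array(sig)));

  return fetch(url, {
    headers: {
      // fetch() forbids setting the Date header, so a custom X-Date
      // header is signed instead (a deviation a server would have to accept).
      "X-Date": date,
      Authorization:
        `Signature keyId="${keyId}",algorithm="rsa-sha256",` +
        `headers="(request-target) x-date",signature="${sigB64}"`,
    },
  });
}
```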
None of that is relevant to this issue unless you actually are proposing an architecture that makes it so.
Yes, I do have a proposal for extending Web Access Control rules so that they use both the Origin header and HTTP Signatures, which is detailed above in the comment of January 31. This allows one to specify that apps from origin O, when used by person P, can read or write to resource collection C. The owner of collection C can make up his own mind as to which apps he will give access to, or he could indeed use a certification service that lists groups of origins that are trustworthy (according to that authority). What will be a bit more tricky to express in RDF is something along the lines of: given a list of friends F and a set of origins O, we only want our friends F using apps from O to have access.
But doesn't that return us to the land of lock-in? One of the goals here is to allow people to use whatever apps they want.
Not really: you can have an open and competitive evaluation system of apps, and people can choose different trust anchors. They could trust themselves, or they could trust an open source community, or ... Of course this does mean that you need agreement among the people working on a set of resources on which apps they trust... Hopefully improvements in JS security from tools such as the ones I mentioned on Feb 7th will help. And perhaps future development of secure programming paradigms could help even more. This is certainly a research topic. What we have here is something we can work with for the moment, and we can integrate better ideas as they come up.
This might split off into a dozen issues, but let's start with one.
In Solid today, every application has the same access to your pod. It can read all your data, delete all your data, and write new data wherever it wants in your storage. It can use your contact information to put customized messages in the inbox of each person you know who has an inbox. It can send a copy of all your data to a secret server, with or without telling you. It can blackmail you or ask for ransom.
It's like desktop software, for better or worse.
To expose yourself to this, you have to visit a website with malicious code (and sometimes do a WebID login). This might happen because you have been tricked into visiting the site ("check out this new Solid app!"), or it might be a site you trust which has become infected.
In this sense, Solid is no worse than many browser security bugs that have allowed malicious websites to take over the machines of people who visit. Still, I don't think any vendors purposely choose to give their browsers such vulnerabilities, and if they did, I don't think their customers would be at all happy about it. So we should probably figure out how to tighten this up before users do anything important with Solid.
I've heard several ideas about how to solve this problem, but it's not clear to me how to put them together in a way that works.