Prototype encrypting data client-side with the system's public key #92

Open
dtauerbach opened this issue Oct 22, 2013 · 78 comments
@dtauerbach
Contributor

Right now, as I understand it, the source uploads a sensitive document; that document is sent over Tor to the hidden service running on the source server; the source server encrypts the document; and it is only decrypted on the SVS. This means that if the source server is somehow compromised, an attacker could recover the plaintext of the document before it is encrypted.

Channeling some of the feedback from Patrick Ball at the techno-activism event tonight, it might make sense to instead encrypt on the client with the public key of the system. That way, if the source server is compromised, the data will still be protected so long as the SVS is secure, and the SVS has a stronger security model than the source server.

The way that was suggested to accomplish this is via a browser extension, or baking keys into the browser. In addition to being a lot of work, this brings up the whole can of worms that comes with key distribution (e.g. does the browser extension/patch serve as a CA?).

In the shorter term, one could just provide the public key with Javascript, and encrypt the document using it before sending it to the source server. There are two issues I see with this: first, adding Javascript may open up an attack vector if no Javascript is being used right now. Second, the attacker we've presumed to have control of the source server could modify the Javascript to include a different public key. The second problem I think is solvable with a super basic browser add-on or something that detects when a client sees unexpected Javascript. Not all clients have to run this. Given the attacker does not know who has submitted documents, she must attack everyone to attack her target. That means even if only a small percentage of people run the testing add-on, an effective attack (against everyone) will still be detectable.
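
For concreteness, a minimal sketch of what such in-page encryption could look like, written against the modern OpenPGP.js v5 API (the 2013-era API differed, and the meta-tag key delivery is an illustrative assumption, not the project's design):

```js
// Hypothetical in-page encryption with a server-provided public key
// (OpenPGP.js v5 API; the key-delivery mechanism is an assumption).
async function encryptForUpload(file) {
  const armoredKey = document.querySelector('meta[name="pgp-key"]').content;
  const publicKey = await openpgp.readKey({ armoredKey });
  const binary = new Uint8Array(await file.arrayBuffer());
  // Encrypt to the system's public key; only the SVS can decrypt.
  return openpgp.encrypt({
    message: await openpgp.createMessage({ binary }),
    encryptionKeys: publicKey,
    format: 'binary', // submit ciphertext bytes instead of armored text
  });
}
```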

[There should be a separate bug for if and how to move the conversation with the journalist to use a somewhat similar client-side approach.]

@fpietrosanti

I feel that the security model of encrypting with the node keys is not the best one, and SD should switch to the same approach used by GlobaLeaks: encrypting with the recipients' keys. Anyhow, leveraging OpenPGP.js is a good strategy; I'm following the project and it has improved a lot in the past year! Adding client-side crypto with server-provided keys will add a bit of perfect forward secrecy to the communication exchange, but it does need Javascript on the submission interface. In GlobaLeaks the submission client is fully JS, but I don't know whether Javascript on the submission interface is acceptable in the SD threat model.

@micahflee micahflee reopened this Oct 22, 2013
@micahflee
Contributor

Oops, didn't mean to close this!

@klpwired
Contributor

Just say no to Javascript crypto.

If the server is compromised to capture plaintext documents, it could just as easily be compromised to corrupt the javascript crypto code served to the source. So the gains are illusory. In the meantime, you'd be forcing (or at least encouraging) sources to turn off NoScript, making them vastly more vulnerable to Freedom Hosting-style malware.

@fpietrosanti

@klpwired The Javascript crypto gives you certain value related to PFS with respect to the real risk context.

So, Javascript crypto is valuable provided that you properly assess the kind of protection it will provide.

Anyhow, you must consider that in SD the default browser is the Tor Browser Bundle.
The Tor Browser Bundle has Javascript enabled by default, and I expect that no whistleblower would ever change the default configuration.

If you'd like to keep the philosophical choice of "keeping javascript off", I just agree to disagree :-)

@diracdeltas
Contributor

@klpwired @dtauerbach @fpietrosanti We're already using Javascript on the source interface (jQuery).

@klpwired Re: "If the server is compromised to capture plaintext documents, it could just as easily be compromised to corrupt the javascript crypto code served to the source," I don't agree because presumably we would perform the client-side javascript crypto in a browser extension, which the client would have to download [1] before using the source website. This actually provides extra security against server corruption, because the client would have the ability to check the source code of the extension and make sure their documents are actually being encrypted. You can essentially think of the browser extension as an OS-independent, user-friendly front-end to OpenPGP. It could be as safe as using GPG offline if the user doesn't have malware in their browser.

(thanks to @Hainish for having this conversation with me last night and bringing up some of these points.)

[1] Either we could bundle the extension with the Tor browser bundle, or the client could download it over Tor separately as a signed, deterministically-built package. We need to be careful that simply having the extension in your browser doesn't single you out as a securedrop source!

@dtauerbach
Contributor Author

I agree that there are dangers to turning on host-served Javascript and using Javascript crypto libraries. But I think the analysis deserves a more nuanced treatment. In particular, host-served Javascript can be compromised, but is also auditable. Suppose an attacker has compromised the source server, and can send malicious Javascript. If the client gets anything except the expected Javascript, it has the opportunity to raise a red flag and fail, or, perhaps more importantly, detect that the server has been compromised. It is much more difficult for the attacker to target particular individuals given that TBB is being used, so even if a handful of clients are doing this auditing due diligence, this raises the cost of serving malicious Javascript quite a lot. On the other hand, if the encryption happens server-side, then the attacker who has compromised the source server (but not the SVS) will simply have plaintext access to the documents and not have to raise extra audit flags by serving malicious Javascript.
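
To make the auditing idea concrete, the due diligence could be as simple as hashing the scripts the server serves and comparing them against a published known-good digest. A sketch, with the pinned digest and the alerting mechanism as assumptions:

```js
// Sketch of a client-side "audit" add-on: hash each script the server
// serves and flag anything unexpected. The digest value is a placeholder.
const EXPECTED_SHA256 = 'd2a84f4b8b650937ec8f73cd8be2c74a'; // placeholder

async function auditServedScript(url) {
  const body = await (await fetch(url)).arrayBuffer();
  const digest = await crypto.subtle.digest('SHA-256', body);
  const hex = Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, '0')).join('');
  if (hex !== EXPECTED_SHA256) {
    // Raise the red flag: the server may be serving malicious Javascript.
    console.warn('Unexpected script served from', url);
  }
}
```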

There are serious downsides of course:

  1. We would be encouraging sources to use Javascript on this page, but should be discouraging the use of Javascript as much as possible.
  2. We would be relying on the security of a Javascript crypto library which may have serious vulnerabilities and operates in a totally insecure runtime environment. But, taking a closer look, the only function the library is serving is to encrypt to a public key. Let's put a pin in the issue of whether or not this encryption happens correctly, as that is discussed below. The form submission already contains data that we should assume is malicious. Moreover, any outside Javascript that could affect this page (for example, through a malicious add-on that a TBB user installs) will most likely be able to affect the form submission too, and could, say, swap out the real file the source wants to submit with a malicious one. Still, there may be a narrow class of browser exploits that give access to another page's Javascript runtime, but NOT to the DOM of the other page. Moving to Javascript introduces an attack vector here.
  3. There is more of a chance for things to go wrong. Security issues aside, if the encryption happens incorrectly -- say, due to an add-on a source has that interferes with OpenPGP.js -- then the source will think she has submitted a document, and only when it gets to the SVS will the journalist realize that it cannot be read.
  4. Encrypting large documents may take significant time, which is another barrier that raises the cost to the source of submitting a document and makes a submission less likely.
  5. This adds complexity to the client, which we want to be as simple as possible.

I'd suggest it's worth thinking through carefully. Empirical data could be gathered about downsides 1-4 in order to weigh them against the upside. Client-side encryption provides a major benefit, and makes the increased security of the air-gapped SVS much more significant. And a longer term solution to consider would be to create a browser add-on that ships with TBB. That way the Javascript isn't host-served, but there's still a question of how the public keys of the SecureDrop sites get into the add-on -- the host could send the public key, but there would have to be some way to establish trust in that key.

@klpwired
Contributor

Well, not to strip the nuance away, but unauthorized plaintext access to documents being leaked to a journalist for publication is not the primary threat. De-anonymization of the source is. Making the system Javascript-dependent increases the risk to the source's anonymity in order to provide (again, illusory) gains in document confidentiality, a distant second in importance.

@Taipo

Taipo commented Oct 22, 2013

Tor Browser is not the only way people will access SD either: Tor2web users could well be using Chrome or, even worse, Internet Explorer. You will also have sources using throwaway internet-ready cell phones, which can access it through either Tor2web or Orbot.

I am not averse to the use of javascript for some front-end functions, but there are social issues with encrypting client-side with javascript, even more so when an extension has to be added on to do so.

Consider that the scope of potential sources ranges from technophobes to Snowdens. Then ask these two rhetorical questions:

Could an Edward Snowden type whistleblower (or in fact anyone who has read the leaks concerning EGOTISTICALGIRAFFE) be put off using a dead-drop system that employed client-side javascript to encrypt files, knowing that the vast majority of NSA efforts are focused on browser hijacking of the Firefox shipped with Tor? (In fact, anyone in contact with him could ask the real Snowden for his thoughts on this issue.)

Could a technophobe be put off by the extra step of having to manually install the extension, or at least the public key, rather than being presented with the common select-file field all computer users have become accustomed to?

Source de-anonymisation is the number one threat if it comes down to a weighing exercise.

@fpietrosanti

@Taipo In the SD threat model, Tor2web is not contemplated; it is in the GlobaLeaks one. We need to see what the decision will be regarding #43, but I expect that, following SD philosophy, there will be no compromise. Please consider that most whistleblowers are technologically unskilled and a little bit dumb, so the main effort is to try to protect them from their own mistakes, not from the NSA.

@klpwired If de-anonymization of the source is the main risk, then you need a very usable user interface with super-strong-and-useful awareness information. To do that, you will need a fancy UI with some major JS framework and a proper usability study made by UX design experts based on emotional-design concepts. Social risks are much more relevant than technological risks, IMHO.

@Taipo

Taipo commented Oct 22, 2013

@fpietrosanti My point about Tor2web is that it allows a user to access an SD instance using a wider variety of web browsers than Firefox, so any GPG encryption extensions would need to be available across a much wider range of browsers, or else browser-brand restrictions would be needed. I agree with you about technologically unskilled whistleblowers. That is basically what a 'technophobe' is; it's a slang word for the same thing. My apologies for the language-barrier issues (perhaps).

@Hainish
Contributor

Hainish commented Oct 22, 2013

I've been having this conversation on the securedrop-dev mailing list; I've copied my exchange with Patrick Ball:

Date: Mon, 21 Oct 2013 19:28:51 -0700
From: Patrick Ball pball@hrdag.org
To: bill@eff.org, Seth David Schoen schoen@eff.org, Micah Lee
micahflee@riseup.net
Subject: SecureDrop
X-Mailer: MailMate (1.7r3790)

hi Seth, Bill, and Micah,

My concern is essentially the same as the audit's final bullet in 3.4. In
short, this doesn't look to me much safer than HushMail or any other
host-based approach. If you can compromise the DD Source Server, the content
of the message (but not the source's communications metadata, thanks to Tor)
would be exposed to the attacker.

The solution that seems to me safest against the host-based attack I proposed in
the discussion tonight is to move the source's encryption into the human
source's browser. The guy at the discussion tonight who has hacked on
OpenPGP.js said that you have to secure the whole javascript stack, and
that's true, but:

If the server has to inject evil javascript in order to compromise encryption
done by the OpenPGP.js implementation in the Tor Browser Bundle, then the evil
javascript gets exposed to every visitor. That makes the evil javascript at
least potentially detectable. I think it's a big win to force the attack to
be visible (even if heavily obfuscated) to the user -- as opposed to being
completely invisible as evil code running deep in the server.

I think that encrypted public and private keys could be stored on the DD
Source Server if all the encryption and decryption -- of keys and of content
-- happened on the Human Source's computer. This way the source can deny that
she is interacting with SecureDrop.

Danny's point that Tor doesn't want to implement anything special for you or
for anyone is a good and important point. However, I would think you can
finesse this by asking the Tor browser people to include in the browser
basically generic crypto tools that could be used for any host-based crypto
system. That would include a fairly obvious API, including the
encryption/decryption parts, including potentially some way to audit for at
least some kinds of evil code. We can then play whack-a-mole with evil code.

I had a long conversation with Ben Adida about this a couple of years ago,
and he concluded then that it's impossible to completely secure. This said, I
still think that it might be possible to move the attack into a visible
place.

hope this helps -- PB.


Patrick Ball
Executive Director,
Human Rights Data Analysis Group
https://hrdag.org

@Hainish
Contributor

Hainish commented Oct 22, 2013

Date: Mon, 21 Oct 2013 21:32:48 -0700
From: William Budington bill@eff.org
To: Patrick Ball pball@hrdag.org
Cc: Seth David Schoen schoen@eff.org, Micah Lee micahflee@riseup.net
Subject: Re: SecureDrop
User-Agent: Mutt/1.5.21 (2010-09-15)

Hey Patrick,

I definitely like the idea of the encryption being done on the client side.
The problem with Hushmail wasn't that it was doing encryption on the server
side. Hushmail was actually doing encryption on the client side, but with a
Java application rather than javascript. The problem was that in the
delivery of this application, it was including modified code to certain
target IPs. Since we would be delivering the application to anonymized
clients via the Tor Browser Bundle, an insertion of malicious code could not
be targeted at certain IPs, and would be forced to be a blanket delivery,
thus risking exposition. Of course, a carefully timed attack could still be
performed, but this would require knowing when the source was going to log on and
performing the attack in a very narrow timeframe to reduce risk of exposition.

But just because an attack is exposable doesn't mean that it will be exposed.
It is unlikely that for all delivered instances of the code, someone will do
anything approaching a security audit. The only way I can see to prevent
this is actually having a browser extension that is versioned and signed by a
trusted source. Of course Tor has apprehensions about accepting browser
plugins liberally, which is understandable, but they may be inclined to make
an exception in the case of SecureDrop. But regardless, I feel that the
entire application would have to live within the extension, not just certain
cryptographic primitives that are exposed through the browser. This has the
same problem as HushMail.

The current problem is actually that the version of Firefox that the Tor
Browser Bundle uses cannot be used for cryptographic purposes, since it
(Firefox ESR 17.0.9 at the time of writing) does not provide access to the
newer API for random values in the browser, window.crypto.getRandomValues.
As you said, this shouldn't be a showstopper and we should start development
on a browser application anyway, in preparation for the day Tor does start
working with a newer version of Firefox or Chrome, and I agree with that.

Bill

@Hainish
Contributor

Hainish commented Oct 22, 2013

Date: Tue, 22 Oct 2013 09:49:55 -0700
From: Patrick Ball pball@hrdag.org
To: William Budington bill@eff.org
Cc: Seth David Schoen schoen@eff.org, Micah Lee micahflee@riseup.net
Subject: Re: SecureDrop
X-Mailer: MailMate (1.7r3790)

hi Bill,

first off, yes, certainly you may use anything from this thread in any way
that might benefit SecureDrop. More inline:

On 21 Oct 2013, at 21:32, William Budington wrote:

> Hey Patrick,
>
> I definitely like the idea of the encryption being done on the client side.
> The problem with Hushmail wasn't that it was doing encryption on the
> server side. Hushmail was actually doing encryption on the client side,
> but with a Java application rather than javascript. The problem was that
> in the delivery of this application, it was including modified code to
> certain target IPs. Since we would be delivering the application to
> anonymized clients via the Tor Browser Bundle, an insertion of malicious
> code could not be targeted at certain IPs, and would be forced to be a
> blanket delivery, thus risking exposition. Of course, a carefully timed
> attack could still be performed, but this would require knowing when the
> source was going to log on and performing the attack in a very narrow
> timeframe to reduce risk of exposition.

I know that Hushmail was encrypting client side, but by "host-based
approach," I specifically mean any attack that a compromised server can
direct at an identifiable user. I whined about this a lot in an article in
Wired last summer.

> But just because an attack is exposable doesn't mean that it will be
> exposed.

Of course not. But a non-exposable attack has zero chance of being detected.

> It is unlikely that for all delivered instances of the code, someone will
> do anything approaching a security audit.

True, but there might be a way to detect a necessarily incomplete but
possibly growable set of attacks in an automated way.

It's not clear to me that a perfect system can be built, but an imperfect
system that improves on the current approach while creating an evolving
problem for attackers seems to me like it would be a win.

> The only way I can see to prevent this is actually having a browser
> extension that is versioned and signed by a trusted source. Of course Tor
> has apprehensions about accepting browser plugins liberally, which is
> understandable, but they may be inclined to make an exception in the case
> of SecureDrop.

I think they'd be way more open to it if the add-on were somehow
generalizable to any host-based crypto system.

I am convinced by Danny's point that having a SecureDrop-specific extension
on one's machine is too incriminating for the user; and distributing such an
add-on ties Tor too closely to leaking. Either problem is a deal-breaker, I
think, and together, well, I doubt it's possible to sell.

> But regardless, I feel that the entire application would have to live
> within the extension, not just certain cryptographic primitives that are
> exposed through the browser. This has the same problem as HushMail.

I don't think the primitives approach has the same attack surface as HushMail
(or SilentCircle). The attack on built-in primitives has to be in the
server's javascript that somehow misuses the primitives, which I think is
much more detectable than breaking the primitives (which could be crazily
subtle).

> The current problem is actually that the version of Firefox that the Tor
> Browser Bundle uses cannot be used for cryptographic purposes, since it
> (Firefox ESR 17.0.9 at the time of writing) does not provide access to the
> newer API for random values in the browser, window.crypto.getRandomValues.
> As you said, this shouldn't be a showstopper and we should start
> development on a browser application anyway, in preparation for the day Tor
> does start working with a newer version of Firefox or Chrome, and I agree
> with that.

Good luck! and I look forward to following the developments -- PB.

@fpietrosanti

Regarding the specific OpenPGP.js threat model/uses, please join http://list.openpgpjs.org/ where those kinds of discussions happen every month!
Also look at the recently OpenTechnologyFund-funded Mailvelope project http://mailvelope.com which could be a nice target for improvements and uses, being set up for broad usage (plausible deniability) and well funded in its R&D plan.

@dtauerbach
Contributor Author

Thanks Bill. One option that I just discussed with @micahflee would be to encrypt client-side if and only if the user has Javascript running; if it is not running, display an alert of some sort encouraging the user to encrypt the documents herself before submitting.

In terms of threats, I don't think there is a big delta between "attacker having plaintext access to documents" and "attacker being able to identify the source" -- I think the documents will often be the most identifying piece of information about the source, perhaps more identifying than having root on the computer used to leak. I also don't necessarily agree that Snowden would be turned off by the idea of client-side Javascript-based cryptography but NOT by the idea that the submission platform has you send the documents in plaintext to a host, instead of encrypting them in a way that they can only be decrypted via an SVS.

I don't know what the right answer is, but I think this issue deserves careful consideration.

@garrettr
Contributor

Unauthorized plaintext access to documents being leaked to a journalist for publication is not the primary threat. De-anonymization of the source is.

@klpwired The concern here is that plaintext access to documents may lead to de-anonymization of the source due to identifying metadata in the documents.

We're already using Javascript on the source interface (jQuery).

@diracdeltas As I expressed on the mailing list, I do not believe that change (to allow sources to customize the number of words in their codename) has a good usability/security tradeoff. Given what we know about how NSA tries to de-anonymize Tor users, I think we should be encouraging users to disable JS. The only reason I accepted that change is because the codename chooser gracefully degrades and is still functional with JS disabled.

I do not think we should add any functionality that requires JS, and the current existence of JS in the tree should not normalize its further use (without careful consideration).

In particular, host-served Javascript can be compromised, but is also auditable.

@dtauerbach As long as it is being served in a signed browser extension, I agree - but this has serious usability problems (although bundling it in TBB would help a lot).

In the end, I agree with @klpwired above. If an adversary could compromise our server to the degree that they could access the plaintext of documents being uploaded, then they could also serve a JS-based exploit. This would be much more likely to succeed because while uploaded documents might have identifying metadata, a successful exploit on the client's machine would certainly lead to de-anonymization. Therefore I think we should focus on securing our server and encouraging users to minimize their attack surface by disabling Javascript.

@Hainish
Contributor

Hainish commented Oct 22, 2013

I don't think it likely that the TBB will include a browser plugin for SecureDrop for a number of reasons. Firstly, every additional plugin is an additional vector for attack of all TBB users, not just ones that want to leak documents. I don't think they would want to expose their users to that risk. Secondly, it would imply that the TBB is a tool for leaking documents, which is not what they're going for. I think it may be unreasonable to ask the TBB to include such a plugin.
That being said, because the leaker is anonymized by the TBB, malicious code injection would have to be applied in a blanket fashion to be successful. As I stated above, a timed attack could be performed with some effort, if you know exactly when a leaker is going to leak documents, but to decrease the risk of exposition it would have to be in a narrow time frame.

As an alternative to a TBB plugin, I think we can develop an additional piece of infrastructure; let's call it a "SecureDrop Directory Server" (SDDS). This server could periodically check the running SecureDrop instances for their HTML and Javascript. Since the request goes over the Tor network, the SecureDrop server could not differentiate between an SDDS and a real leaking client, thus avoiding the HushMail problem of providing a malicious application to specified IPs. The SDDS then verifies whether the set of HTML and JS returned is a verified instance of SecureDrop. This would streamline detection of malicious SecureDrop instances, and we could create a directory page that de-lists instances that are not verified (or even instances that are too old and have known security vulns). Provided that the SDDS requests can't be fingerprinted (we'd have to send the same headers as the TBB), this would eliminate the timed-attack vector.

In addition, the SDDS could be provided a list of public keys for running instances of SD servers, so the attack above that Dan mentioned (the JS providing a MITMed public key) could also be eliminated by having these SDDS servers.

One criticism I've heard of this model is that it's basically centralized. But it doesn't have to be: anyone can run an SDDS, including Freedom of the Press Foundation and any other organizations that wish to be guardians of the sanctity of SecureDrop servers.
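
A minimal Node-style sketch of the periodic check such a directory server might run, fetching an instance's page through Tor and comparing it to a known-good digest (the socks-proxy-agent routing and the manifest shape are assumptions):

```js
// Sketch of an SDDS check. Manifest contents are placeholders.
const http = require('http');
const crypto = require('crypto');
const { SocksProxyAgent } = require('socks-proxy-agent');

const agent = new SocksProxyAgent('socks5h://127.0.0.1:9050'); // local Tor
const KNOWN_GOOD = { 'example.onion': '<sha256-of-verified-page>' };

function checkInstance(host) {
  http.get({ host, path: '/', agent }, res => {
    const hash = crypto.createHash('sha256');
    res.on('data', chunk => hash.update(chunk));
    res.on('end', () => {
      const ok = hash.digest('hex') === KNOWN_GOOD[host];
      console.log(host, ok ? 'verified' : 'MISMATCH: possible tampering');
    });
  });
}
```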

As a side-note, above I mentioned that the TBB currently does not support window.crypto.getRandomValues. I talked to Mike Perry and he mentioned that before December 2nd, they will be upgrading to FF 24, which does indeed provide the secure RNG API. This means that we can conceivably in the near future provide a client-side application for encrypting documents to the journalist.

@garrettr
Contributor

by asking the Tor browser people to include in the browser basically generic crypto tools that could be used for any host-based crypto system. That would include a fairly obvious API, including the encryption/decryption parts, including potentially some way to audit for at least some kinds of evil code.

In-browser "generic crypto tools" is the goal of the W3C Web Crypto Working Group. This is still in development and it is unclear when it will be ready to be implemented. "Ways to audit evil code" is specifically mentioned as a use case here. The TBB developers have in the past entertained this idea, although it would be nontrivial and who knows what they would say now.

Ultimately the problem is one of establishing a trust anchor if you want this to be automated. If you don't want to involve the user, you would have to either TOFU or do something similar to pinning. Otherwise you can get the user involved, which offloads the burden onto them (with concomitant risks).

Since it (Firefox ESR 17.0.9 at the time of writing) does not provide access to the newer API for random values in the browser, window.crypto.getRandomValues.

We just released the new ESR, which is based on Firefox 24 and has window.crypto.getRandomValues. The Tor devs are working on updating their patches to release a new TBB based on 24 (timeline unknown). We are also working on a broader initiative to integrate as many of the TBB patches into Firefox as possible, so future TBBs can be based on the stable release and we can be equally agile in responding to exploits.
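
For reference, the API in question is a one-liner:

```js
// Cryptographically secure random bytes in the browser; available in
// Firefox 24 / the new ESR, hence the TBB timeline discussed above.
const nonce = new Uint8Array(16);
window.crypto.getRandomValues(nonce); // fills the array from a CSPRNG
```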

@garrettr
Contributor

I talked to Mike Perry and he mentioned that before December 2nd, they will be upgrading to FF 24, which does indeed provide the secure RNG API. This means that we can conceivably in the near future provide a client-side application for encrypting documents to the journalist.

Nice one, @Hainish !

@Hainish
Contributor

Hainish commented Oct 23, 2013

Correction, TBB based on FF 24 by Dec 10th:

11:47:20 mikeperry by dec 2nd, all TBBs should be based on FF24
11:48:22 intrigeri mikeperry: 2nd, really? In my understanding, the closest FF release is Dec. 10.
12:15:22 mikeperry intrigeri: https://wiki.mozilla.org/RapidRelease/Calendar seems to indicate you're right

@dtauerbach
Contributor Author

OK, a quick recap:

Myself, Patrick, @Hainish, and @fpietrosanti seem to favor exploring a host-delivered Javascript approach, trying to maximize the auditability/security of the untrusted code, noting this will only be possible after Dec 10, when TBB migrates to the new Firefox ESR.

@klpwired, @Taipo, @garrettr warn against requiring a user to use Javascript (I agree). Would the 3 of you -- or anyone -- like to weigh in on whether you would consider non-required host-delivered Javascript? If a user is not running it, we could have a message suggesting that additional encryption may be helpful. There are other concerns with this approach too -- I tried to enumerate them above.

In addition to the host-based Javascript question, there has been discussion by @diracdeltas and others about shipping an extension with TBB, or otherwise requiring a signed extension, and having that extension responsible for the Javascript (so that it is not delivered by the host). This is more work, and poses several additional problems: deniability if the source's computer is compromised, key management, etc. But it has the big advantage of not relying on Javascript delivered by the host.

Have I missed anything important?

@klpwired
Contributor

Great recap, @dtauerbach.

I'd still consider non-required host-delivered Javascript harmful. It trains users in the wrong direction. Users should be blocking Javascript (and Flash, ActiveX, Java, Silverlight, whatever) from SecureDrop sites, so that if the host is compromised, the risk of the host successfully delivering malware to the user is minimal. IMO, the best use of Javascript would be: window.alert("You should turn off Javascript");

@garrettr
Contributor

@dtauerbach +1 on the recap.

In an ideal world, I agree that all encryption would be end-to-end from sources to journalists. Currently, there are too many open questions around Javascript cryptography for us to implement it. It is fine for projects like Cryptocat, which advertise their experimental nature and state up front "You should never trust any piece of software with your life, and Cryptocat is no exception". We are asking sources to take enormous risks to share information using our platform, and I think we can best serve them by being as cautious and conservative as possible in our design choices.

@klpwired I completely agree with your last comment, and have opened #100 and #101 to address it.

This is not to say that I think SecureDrop could never encrypt data client-side using Javascript (using a browser extension, until someone solves the problem of securely delivering Javascript in an auditable manner). I would love to see experimental work in this direction. Perhaps it could be part of a 1.0 release sometime in the future!

@dtauerbach
Contributor Author

@garrettr @klpwired That seems like a reasonable decision, and I definitely agree that users are generally safer not browsing with Javascript (or Flash, or Java, etc). Still think it's worth being specific about the concerns. In this case, the main concern seems to be that we don't want to encourage users to turn on Javascript, to the point where we want to actively discourage them. That seems like a good idea to me. I listed other concerns above as well that folks haven't discussed. Are there others we've missed?

The reason specificity is important is twofold. First, for the project itself, I agree that being conservative makes sense, but one should be conservative relative to one's design goals, not just generally afraid of doing any crypto via Javascript or in browsers. For example, suspend your disbelief and suppose the Tor Project made the TBB come with Javascript always-on with no option to turn off. Then I think that might change the decision above, despite the fact that the Javascript libraries used are still experimental and security guarantees of host-based systems are almost non-existent. The decision we've gone with for now for SecureDrop would be analogous to Cryptocat not performing any sort of end-to-end crypto at all (just an irc/jabber server). It's hard to argue that Cryptocat as a service is less secure than if Nadim just ran a jabber server equivalent, and this has been empirically borne out as best I can tell with a cursory look at the bugs in the service that have been identified (e.g. http://tobtu.com/decryptocat.php; yes, they are bad, no they aren't worse than no encryption at all). So in this case, I think the real concern we've keyed in on is that users are less safe running Javascript and we want to actively discourage them, not that the Javascript crypto is too experimental to deploy from a security perspective*, given that the alternative is no e2e crypto at all.

Second, there is a lot of FUD about Javascript crypto. With the meteoric shift of software to the web, it's inevitable that most cryptography will take place in Javascript in browsers sooner than we'd like, if we'd like more than a tiny population to use crypto at all. Specificity allows us to productively move forward and identify showstoppers, to feed back into standards development.

  • There are of course other senses in which it might be too experimental to deploy: for example, if it is a pain to maintain and it breaks so users can't submit things, or takes a really long time to encrypt large files. Some of these usability issues could have downstream security repercussions too, but I don't think we got that far in analyzing the options of libraries and where things could go wrong.

@fpietrosanti

@dtauerbach I totally agree that there is an excessive amount of FUD about Javascript and Javascript crypto, compared to the improved value and the effective context of use in anonymous whistleblowing technologies.

It's likely that 99.99% of the use of a Tor Hidden Service website is done with the default TBB configuration, which has Javascript turned on; if this assumption is true, all the JS/non-JS discussion would be moot.

That's the reason GlobaLeaks started as a pure-Javascript application framework, and the upcoming Chat and Messaging features are going to be fully JS-crypto based (with Cryptocat, OpenPGP.js and Mailvelope):
https://docs.google.com/document/d/1L8yVgarISeIxIvsFgoT3cF1MYzhEa6YyZzOAsAvR-yY/edit?usp=sharing

However, in order to satisfy the JS-related sensibilities, we are going to implement a simplified GLClient that exposes a submission interface with only HTML and interacts with the GLBackend over its submission API http://docs.globaleaks.apiary.io/. That set of security improvements would be the focus of this project proposal:
https://docs.google.com/document/d/15tyTSRKETzcamfgvZ4TOh9mzLV2STnQduZRKnG8fEZQ/edit?usp=sharing

@fpietrosanti

I just opened a ticket, "Log statistics about javascript support of whistleblowers submitting information" (#109), to objectively collect data about the effective use of NoScript on the submission interface on live infrastructures.

@nadimkobeissi

The amount of uneducated FUD regarding JS crypto, in this thread, is terrifying, especially considering the otherwise solid reputation of the people involved.

Guys, the concerns @klpwired has about JS crypto are solvable using a signed browser extension to deliver the code. Also, regarding your other concerns on the matter, please do read my blog post on JS crypto, which I hope will dispel a lot of the FUD in this thread.

@garrettr
Contributor

@diracdeltas SJCL could be a good choice. Again, for performance it might be good to use asm.js (Emscripten-compiled native libraries). This blog post is another good take on "what library should we use to do crypto in the browser?"

Generally, I think it's most important that we first define a generic API that can be utilized by a variety of clients (browser plugins, native desktop or mobile apps, etc.) A design document for such an API, and the accompanying protocol, is in progress.

@redshiftzero
Contributor

redshiftzero commented Feb 5, 2018

Yesterday at FOSDEM I had a chat with @tasn, who has written a nice browser extension that verifies the PGP signature of web pages. This is something that is worth experimenting with in the context of SecureDrop. In SecureDrop releases, we'd ship JavaScript that encrypts submissions client-side to the instance's public key, and this js code would be signed with the SecureDrop release key. The browser extension would verify the sig and only execute the JavaScript if the signature verifies (we'd need the release key baked into the extension). We'd fall back to server-side crypto for sources that have JavaScript turned off entirely. We'd also (eventually) need to get this browser extension bundled into Tor Browser.

This doesn't address the problem that a malicious server can replace the submission key on the server with an attacker controlled one, though we can detect this using OSSEC, and alert on the replacement of the key such that admins can respond. This would be a significant improvement over the current situation, where a very careful attacker (i.e. careful not to trigger any OSSEC alerts) that is able to compromise the application server can read submissions from memory without being detected.
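
For illustration, the verification step such an extension performs might look like the following OpenPGP.js v5 sketch (Signed Pages' actual mechanism may differ):

```js
// Sketch: verify a detached PGP signature over served JS against the
// release key baked into the extension. Names here are hypothetical.
async function verifyPageScript(scriptText, armoredSig, armoredReleaseKey) {
  const verificationKeys = await openpgp.readKey({ armoredKey: armoredReleaseKey });
  const result = await openpgp.verify({
    message: await openpgp.createMessage({ text: scriptText }),
    signature: await openpgp.readSignature({ armoredSignature: armoredSig }),
    verificationKeys,
  });
  await result.signatures[0].verified; // rejects if the signature is invalid
  return true; // only then execute the script
}
```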

@tasn

tasn commented Feb 6, 2018

@redshiftzero covered almost everything, I just have a few comments.

The extension verifies user-controlled websites. This means that users can add website + pubkey combinations as they please. I plan on adding a preloaded list of trusted services and their corresponding keys, and would love to add SecureDrop once you are ready.

You probably know better whether it makes sense, but in my mind I see two alternative ways of using this extension with SecureDrop. The first is you signing your HTML (for the extension), and instances, e.g. NYT, just upload it as is. This is the easiest solution, and will let users verify the code is really from SecureDrop.
An alternative solution would be to have your instances (again, e.g. NYT) sign the HTML themselves (for the extension) with their pubkey already embedded in the HTML (or an external JS file), which would solve the malicious-server key-verification issue you just raised.
Yet another solution, not currently implemented, would be to verify requests other than just the main HTML (everything else is verified by the browser using SRI). For example, the extension could try to verify XHRs too (to paths that match), and thus be able to verify a configuration JSON, for example.

I understand you'd like to use this extension in order to support client-side encryption, which is great and what the extension was made for (I created it for EteSync's web client), but I think you could already benefit from it, given the sensitive nature of the project. For example, attackers with the ability to modify files on the server (but not sniff transport) could at the moment change the form's target to a server controlled by them and steal data this way. This extension will prevent that.

If there's anything I can do to help with integrating the extension, or if you have any suggestions or queries regarding the extensions, please let me know.

@eloquence eloquence changed the title Consider encrypting data client-side with the system's public key Prototype encrypting data client-side with the system's public key Feb 20, 2018
@conorsch conorsch modified the milestones: Some Day, Product Backlog Feb 20, 2018
@eloquence
Member

eloquence commented Feb 20, 2018

Added "prototyping" to title to clarify that's what we're committing to for now.

@psivesely
Contributor

Looking towards the future, ECDH key pairs could be generated in the trusted crypto VM on the Qubes Reading Room Workstation (RR). A large number of ECDH public keys, all signed by the long-term ECDSA identity key of the RR, are sent to the server through a networked VM (a series of them, really).

Clients get served unique* ECDH public keys, verify the signature over them, derive a shared symmetric key for AEAD, and use that to encrypt a document/message. The client then uploads a tuple (iv, ciphertext, tag, pub_client, pub_server, sig), where:

  • (iv, ciphertext, tag) are as in standard AEAD schemes.
  • pub_client is the public ECDH key the client generates when deriving the shared symmetric key.
  • pub_server is the public ECDH key the server served the client that was generated by the RR (for easy lookup of the corresponding private key on the RR).
  • sig is a signature by the ECDSA key derived client-side from the source codename over all the other values. This helps defend against certain attacks like some types of replay attacks, and allows the RR client to definitively link submissions from the same source, while preventing the server from linking submissions sent over different Tor circuits (the long-term source ECDSA public key need only be sent once, encrypted, to the RR).

So this straightforward hybrid-encryption scheme provides forward secrecy and a measure of sender unlinkability, and the crypto is pretty straightforward to implement. Honestly, the harder part of the implementation will be due to the complicated security architecture of SD where instead of client-server we're dealing with CryptoVM-NetworkedVM-server-client.
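
For illustration, a rough WebCrypto sketch of the client-side steps, with the curve choice, encodings, and exact signing payload as assumptions:

```js
// Rough client-side sketch of the hybrid scheme above using WebCrypto.
// clientSigKeyPair is the codename-derived ECDSA key pair (with 'sign' usage).
async function encryptSubmission(serverEcdhPubKey, clientSigKeyPair, plaintext) {
  // Ephemeral client ECDH key pair (pub_client in the tuple).
  const eph = await crypto.subtle.generateKey(
    { name: 'ECDH', namedCurve: 'P-256' }, true, ['deriveKey']);

  // Shared AEAD key from the client ephemeral key + server-provided public key.
  const aeadKey = await crypto.subtle.deriveKey(
    { name: 'ECDH', public: serverEcdhPubKey }, eph.privateKey,
    { name: 'AES-GCM', length: 256 }, false, ['encrypt']);

  const iv = crypto.getRandomValues(new Uint8Array(12));
  // Note: WebCrypto AES-GCM appends the tag to the ciphertext.
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv }, aeadKey, plaintext);

  const pubClient = await crypto.subtle.exportKey('raw', eph.publicKey);

  // sig over all the other values, per the scheme above.
  const toSign = new Uint8Array([
    ...iv, ...new Uint8Array(ciphertext), ...new Uint8Array(pubClient)]);
  const sig = await crypto.subtle.sign(
    { name: 'ECDSA', hash: 'SHA-256' }, clientSigKeyPair.privateKey, toSign);

  return { iv, ciphertext, pubClient, sig };
}
```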

I glossed over some finer details here, including how to achieve forward secrecy for replies and how it might be possible to likewise add a measure of receiver unlinkability (although that seems harder), but I would be happy to flesh this out more and even write a formal spec that I could have smarter cryptographers than I help with/verify, if the SD team is ever serious about implementing this. The above builds on some of the ideas in #3281.

@eloquence
Member

Cross-referencing: "An End-to-End Encryption Scheme for SecureDrop" (May 2018 student course paper). Unfortunately I wasn't able to find a repo for the example extension code referenced in the paper.

@lev-csouffrant
Contributor

Hey, I've been thinking about this ticket, and am definitely part of the "javascript is risky to enable" camp. I can definitely see some promise in using a plugin that can validate that the javascript code is signed, but would still prefer a self-contained browser plugin. @tasn, @redshiftzero, and others brought up a few ways the JS signing could work; sorry for rehashing statements.

  1. Each deployment will have its own key for clients to encrypt with and this needs to be baked into a request somewhere so the client knows what that key is. Users will copy this into the plugin before running the encrypted upload functionality. This is the basis of the paper that eloquence listed, and probably the ideal way to go if the PGP js verification is how it ends up working.

  2. The plugin "pins" keys and acts as a trusted authority. This requires Freedom of Press/someone to load public keys for each new SD deployment into the plugin. I can't think of a way to do that without being a huge hassle, and would require trust of the third party who owns the plugin.

  3. One method that could make 2) successful is if there is some additional infrastructure where a SD deployment could have their code+key signed by a Freedom of Press master key before launching. Thus only one key is needed to be baked into the plugin to validate the js crypto and it supports signing the encrypted files with different keys. Is that something that is reasonable for Freedom of the Press to provide? It does add some overhead on infrastructure to support this, so I could understand not being able to support it. (Whether for technical or legal reasonings)

As stated earlier, a generic browser plugin that does the encryption for you might be the best option. The user doesn't need to enable any javascript, and the largest risk I can see here is a MITM replacing the public key sent to the user with one the attacker controls (via a server compromise). This does not place them in any worse scenario than the current setup, and requires an active attacker. @redshiftzero mentioned earlier that this could potentially be monitored with OSSEC controls.

Is there any reason against just having a standalone “PGP encrypt file” browser plugin that I missed? It should be generic enough that it wouldn’t be SD specific and provides all the functionality without figuring out how to manage code signing across deployments.

@tasn

tasn commented May 10, 2019

What do you mean by a standalone "PGP encrypt file"? If you mean just a generic browser extension that validates generic pages using normal PGP signatures with normal PGP keys, that's what Signed Pages is (the extension mentioned previously in this thread).

@zenmonkeykstop
Contributor

@tasn in this case PGP would be used to encrypt files before uploading them.

@tasn

tasn commented May 10, 2019

@zenmonkeykstop, oops, thanks for the clarification. I can see the confusion now upon re-reading the thread. I thought he was talking about the signature verification, but instead what he was talking about is having a plugin that encrypts the files being uploaded before they even hit the page. Sorry for the noise.

As for the comment: it looks like this solves the uploading of the files problem quite well, but I think there's still value in verifying the integrity of the page to prevent the running of unapproved javascript that could be used for e.g. fingerprinting.

@zenmonkeykstop
Contributor

One other point that just occurred to me about client-side encryption of submissions, is that server-side submissions are gzipped before gpg encryption - I'd imagine to ease the pain of large file transfers over Tor. As HTTP compression isn't going to help with gpg-encrypted files, a client-side solution is either going to have to do something similar or deal with said pain.

(This is a minor detail compared to stuff above, obvs.)
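
For reference, a client-side solution could mirror that pipeline by compressing before encrypting. A sketch, with pako as an assumed zlib implementation:

```js
// Sketch: compress the plaintext first, then encrypt, mirroring the
// server-side gzip+gpg pipeline. Compressing after encryption gains nothing,
// since ciphertext is effectively incompressible.
import pako from 'pako';

async function compressThenEncrypt(file, publicKey) {
  const raw = new Uint8Array(await file.arrayBuffer());
  const gzipped = pako.gzip(raw);
  return openpgp.encrypt({
    message: await openpgp.createMessage({ binary: gzipped }),
    encryptionKeys: publicKey,
    format: 'binary',
  });
}
```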

@eloquence
Member

@lev-csouffrant Any update on your prototyping work? I see the public repo at https://github.com/lev-csouffrant/Uplocker , should we consider that the final state of your prototyping effort, or are you still planning to do further work on it? Thanks :)

@lev-csouffrant
Contributor

Hey @eloquence, yeah, that prototype is in final-ish state, and we will see how much free time I can put into updating the last few important pieces (i.e. testing and packaging). Otherwise, it works for now as a proof of concept: a browser plugin that encrypts files via a PGP key (passed to the plugin via an HTML meta tag). I also handled compressing the files before encrypting them, as @zenmonkeykstop suggested. Compressing encrypted files is not going to help much, so if there is going to be any compression it should probably be done before the encryption phase occurs.

One thing I am worried about after writing this is the memory usage for files. You need one copy stored in memory for the encryption to run on (there's a streaming-file capability, but it didn't look like it was supported in the version of Firefox the Tor Browser uses). The encrypted file will also need a copy in memory. If compression is going to be supported, that is a third copy that will be stored. Additionally, you need to transfer it from a content script to the background script due to limitations on what each portion can run. I used a transferable object to pass between them, which should mean it is not putting a fourth copy into memory...

With 500MB max per file, that means at minimum it will probably need 1-2GB of memory just for the file itself because of all of this. Maybe someone smarter at JS stuff can chip in on whether there's a better way to handle this?
