
what are the security implication in using it in modern browsers? #24

Closed
benoitc opened this issue Jul 24, 2015 · 19 comments
benoitc commented Jul 24, 2015

Hi,

I wonder how secure it is to encrypt/decrypt in the browser. Does it use the Web Cryptography API in modern browsers?

abenmrad (Contributor) commented:

Hi,

Well, the security implications are the standard JavaScript security implications: the threat of XSS attacks stealing keys/plaintext instead of cookies, or the fact that you cannot make sure a given variable is actually deleted and cleared from memory at a given time (pointers in C vs. garbage collection in JS).

As for randomness, this JS-compiled version of libsodium refuses to run in a browser that doesn't expose the window.crypto.getRandomValues() method, as you can see here.
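That guard can be sketched like this. The function names and environment shapes below are illustrative assumptions, not the library's actual code:

```javascript
// Hypothetical sketch of the randomness guard described above: refuse to
// run unless the environment exposes a CSPRNG.
function randomnessAvailable(globalObj) {
  // window.crypto in most browsers; window.msCrypto on IE11.
  const c = globalObj.crypto || globalObj.msCrypto;
  return !!(c && typeof c.getRandomValues === "function");
}

function requireRandomness(globalObj) {
  // A library following this policy throws at load time otherwise,
  // instead of silently falling back to Math.random().
  if (!randomnessAvailable(globalObj)) {
    throw new Error("No secure random number generator available");
  }
}
```

Failing hard here is the important design choice: a silent fallback to a non-cryptographic PRNG would be far worse than not running at all.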

As for algorithms correctness, we are progressively adding some original test vectors from the C library, rewritten to JavaScript. (See #22).

I might be missing something, but this is what comes to my mind. Also, like with most open source libraries, we haven't been audited. So like all licenses specify, use it at your own risk. I hope I've helped you out. :)

benoitc (Author) commented Jul 30, 2015

@BatikhSouri yes it helped, sorry for the late response. Thanks a lot :)

Sounds OK for the usage I have in mind, I guess. I wish there were a better way to encrypt in the browser...

jedisct1 (Owner) commented Aug 5, 2015

Hi Benoit,

And sorry for the delayed response.

As stated by @BatikhSouri, it uses the crypto API in modern browsers to get random data (window.msCrypto on IE11, window.crypto everywhere else).

The main concern with JavaScript and crypto, not specifically with libsodium, is timing attacks.

The original libsodium code takes care of avoiding side-channel attacks.
The asm.js code is almost a transliteration of the LLVM IR code, with one major change: there are no 64-bit integers in JavaScript, so any operation involving i64 values (which Sodium uses quite a lot) is emulated by Emscripten.

Fortunately, it turns out that Emscripten's functions to emulate operations on 64-bit values used by Sodium do not use conditional jumps or lookups at all. With one exception: bit shifts. But the condition depends on the number of bits to shift, not on the value, and sodium doesn't have any code shifting bits by a non-constant number.

There are no obvious surprises in the resulting code, including in critical functions such as sodium_memcmp(). The wrapper also tries to avoid type conversions internally.
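For illustration, the branch-free pattern that sodium_memcmp() relies on looks roughly like this in plain JavaScript. This is a sketch of the C technique, not the library's actual code, and, as noted above, a JIT gives no hard timing guarantees:

```javascript
// Constant-time-style comparison: accumulate XOR differences instead of
// returning early, so the loop's running time does not depend on where
// the inputs first differ.
function constantTimeEqual(a, b) {
  if (a.length !== b.length) return false;
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a[i] ^ b[i]; // no early exit on mismatch
  }
  return diff === 0;
}
```

Compare this with a naive `===`-per-byte loop that returns on the first mismatch, which leaks the position of the difference through timing.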

Now, the major issue with a JIT compiler is that it's hardly deterministic. It can go way beyond what LTO would have done. It would be totally correct for a JavaScript VM to turn h(k1, m) and h(k2, m) into two different code paths.

So, we cannot guarantee that there are no side-channels. And even if we could prove that there are none, for any given program and any data, it wouldn't hold true for very long.

This is not specific to libsodium.js; the same issue arises with any crypto done in the browser. No matter what crypto API you are using, even WebCrypto, data has to go in and out. For example, String<->Uint8Array conversions can introduce side channels.

It doesn't mean that these are trivially exploitable, though.

That said, if you are using a crypto library such as this one:

  • Never concatenate/minify the library and your code together. Load the crypto library separately, to keep the minifier from performing unwanted optimizations.
  • Avoid type conversions. With libsodium.js, try to use Uint8Array everywhere when passing data in and out.
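Following the second point, a minimal sketch of keeping the conversion at the boundary. TextEncoder/TextDecoder are standard in browsers and recent Node versions; the crypto call itself is elided here:

```javascript
// Do the string <-> bytes conversion once, at the edge, and keep
// everything else as Uint8Array.
const encoder = new TextEncoder();
const decoder = new TextDecoder();

const message = encoder.encode("attack at dawn"); // Uint8Array going in
// ...pass `message` (and keys/nonces as Uint8Array) to the crypto API,
// and get Uint8Array values back...
const roundTrip = decoder.decode(message);        // string coming out, once
```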

jedisct1 closed this as completed Aug 5, 2015
benoitc (Author) commented Sep 8, 2015

Sorry, somehow the notification was filtered in my mail.

@jedisct1 thanks for the answer and the tips, it helps a lot :) I will try to do some design and see how it can work. I will let you know if I get somewhere with libsodium.js.

ghost commented Jan 29, 2018

@jedisct1 Is it true that the Node.js implementation does not have the same weaknesses? For what purpose is it good to use this lib in the browser? Isn't SSL enough?

barkermn01 commented Mar 19, 2019

@jedisct1 Is it true that the Node.js implementation does not have the same weaknesses? For what purpose is it good to use this lib in the browser? Isn't SSL enough?

No, because a script could be monitoring the XMLHttpRequest or Fetch APIs, like below:

(function () {
  let x = XMLHttpRequest;

  function NativeCode() {
    let y = new x();
    y.addEventListener("load", function (e) {
      console.log(e.target.responseText); // snoop on every response
    });
    return y;
  }

  window.XMLHttpRequest = function XMLHttpRequest() {
    return NativeCode();
  };
})();

Any other script that runs after that one has loaded and uses new XMLHttpRequest() will go via our hooked method (this is called function overwriting). But if we encrypt the responseText and then decrypt it in the application, we at least stop the snooper from getting the plain text, which it otherwise would despite HTTPS.

ghost commented Mar 19, 2019

@barkermn01 Okay, but I don't think this is the right solution against XSS...

barkermn01 commented Mar 19, 2019

That method does not have to be deployed via XSS; it could be a browser plugin, or a dodgy library used on the website.

So if you're sending secure data to a browser, you can't trust that the browser's request layer is secure. At least with this approach, an attacker has to be very clever with an active memory watcher and do a lot of work per message, and even then they are only going to break security for their own access. Doing it as a deployable script would require fully elevated privileges on the remote machine, and then a lot of work to find the keys and messages and decrypt them, because if you use closures correctly the memory locations change on every messaging run.

ghost commented Mar 20, 2019

@barkermn01 Probably in some special cases this is justified, but normally I think it is paranoid.

barkermn01 commented:

Secure data is secure data. With GDPR in the EU, there is no difference: if someone's personal data gets out, the company responsible is going to get fined.

ghost commented Mar 21, 2019

@barkermn01
If I assume that the code of the SPA loads before the code of the content (which contains the malicious script), then I can protect the XHR this way:

Object.defineProperties(window, {
	XMLHttpRequest: {
		configurable: false,
		writable: false
	}
});

(function xss(XMLHttpRequest){
	window.XMLHttpRequest = function NastyXMLHttpRequest(){
		var xhr = new XMLHttpRequest();
		console.log("XSS succeeded.");
		return xhr;
	};
})(XMLHttpRequest);

new XMLHttpRequest();

If you leave any global variable unprotected, for example the require of browserify, then there is a good chance your code can be injected. It just takes a little bit more time to attack it, and the encryption won't help. I am not sure what browser plugins, userscripts, etc. can do, but that is the responsibility of the user and not the developer. As far as I remember, the Content Security Policy and similar headers will protect you from cross-site communication, so even if they can steal data with XSS, they won't be able to send it to any server. As for dependencies, I try to use only a few popular ones and upgrade them frequently. I still don't think it makes sense to use another relatively expensive layer of encryption in most cases, but of course you can convince me.

barkermn01 commented Mar 26, 2019

but that is the responsibility of the user and not the developer.

Wrong. As a developer operating under GDPR as the Data Controller, I have to take all available precautions. If I know of an exploit, don't try to protect against it, and someone uses it to cause a data breach, I'm liable.

The GDPR requires personal data to be processed in a manner that ensures its security. This includes protection against unauthorized or unlawful processing and against accidental loss, destruction or damage. It requires that appropriate technical or organizational measures are used.

So that means preventing browser plugins from sniffing. With a browser plugin, you can use "document_end" (the document has downloaded, but the browser has not started parsing head-included resources (script/link) yet) and simply override Object.defineProperties to stop it from working. So yes, your approach stops XSS, but it does not stop a rogue script the developer included before your script is parsed, or browser plugins.

I'm not saying it's the be-all and end-all; I'm saying it's one step toward preventing it. Even if they do get the traffic, without the decryption key it's useless. Make the decryption keys temporary, and the only way to breach it becomes attacking that exact session with access to the browser's raw memory.

So how I do it, (pseudo code)

App has a PK for the server

function send(payload, cb) {
  (() => { // use a closure to keep vars from escaping this execution
    App encrypts the payload using the server's PK
    App generates a new PK & SK pair (in a let or var)
    App sends the encrypted request along with its generated PK
    Server decrypts the request using the server's SK
    Server handles the request and produces a response
    Server encrypts the response using the PK it was sent
    Server sends the encrypted response
    App decrypts the response using its SK, then de-refs PK and SK so the
      garbage collector can remove them from memory, ideally before the
      closure finishes executing
    cb(decrypted);
  })();
}

With this, you just make sure the callback is inside a closure; that way, once all the processing that needs it has finished, the garbage collector will clean it up as fast as possible, with no extra work from the developer.

This is not perfect but it makes it as hard as possible for anything reading the data at the browser side to break it.

As it is, I have built an analytics system (paid, and we tell customers we're injecting into their site) that people use on their sites, and it records XMLHttpRequest and Fetch by doing exactly that injection. To make sure no other JS on the page interferes with ours and that ours is not treated as malicious, we tell them to put our script tag at the very top of their <body>, and it uses document.write, so the browser has to parse and run it all before any other script on the page executes, allowing our injection not to be blocked.

ghost commented Mar 26, 2019

@barkermn01
With your logic, Microsoft is responsible for all damage caused by viruses and malfunctioning applications, no matter that the users downloaded and installed them and other developers wrote them. That is nonsense...

If the plugin can run code before your application is loaded, then it can easily inject code into your closure. The only way this gives protection is if they do not target this library, just the XHR. I am not sure what plugins are capable of, but they seem to be able to override even HTTP headers, not just content: https://chrome.google.com/webstore/detail/resource-override/pkoacgokdfckfpndoffpifphamojphii?hl=en

barkermn01 commented Mar 26, 2019

That would only apply if software were able to access private information from Windows core applications, not whatever users store on their hardware. And even then, Windows ships telling people to use an anti-virus, something it has done since Windows XP SP4.

If you don't have a virus scanner installed, Windows tells you every time you launch it, and Microsoft has also shipped a virus scanner with Windows since Windows 8 (Windows Defender).

On this note, Microsoft has been done for GDPR violations already Check here

If I built an application/virus (since they are one and the same thing) that broke into Outlook to steal the email addresses saved inside it, then yes, if Microsoft did not take adequate steps to stop me, e.g. they left that data set as a plain-text file, they are at fault.

So my logic is not flawed; that is what the law states, whether you like it or not.
I quoted it as set out by the government body responsible for regulating that law within the UK: https://ico.org.uk/for-organisations/business/guide-to-the-general-data-protection-regulation-gdpr-faqs/

At the end of the day, if I'm building the application, I need to take all reasonable steps to stop someone MITMing the data at any stage.

I would like to see your evidence that a closure (in the true meaning, not a saved function reference) can be injected into, because that would mean injecting into a script after download but before compile time.

So please tell me how you would inject into this code after it has been downloaded and parsed by the browser, because if that's possible, I need to completely rework our entire JS system.

(() => {
   let seconds = 1;
   console.log("hello world");
   let test = () => { seconds++; console.log(`Counting seconds at ${seconds}`) }
   setInterval(test, 1000);
})();

I have even made this one so it has continuously running code (something I don't normally do), meaning the garbage collector can never remove it from memory. What I can't do is prevent a script from reading the page once the data is displayed, but that is out of my control, so it's not a reasonable requirement.

ghost commented Mar 26, 2019

@barkermn01

The end of the day if I'm building the application I need to take all reasonable steps to stop someone MITM the data at any stage.

Sure. We are arguing about whether this is a reasonable step. I am pretty sure that 99% of developers would say that it isn't.

If I built an application/virus (since they are one and the same thing) that broke into Outlook to steal the email addresses saved inside it, then yes, if Microsoft did not take adequate steps to stop me, e.g. they left that data set as a plain-text file, they are at fault.

Wow. You confuse things just as badly as liberal politicians. Leaving a database unencrypted, and running an application and entering credentials in a virus-infected environment, are very different things. Outlook won't defend you from the latter; an antivirus might, but again that is not a Microsoft product (at least, I use NOD32). If you could win a lawsuit with that argument, Microsoft would be bankrupt in hours.

So please tell me how you would inject into this code after the download and parsed by the browser.

As far as I understand, it is possible with current Chrome extensions to override content before it loads, so the source code of your script can be captured and injected. If that is not true, you can still remove the script element from the DOM before it runs, download it with XHR, inject into it, and run it. At one point the latter was possible: https://stackoverflow.com/questions/11638509/chrome-extension-remove-script-tags . I am not sure what you can do currently, or whether Chrome tries to protect you from plugins accessing HTTPS pages. The former appears to be possible too: https://stackoverflow.com/questions/35580017/override-javascript-file-in-chrome/35580407 . Research the topic in detail if you have time for this; I don't. Another option is to ask here: https://security.stackexchange.com/ . I am pretty sure you cannot do any decent protection on a webpage against browser plugins.

barkermn01 commented Mar 26, 2019

Yeah, you can remove it, I don't deny that, but removing it and changing it are very different things: by removing it, you still have not broken into my code's execution.

And again i quote:

☐ We have assessed the nature and scope of our processing activities and have implemented encryption solution(s) to protect the personal data we store and/or transmit.
from https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/security/encryption/

This means that if your product is storing personal information, it should be encrypted, and that has been standard for quite some time: you don't leave plain-text passwords in files or databases. The only change is that names, email addresses, and IP addresses now all require the same level of protection.

I am pretty sure you cannot do any decent protection on a webpage against browser plugins.

So given the facts I have shown you, there is no way to change my closured code, as you have not provided a method, nor can I find one. I agree you can change the resulting HTML of the page, but not the content of the scripts included by that page.

So you can't change closure code; you would have to remove it and include your own, and that would not work with my system, as each script has data loaded into it that is generated by server-side code, which signs the information with a loading key (using the referrer header / IP address / microtime of the script load from our servers).

And if you really think it does not make a difference, build a browser plugin that can break the login on this site: http://www.a2z.events . No, I did not build the site; I only built the login system that works across multiple sites, and I didn't build that implementation either, so I don't know how secure it is, but you should not be able to view the unencrypted response from the login API server.

Macil commented Mar 26, 2019

So given the facts I have shown you, there is no way to change my closured code, as you have not provided a method, nor can I find one,

An extension could redefine console.log as its own function. An extension can replace the script as it is downloaded and rewrite it arbitrarily. An extension could read the DOM right off the page, including the password the user types and the content that shows up after they log in.

A sufficiently privileged browser extension should be considered to have control of the browser. There's no way to hide content from the browser while still showing it to the user.

That's not to say libsodium.js is useless. There are things you could use it for besides cloning TLS. TLS secures the communication between the user's browser and the server. If you want to secure communication between one user's browser and another user's browser, as in end-to-end encryption, then you can use libsodium.js in the browser to encrypt a message to the other user's public key. (You'd still have to trust the server to serve unbackdoored JavaScript, including libsodium.js, but it's theoretically possible for users to verify the served JavaScript, and it means someone who hacks the server couldn't get access to any messages without making visible modifications to the served JavaScript to backdoor it.)

ghost commented Mar 26, 2019

@Macil

A sufficiently privileged browser extension should be considered to have control of the browser. There's no way to hide content from the browser while still showing it to the user.

I am telling him the same thing, but he cannot understand. I hope he is not a security expert...

barkermn01 commented Apr 4, 2019

I can understand it, and there is nothing I can do about that, but quite frankly that is the user's fault for sharing the information with it; all that's exposed there is the username and password.

What you're missing is that I don't want a browser plugin with the privilege to view raw communications, such as any of the ad-blocking addons, to be able to read our customers' clients' data. (https://www.theinquirer.net/inquirer/news/3030513/google-expels-five-chrome-ad-blocking-extensions-for-data-mining-up-to-20m-users)

So if a username and password are breached, the user can get notified of an unknown login, terminate it, and then change their password.

If they are using a dodgy adblocker and it's reading the content of AJAX responses, they are only getting encrypted information from mine, not the list of all the visitor information for their website that we hold.

If someone tries to change the JavaScript on download (a MITM), it will fail, because there is a server-to-session JavaScript key used in the communication with our servers, and if that differs from what we have recorded for that session, the JS request will be killed.

So while I can't prevent our users' information being stolen out of the browser, I can prevent our users' customers' data being stolen out of the browser. I never said it was a complete solution; it is one that can prevent specific types of attack, specifically MITM of AJAX responses containing sensitive data. (And by sensitive data I mean any information covered as personal information under GDPR: names, email addresses, IP addresses, postal addresses and such.)

You might think it's beyond the bounds of reasonable steps, but I don't: it's 20 minutes of work to provide another layer of protection, and I will err on the side of caution, because a fine for that level of data leak could bankrupt a startup.
