proposal: crypto/rand: guard against mutation of Reader variable #24160
Comments

gopherbot added this to the Proposal milestone Feb 27, 2018
gopherbot added the Proposal label Feb 27, 2018
bradfitz (Member) commented Feb 27, 2018
If you care about security, you're already auditing all your vendored/dependency code, right?
Overriding rand.Reader is useful in tests, for example.
akalin commented Feb 27, 2018
Yes, if you care about security, you're already auditing all vendored/dependency code. However, there's a difference in required effort between auditing for packages that e.g. make calls to unsafe, try to make system calls, or load DLLs, and auditing for packages that try to sneak in a single write to a global variable.
I don't think the benefits of overriding crypto/rand.Reader globally for tests outweigh the downsides of having one package's crypto be breakable by another package. It is easy enough to plumb an io.Reader through the code you want to test.
songgao commented Feb 27, 2018
Just wanted to add that one of the many reasons Go is fantastic is that it makes good choices and provides sane defaults for many things, so people avoid mistakes even when they lack the awareness to, for example, audit their vendored code.
as (Contributor) commented Feb 28, 2018
This is similar to claiming that a class in another language has secure member variables because they're private. Most operating systems don't even support this kind of security within the same user context, let alone at process or package scope. If you're running untrusted code there are bigger things to worry about than package variables.
akalin commented Feb 28, 2018
Well, it depends what you mean by 'secure', and what other languages you're talking about. I agree that, say, private variables in C++ are easy to work around, as are those in dynamic languages like Ruby/Python/JavaScript. However, in Go, unexported variables are inaccessible outside package scope to code that doesn't use unsafe, cgo, reflect, or some other similar magic, assuming there isn't some bug in the Go runtime/compiler. In that sense one could say that unexported variables are 'safer' than exported ones.
Note that my threat model isn't arbitrary untrusted code, so the comparison to OS-level security isn't really applicable -- it's third-party code that has to be audited before inclusion in a project. It's reasonably easy to audit the dependencies of a package, i.e. to check that it doesn't use unsafe, cgo, reflect, etc., and to apply greater scrutiny if it does. It's harder to audit its writes to external package variables, and this bug demonstrates that the potential consequences of something slipping through the cracks aren't just weird behavior (like setting io.EOF to nil), but a compromise of an RNG that packages depend on for security.
Someone else mentioned that this has been brought up before for e.g. http.DefaultClient. The difference in this case is that one could create their own instance of http.Client and be diligent about using that. However, that's impossible in the case of crypto/rand.
On the other hand, it's not infeasible to write a tool that automatically looks for code that modifies external package variables (reflection, unsafe, etc. aside), although it's unfortunate that such a tool has to be written.
rsc (Contributor) commented Mar 5, 2018
If you're truly worried about this, check it once in a while.
Maybe for Go 2 we could change the declared type of Reader so that there was nothing you could reassign to it.
akalin commented Feb 27, 2018

crypto/rand exposes an io.Reader variable Reader as "a global, shared instance of a cryptographically strong pseudo-random generator." Furthermore, crypto/rand.Read implicitly uses Reader for its crypto source. This seems problematic to me because any package can just overwrite crypto/rand.Reader to point to some other object, affecting the security of any packages that rely on crypto/rand.Read or crypto/rand.Reader for security, e.g. x/crypto/nacl.

One can say that a language can never ultimately defend against code running in your same process, but I think it should be possible to write something that depends on crypto/rand for security that wouldn't require auditing other packages for a single malicious variable write.[1]

The main API flaw here, IMO, is that Reader is an io.Reader variable, whereas it should be a function that returns an io.Reader. A new API would look something like:
Alas, with the Go 1 compatibility guarantee Reader would have to remain, and Read would still have to use Reader. But the above could be added as new functions, say MakeReader() and SafeRead(), and the standard library (and other important external packages like x/crypto/nacl) could be changed to use those safe functions.

[1] Without this flaw, a malicious package would have to use the unsafe package to poke around in the internals of crypto/rand, or call out to the OS to e.g. try to redirect access to the random device, which seems easier to audit for than a write to crypto/rand.Reader. Of course, I'm already assuming that a project worried about this is vendoring all of its dependencies.