
crypto/rand: mutex around /dev/urandom is unnecessarily broad #9221

Open
eapache opened this issue Dec 8, 2014 · 7 comments

@eapache commented Dec 8, 2014

(Possibly related to #7000)

We were trying to read concurrently (on Linux, Go 1.3) from the Reader provided by crypto/rand, but were seeing a lot of unexpected contention. Looking at the implementation, it appears to take a mutex around the entire call just to protect the initialization step: https://github.com/golang/go/blob/master/src/crypto/rand/rand_unix.go#L51

This seems unnecessarily broad, since the underlying Read call itself should be safe to do concurrently. It would seem better to replace the mutex with a sync.Once which would allow concurrent reads, and also provide a fast path with an atomic op once the initialization is complete.

Thoughts?

@pushrax (Contributor) commented Dec 8, 2014

It needs to be able to handle the case where the reader is nil because the initialization failed. Besides that, this makes sense to me.

@bradfitz (Contributor) commented Dec 10, 2014

If you use Go 1.4 with a modern Linux kernel, we use the getrandom system call instead of open+read of /dev/urandom.

(Not that this can't be improved, but this problem will go away on its own eventually, at least on Linux...)

@eapache (Author) commented Dec 10, 2014

Good to know, thank you, that will probably be enough to solve our immediate problem.

@bradfitz (Contributor) commented Dec 16, 2014

I looked into this more as I was fixing #9205. The problem is that we use a bufio.Reader around the *os.File reading from /dev/urandom, and it's for the bufio.Reader that the mutex is important.

If we were to fix the mutex contention for random, we'd probably want a per-processor buffer of /dev/urandom data that we try to read from first, falling back to refilling the per-P buffer from a shared *os.File (without locking) with occasional large reads. Then we wouldn't need bufio.Reader at all. We could use sync.Pool for the per-P buffer, but using that is contentious.

@eapache (Author) commented Dec 16, 2014

we use a bufio.Reader around the *os.File reading from /dev/urandom

Except on Plan9 for some reason?

I'm not sure why the standard library should be buffering here at all, honestly. I understand that it will aid performance in the non-concurrent case, but my first instinct would be to let the caller do the buffering (if they so desire) and by default to just do an unlocked unbuffered Read on the underlying fd.

We could use sync.Pool for the per-P buffer, but using that is contentious.

Most threading libraries (pthreads etc.) provide a thread-local storage API that doesn't contend; since Go has its own scheduler I imagine (pure speculation here) that you pin threads to processors at the OS level to avoid scheduler fights, so you could cheat and use thread-local storage to get your per-P buffers? If I'm wildly off-base, feel free to ignore this :)

@bradfitz (Contributor) commented Dec 16, 2014

I meant that it's contentious with humans, not contentious with locks.

@eapache (Author) commented Dec 16, 2014

Oh, whoops, lol :)

@rsc rsc added this to the Unplanned milestone Apr 10, 2015
@rsc rsc removed release-none labels Apr 10, 2015