net/http: ServeMux excess locking #28785

Open
carl-mastrangelo opened this Issue Nov 14, 2018 · 2 comments
carl-mastrangelo commented Nov 14, 2018

In #25383, a lock-contention regression in ServeMux was reported. I believe this contention is still present in the mux, and could be removed. Giving ServeMux an atomically loaded handler map would reduce serving latency at the cost of more expensive mutations.
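To illustrate the idea (this is a simplified sketch, not the code in the CL): reads do a single atomic pointer load with no lock, while writers copy the map under a mutex and publish the new copy atomically. The `cowMux` type and its string "handlers" are hypothetical, chosen to keep the example self-contained.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// cowMux sketches a copy-on-write handler map: the serve path does one
// atomic load and a plain map read; mutations copy the whole map.
type cowMux struct {
	mu sync.Mutex                        // serializes writers only
	m  atomic.Pointer[map[string]string] // pattern -> handler name (illustrative)
}

func newCowMux() *cowMux {
	c := &cowMux{}
	empty := map[string]string{}
	c.m.Store(&empty)
	return c
}

// Handle copies the current map, adds the entry, and publishes the copy.
// This is the "more expensive mutation" side of the trade-off.
func (c *cowMux) Handle(pattern, handler string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	old := *c.m.Load()
	next := make(map[string]string, len(old)+1)
	for k, v := range old {
		next[k] = v
	}
	next[pattern] = handler
	c.m.Store(&next)
}

// Lookup is the hot path: no lock, just an atomic pointer load.
func (c *cowMux) Lookup(pattern string) (string, bool) {
	h, ok := (*c.m.Load())[pattern]
	return h, ok
}

func main() {
	mux := newCowMux()
	mux.Handle("/healthz", "healthHandler")
	h, ok := mux.Lookup("/healthz")
	fmt.Println(h, ok)
}
```

The generic `atomic.Pointer` used here postdates this issue (it arrived in Go 1.19); in 2018 the same shape required `unsafe.Pointer`, which is exactly the sticking point discussed below.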

A sample CL that fixes this regression, https://go-review.googlesource.com/c/go/+/149377, shows promising speedups on the benchmarks:

$ benchstat /tmp/old.txt  /tmp/new.txt
name                   old time/op    new time/op    delta
ServeMux-12              76.3µs ± 3%    64.0µs ± 7%  -16.14%  (p=0.000 n=30+30)
ServeMux_SkipServe-12    28.4µs ± 2%    19.8µs ± 3%  -30.49%  (p=0.000 n=28+28)

name                   old alloc/op   new alloc/op   delta
ServeMux-12              17.3kB ± 0%    17.3kB ± 0%     ~     (all equal)
ServeMux_SkipServe-12     0.00B          0.00B          ~     (all equal)

name                   old allocs/op  new allocs/op  delta
ServeMux-12                 360 ± 0%       360 ± 0%     ~     (all equal)
ServeMux_SkipServe-12      0.00           0.00          ~     (all equal)

However, this change depends on using CompareAndSwap, which for pointer-typed values is only available via unsafe.Pointer. The reviewer of the CL expressed that using unsafe is unacceptable, though I am not really sure why. I previously proposed that atomic.Value should gain these methods in #26728, but that proposal was also declined.
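For context, the pattern in question looks roughly like this (a hypothetical sketch, not the CL itself): a retry loop that copies the route map and publishes it with atomic.CompareAndSwapPointer, which forces the map pointer through unsafe.Pointer.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"unsafe"
)

// routes holds a *map[string]string, published lock-free via CAS.
var routes unsafe.Pointer

func init() {
	m := map[string]string{}
	atomic.StorePointer(&routes, unsafe.Pointer(&m))
}

// addRoute copies the current map, adds the entry, and retries the
// CAS if a concurrent writer published a newer map in the meantime.
func addRoute(pattern, handler string) {
	for {
		oldp := atomic.LoadPointer(&routes)
		old := *(*map[string]string)(oldp)
		next := make(map[string]string, len(old)+1)
		for k, v := range old {
			next[k] = v
		}
		next[pattern] = handler
		if atomic.CompareAndSwapPointer(&routes, oldp, unsafe.Pointer(&next)) {
			return
		}
		// Lost the race; rebuild from the newer map.
	}
}

// lookup is the lock-free read side.
func lookup(pattern string) (string, bool) {
	m := *(*map[string]string)(atomic.LoadPointer(&routes))
	h, ok := m[pattern]
	return h, ok
}

func main() {
	addRoute("/x", "xHandler")
	h, ok := lookup("/x")
	fmt.Println(h, ok)
}
```

The unsafe.Pointer conversions are the objectionable part; the logic itself is an ordinary lock-free publish loop.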

Options

Thus, there are a few ways forward that I can think of, but a decision is needed from the Go team:

  1. Accept the use of unsafe in ServeMux, and gain the extra efficiency.
  2. Accept that there is a valid use case for atomic.Value.CompareAndSwap(), prioritize adding it as a public (or private) API, and use it instead.
  3. Come up with some other solution, besides the two above, to improve the serving speed of ServeMux.
  4. Decide that the performance of ServeMux does not matter or is not worth optimizing, close the CL, and pay the extra ~10µs per request.

I feel rather helpless with regard to options 1 and 2. I have suggested both, but have received differing reasons from different Go team members for why they aren't acceptable. I'd rather the Go team just pick one of the four options above, because the requirements for changing Go are opaque from my point of view.

cc: @bradfitz @rsc

bradfitz commented Nov 14, 2018

I was in the meeting with @rsc, @gri, @ianlancetaylor, et al when we closed #26728 with text:

It would be fine for someone who has a compelling use for these to create a new issue proposing to add Swap and CompareAndSwap (or to reopen this one, if you have permission). For now, given the lack of a real-world use case for them, we will err on the side of leaving it out.

You're not going to find disagreement on the team there.

You seem to have a real-world use case for it, so do what I said on the CL: implement it both ways, with unsafe (which we won't accept, and we also agree on that) and with atomic.Value CAS, and show that atomic.Value with CAS gets the same performance without using unsafe.

But it's not a priority for us to investigate this ourselves so if you want to pursue this, please do.
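For the record, atomic.Value did eventually gain CompareAndSwap, in Go 1.17, well after this thread. A hypothetical sketch of the same retry loop using it, with no unsafe: since the CAS comparison uses ==, and maps are not comparable, the value stored must be a pointer to the map rather than the map itself.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// routes holds a *map[string]string; pointers are comparable, so
// atomic.Value's CompareAndSwap (added in Go 1.17) can be used safely.
var routes atomic.Value

func init() {
	m := map[string]string{}
	routes.Store(&m)
}

// addRoute copies the current map and publishes the copy via CAS,
// retrying if another writer published first.
func addRoute(pattern, handler string) {
	for {
		oldp := routes.Load().(*map[string]string)
		next := make(map[string]string, len(*oldp)+1)
		for k, v := range *oldp {
			next[k] = v
		}
		next[pattern] = handler
		if routes.CompareAndSwap(oldp, &next) {
			return
		}
	}
}

// lookup is one atomic load plus a plain map read.
func lookup(pattern string) (string, bool) {
	m := routes.Load().(*map[string]string)
	h, ok := (*m)[pattern]
	return h, ok
}

func main() {
	addRoute("/y", "yHandler")
	h, ok := lookup("/y")
	fmt.Println(h, ok)
}
```

This is structurally identical to the unsafe.Pointer version, which is the comparison bradfitz is asking for.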

rsc commented Nov 14, 2018

I'll just second what @bradfitz said. I'm in complete agreement: I'm more than happy to see atomic.Value.CompareAndSwap (with the type restrictions that currently exist) given a compelling use case. Maybe this is it.
