io/ioutil: data race on blackHole #3970

Closed
dvyukov opened this Issue Aug 17, 2012 · 11 comments

6 participants

@dvyukov
Member
dvyukov commented Aug 17, 2012
When running:
$ go test -race net/http
ThreadSanitizer says:

WARNING: DATA RACE
Write by goroutine 205:
  bufio.(*Reader).Read()
      src/pkg/bufio/bufio.go:152 +0x4b6
  io.(*LimitedReader).Read()
      src/pkg/io/io.go:406 +0x181
  net/http.(*body).Read()
      src/pkg/net/http/transfer.go:538 +0xd4
  io/ioutil.devNull.ReadFrom()
      src/pkg/io/ioutil/ioutil.go:138 +0x6f
  io/ioutil.devNull.ReadFrom·i()
      src/pkg/io/ioutil/ioutil.go:0 +0x52
  io.Copy()
      src/pkg/io/io.go:352 +0xc6
  net/http.(*body).Close()
      src/pkg/net/http/transfer.go:628 +0x170
  net/http.(*bodyEOFSignal).Close()
      src/pkg/net/http/transport.go:817 +0xb9
  net/http_test.func·082()
      src/pkg/net/http/transport_test.go:428 +0xb7

Previous write by goroutine 111:
  bufio.(*Reader).Read()
      src/pkg/bufio/bufio.go:152 +0x4b6
  io.(*LimitedReader).Read()
      src/pkg/io/io.go:406 +0x181
  net/http.(*body).Read()
      src/pkg/net/http/transfer.go:538 +0xd4
  io/ioutil.devNull.ReadFrom()
      src/pkg/io/ioutil/ioutil.go:138 +0x6f
  io/ioutil.devNull.ReadFrom·i()
      src/pkg/io/ioutil/ioutil.go:0 +0x52
  io.Copy()
      src/pkg/io/io.go:352 +0xc6
  net/http.(*body).Close()
      src/pkg/net/http/transfer.go:628 +0x170
  net/http.(*bodyEOFSignal).Close()
      src/pkg/net/http/transport.go:817 +0xb9
  net/http_test.func·082()
      src/pkg/net/http/transport_test.go:428 +0xb7

Goroutine 205 (running) created at:
  net/http_test.TestStressSurpriseServerCloses()
      src/pkg/net/http/transport_test.go:432 +0x369
  testing.tRunner()
      src/pkg/testing/testing.go:297 +0xc9

Goroutine 111 (running) created at:
  net/http_test.TestStressSurpriseServerCloses()
      src/pkg/net/http/transport_test.go:432 +0x369
  testing.tRunner()
      src/pkg/testing/testing.go:297 +0xc9

When devNull is used, all goroutines write to the single blackHole slice. The race aside,
this has performance implications: several parallel goroutines writing to the same memory
contend for the same cache lines.
@robpike
Contributor
robpike commented Aug 19, 2012

Comment 1:

Labels changed: added priority-later, removed priority-triage.

Status changed to Accepted.

@bradfitz
Member

Comment 2:

How can you be sure what happens inside a blackhole?
In more seriousness, do we care?  The whole point of ioutil.Discard and its devNull type
is to just throw away data.  I don't care if multiple goroutines are throwing away data
to the same place.
What are you proposing?  Changing the implementation or quieting this warning somehow?
@dvyukov
Member
dvyukov commented Aug 20, 2012

Comment 3:

I agree that it's not particularly harmful, but it must be quieted somehow so it does not
fail tests. I still do not have a suppression mechanism for Go, and I do not want to
introduce one. So the only option for now is changing the code. I do not know how to do
that, though (I understand that memory allocation is not a good option performance-wise).
Another point: if several parallel goroutines write to the same memory, it is
slooooooow.
@bradfitz
Member

Comment 4:

How many oooooohs slow is allocating new memory compared to writing to existing memory
from different threads?
I'm guessing that allocating new memory is slower.
@dvyukov
Member
dvyukov commented Aug 20, 2012

Comment 5:

You're right. Maybe we can have both.
@robpike
Contributor
robpike commented Aug 20, 2012

Comment 6:

There is no real data race here, of course. Let's be clear about that.
I think the code is fine and can't see a reasonable alternative. ioutil.Discard is kind
of a special case. I don't believe it's worth, say, having a per-goroutine slice to
avoid memory contention.
@dvyukov
Member
dvyukov commented Aug 20, 2012

Comment 7:

Well, in C1x it would be Undefined Behaviour: your box catches fire and
launches nuclear missiles.
The Go Memory Model is somewhat evasive on this.
I also don't see a reasonable alternative for now. Can we leave this issue
in some postponed state?
It must somehow be solved for ThreadSanitizer: one way or another, it must
not produce reports here for user tests. To date I have no special means to
"suppress" particular reports, and it would be good to keep it that way.
One possible future solution: if/when runtime Processors are
committed and we have support for per-Processor caching (there are exactly
GOMAXPROCS Processors), the blackhole would be a good candidate for such
caching.
@rsc
Contributor
rsc commented Aug 31, 2012

Comment 8:

For ThreadSanitizer, perhaps you can make a different discard that allocates memory on
every I/O operation, or one that adds an unnecessary lock.
@bradfitz
Member
bradfitz commented Sep 4, 2012

Comment 9:

Dmitry, can we detect whether we're being run under TSAN at runtime?
If so, we can arrange for different (slower, more correct) behavior at init() time, just
to quiet these warnings.
@dvyukov
Member
dvyukov commented Sep 6, 2012

Comment 10:

Currently it's impossible to detect, because that would require changes to public APIs.
I've done what Russ said. If you wish, we may close this issue now.
@ianlancetaylor
Contributor

Comment 11:

Status changed to Fixed.

@gopherbot gopherbot locked and limited conversation to collaborators Jun 24, 2016
This issue was closed.