Possible data race #1657
@monkey92t The same build you've mentioned at golang/go#44672 contains a more baffling, unexplained race.
Looking at what pool.go:517 is, I would suspect the memory is corrupted somehow, but I don't see any crashes reported. Actually, the first race in that build does not make much sense either. The read is reported at /home/travis/gopath/src/github.com/go-redis/redis/tx_test.go:106 +0x89 and the write at /home/travis/gopath/src/github.com/go-redis/redis/ring.go:317 +0x1a6. Those are two different tests and two different clients that should not share anything.
I think some kind of interrupted operation may trigger this problem (though I am not sure which operation gets interrupted). I often press Ctrl+C to cancel a run in the middle of a unit test, and sometimes I then see a data race that I can't explain. Would a unit test that is interrupted by some other error have the same effect? Since I recently fixed some intermittent unit-test failures, I can hardly reproduce this data race anymore.
It definitely should not. But I can't explain what is happening, so that may actually be the reason. I think it is safe to say that these are not real races, but we may have a serious problem that is being uncovered by the race detector. In that case the code should crash in production, which is not happening...
The best thing we can do is ...
Over the weekend I ran the tests continuously in 3 containers for 30 hours each. Unfortunately, the data race I was expecting did not appear; I even began to wonder whether this error had been fixed by accident. I plan to do even crazier testing this week and look forward to it. The redis-cluster tests are not stable in performance, so I plan to disable them before running the test.
Wow, that is impressive and should be more than enough judging by the build history. Going to try the same locally on Go 1.16.2:

```bash
#!/bin/bash
for i in {1..100}
do
    echo $i
    GOFLAGS="-count=1" go test ./... -short -race
done
```

Will report if I get any results...
No failures locally after 100 runs.
The race seems more likely to appear on Travis CI: https://travis-ci.org/github/go-redis/redis/jobs/763230641
I encountered a data race locally that I cannot make sense of; I think it should not appear under any circumstances. The relevant code:

```asm
// race_amd64.s
226 // Store
227 TEXT sync∕atomic·StoreInt32(SB), NOSPLIT, $0-0
228     MOVQ $__tsan_go_atomic32_store(SB), AX
229     CALL racecallatomic<>(SB)
230     RET
```

```go
// redis.go
146 func newBaseClient(opt *Options, connPool pool.Pooler) *baseClient {
147     return &baseClient{
148         opt:      opt,
149         connPool: connPool,
150     }
151 }
```

WARNING: DATA RACE
Write at 0x00c0006fe198 by goroutine 889:
sync/atomic.StoreInt32()
/usr/local/go/src/runtime/race_amd64.s:229 +0xb
github.com/go-redis/redis/v8.(*clusterStateHolder).LazyReload.func1()
/redis/cluster.go:647 +0xf6
Previous write at 0x00c0006fe198 by goroutine 566:
github.com/go-redis/redis/v8.newBaseClient()
/redis/redis.go:147 +0xba
github.com/go-redis/redis/v8.NewClient()
/redis/redis.go:571 +0x70
github.com/go-redis/redis/v8.newClusterNode()
/redis/cluster.go:180 +0x70a
github.com/go-redis/redis/v8.(*clusterNodes).Get()
/redis/cluster.go:371 +0x1fb
github.com/go-redis/redis/v8.(*ClusterClient).checkMovedErr()
/redis/cluster.go:1225 +0xea
github.com/go-redis/redis/v8.(*ClusterClient).pipelineReadCmds()
/redis/cluster.go:1201 +0x189
github.com/go-redis/redis/v8.(*ClusterClient)._processPipelineNode.func1.1.2()
/redis/cluster.go:1180 +0x8e
github.com/go-redis/redis/v8/internal/pool.(*Conn).WithReader.func1()
/redis/internal/pool/conn.go:73 +0x43e
github.com/go-redis/redis/v8/internal.WithSpan()
/redis/internal/util.go:59 +0x296
github.com/go-redis/redis/v8/internal/pool.(*Conn).WithReader()
/redis/internal/pool/conn.go:69 +0xa8
github.com/go-redis/redis/v8.(*ClusterClient)._processPipelineNode.func1.1()
/redis/cluster.go:1179 +0x2cd
github.com/go-redis/redis/v8.(*baseClient).withConn.func1()
/redis/redis.go:309 +0x459
github.com/go-redis/redis/v8/internal.WithSpan()
/redis/internal/util.go:59 +0x296
github.com/go-redis/redis/v8.(*baseClient).withConn()
/redis/redis.go:291 +0x9c
github.com/go-redis/redis/v8.(*ClusterClient)._processPipelineNode.func1()
/redis/cluster.go:1171 +0x1c9
github.com/go-redis/redis/v8.hooks.processPipeline.func1()
/redis/redis.go:95 +0x92
github.com/go-redis/redis/v8.hooks.withContext()
/redis/redis.go:134 +0x450
github.com/go-redis/redis/v8.hooks.processPipeline()
/redis/redis.go:94 +0x3c5
github.com/go-redis/redis/v8.(*ClusterClient)._processPipelineNode()
/redis/cluster.go:1170 +0x126
github.com/go-redis/redis/v8.(*ClusterClient)._processPipeline.func1()
/redis/cluster.go:1104 +0xea
Goroutine 889 (running) created at:
github.com/go-redis/redis/v8.(*clusterStateHolder).LazyReload()
/redis/cluster.go:639 +0x7a
github.com/go-redis/redis/v8.(*ClusterClient).checkMovedErr()
/redis/cluster.go:1231 +0x38b
github.com/go-redis/redis/v8.(*ClusterClient).pipelineReadCmds()
/redis/cluster.go:1201 +0x189
github.com/go-redis/redis/v8.(*ClusterClient)._processPipelineNode.func1.1.2()
/redis/cluster.go:1180 +0x8e
github.com/go-redis/redis/v8/internal/pool.(*Conn).WithReader.func1()
/redis/internal/pool/conn.go:73 +0x43e
github.com/go-redis/redis/v8/internal.WithSpan()
/redis/internal/util.go:59 +0x296
github.com/go-redis/redis/v8/internal/pool.(*Conn).WithReader()
/redis/internal/pool/conn.go:69 +0xa8
github.com/go-redis/redis/v8.(*ClusterClient)._processPipelineNode.func1.1()
/redis/cluster.go:1179 +0x2cd
github.com/go-redis/redis/v8.(*baseClient).withConn.func1()
/redis/redis.go:309 +0x459
github.com/go-redis/redis/v8/internal.WithSpan()
/redis/internal/util.go:59 +0x296
github.com/go-redis/redis/v8.(*baseClient).withConn()
/redis/redis.go:291 +0x9c
github.com/go-redis/redis/v8.(*ClusterClient)._processPipelineNode.func1()
/redis/cluster.go:1171 +0x1c9
github.com/go-redis/redis/v8.hooks.processPipeline.func1()
/redis/redis.go:95 +0x92
github.com/go-redis/redis/v8.hooks.withContext()
/redis/redis.go:134 +0x450
github.com/go-redis/redis/v8.hooks.processPipeline()
/redis/redis.go:94 +0x3c5
github.com/go-redis/redis/v8.(*ClusterClient)._processPipelineNode()
/redis/cluster.go:1170 +0x126
github.com/go-redis/redis/v8.(*ClusterClient)._processPipeline.func1()
/redis/cluster.go:1104 +0xea
Goroutine 566 (finished) created at:
github.com/go-redis/redis/v8.(*ClusterClient)._processPipeline()
/redis/cluster.go:1101 +0x2a7
github.com/go-redis/redis/v8.(*ClusterClient)._processPipeline-fm()
/redis/cluster.go:1080 +0x7a
github.com/go-redis/redis/v8.hooks.processPipeline.func1()
/redis/redis.go:95 +0x92
github.com/go-redis/redis/v8.hooks.withContext()
/redis/redis.go:134 +0x450
github.com/go-redis/redis/v8.hooks.processPipeline()
/redis/redis.go:94 +0x3c5
github.com/go-redis/redis/v8.(*ClusterClient).processPipeline()
/redis/cluster.go:1077 +0xd6
github.com/go-redis/redis/v8.(*ClusterClient).processPipeline-fm()
/redis/cluster.go:1076 +0x7a
github.com/go-redis/redis/v8.(*Pipeline).Exec()
/redis/pipeline.go:115 +0x1d1
github.com/go-redis/redis/v8_test.glob..func1.1.8.1.2()
/redis/cluster_test.go:454 +0x5c4
github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync()
/gopath/pkg/mod/github.com/onsi/ginkgo@v1.15.0/internal/leafnodes/runner.go:113 +0xfc
github.com/onsi/ginkgo/internal/leafnodes.(*runner).run()
/gopath/pkg/mod/github.com/onsi/ginkgo@v1.15.0/internal/leafnodes/runner.go:64 +0x184
github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run()
/gopath/pkg/mod/github.com/onsi/ginkgo@v1.15.0/internal/leafnodes/it_node.go:26 +0xb9
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample()
/gopath/pkg/mod/github.com/onsi/ginkgo@v1.15.0/internal/spec/spec.go:215 +0x7fd
github.com/onsi/ginkgo/internal/spec.(*Spec).Run()
/gopath/pkg/mod/github.com/onsi/ginkgo@v1.15.0/internal/spec/spec.go:138 +0x187
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec()
/gopath/pkg/mod/github.com/onsi/ginkgo@v1.15.0/internal/specrunner/spec_runner.go:200 +0x17b
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs()
/gopath/pkg/mod/github.com/onsi/ginkgo@v1.15.0/internal/specrunner/spec_runner.go:170 +0x22a
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run()
/gopath/pkg/mod/github.com/onsi/ginkgo@v1.15.0/internal/specrunner/spec_runner.go:66 +0x145
github.com/onsi/ginkgo/internal/suite.(*Suite).Run()
/gopath/pkg/mod/github.com/onsi/ginkgo@v1.15.0/internal/suite/suite.go:79 +0x899
github.com/onsi/ginkgo.RunSpecsWithCustomReporters()
/gopath/pkg/mod/github.com/onsi/ginkgo@v1.15.0/ginkgo_dsl.go:229 +0x35c
github.com/onsi/ginkgo.RunSpecs()
/gopath/pkg/mod/github.com/onsi/ginkgo@v1.15.0/ginkgo_dsl.go:210 +0x258
github.com/go-redis/redis/v8_test.TestGinkgoSuite()
/redis/main_test.go:122 +0x108
testing.tRunner()
/usr/local/go/src/testing/testing.go:1123 +0x202
==================
Edit: In my case the error was between the chair and the keyboard. My issue is solved: it was not caused by a data race but by a stack overflow caused by stupidity.
Full build log: https://travis-ci.org/github/go-redis/redis/jobs/757558441 . Help is appreciated; so far I have not been able to understand what is happening.