Fix race in getEndpointsFromStore #750
mrjana merged 1 commit into moby:master from LK4D4:endpoints_race
Conversation
|
tests passed for me locally :/ but probably there is some deadlock |
|
However, I stress-tested the docker daemon with this patch and didn't find any deadlocks. |
|
It's not your diff. CircleCI sometimes locks up and times out. Restarted the test. Never seen that in local runs. |
|
LGTM. But I am not sure I understand the race condition here. We have locking in most (must be all) of the |
|
@mavenugo it can be two concurrent assignments to |
|
@LK4D4 It is a valid race when we get it from the cache, i.e. not from the first-time population, in which case two goroutines can indeed get hold of the same pointer. But this is a case of harmless race, since the Go memory model ensures that readers see either the old value or the new one. Which brings us to the question of why we even need to update it with the network value, which should already be set for every endpoint. So that assignment is redundant. Maybe we should just remove that assignment altogether? BTW, out of curiosity, did this show up in the go race detector? |
|
@mrjana Yes, it is from the go race detector. I'm not sure which part of the memory model you're talking about, but the Go developers have said more than once that races can produce corrupted values if the type is larger than one word. Maybe that changed some time ago? |
|
@mrjana this assignment create multiple races with other functions too. Removing would be perfect. |
|
@LK4D4 I am referring to https://golang.org/ref/mem. And this assignment will happen in one machine word, whether it is 32-bit or 64-bit. But having said that, it's worth it to fix it by removing it altogether. Do you want to do this as part of this PR? |
Race detector was angry about that assignment
Signed-off-by: Alexander Morozov <lk4d4@docker.com>
|
@mrjana yeah, I pushed the change. |
|
LGTM |
|
Thanks @LK4D4. LGTM |
Fix race in getEndpointsFromStore
The race can occur between two concurrent getEndpointsFromStore calls, or between it and getNetwork.