mvcc: fix panic by allowing future revision watcher from restore operation #9775

Merged
merged 2 commits into etcd-io:master from fix-grpc-proxy-watch on May 31, 2018

Conversation

@gyuho (Contributor) commented May 24, 2018

This can happen without the gRPC proxy as well.

Fix panic when gRPC proxy leader watcher is restored:

```
go test -v -tags cluster_proxy -cpu 4 -race -run TestV3WatchRestoreSnapshotUnsync

=== RUN   TestV3WatchRestoreSnapshotUnsync
panic: watcher minimum revision 9223372036854775805 should not exceed current revision 16

goroutine 156 [running]:
github.com/coreos/etcd/mvcc.(*watcherGroup).chooseAll(0xc4202b8720, 0x10, 0xffffffffffffffff, 0x1)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watcher_group.go:242 +0x3b5
github.com/coreos/etcd/mvcc.(*watcherGroup).choose(0xc4202b8720, 0x200, 0x10, 0xffffffffffffffff, 0xc420253378, 0xc420253378)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watcher_group.go:225 +0x289
github.com/coreos/etcd/mvcc.(*watchableStore).syncWatchers(0xc4202b86e0, 0x0)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watchable_store.go:340 +0x237
github.com/coreos/etcd/mvcc.(*watchableStore).syncWatchersLoop(0xc4202b86e0)
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watchable_store.go:214 +0x280
created by github.com/coreos/etcd/mvcc.newWatchableStore
	/home/gyuho/go/src/github.com/coreos/etcd/mvcc/watchable_store.go:90 +0x477
exit status 2
FAIL	github.com/coreos/etcd/integration	2.551s
```

The gRPC proxy spawns a watcher with the key "proxy-namespace__lostleader"
and watch revision "int64(math.MaxInt64 - 2)" to detect leader loss.
But when the partitioned node restores, this watcher triggers the panic
"watcher minimum revision ... should not exceed current ...".

This check was added a long time ago, by my PR, when there was no gRPC proxy:

#4043 (comment)

> we can remove this checking actually. it is impossible for a unsynced watching to have a future rev. or we should just panic here.

However, it is now possible for an unsynced watcher to have a future
revision when it was moved from the synced watcher group by the
restore operation.

This PR adds "restore" flag to indicate that a watcher was moved
from the synced watcher group with restore operation. Otherwise,
the watcher with future revision in an unsynced watcher group
would still panic.
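A self-contained sketch of that idea, with stand-in types rather than the actual mvcc code (the type name watcher, its field names, and chooseSketch are illustrative assumptions):

```go
package main

import "fmt"

// watcher is a pared-down stand-in for mvcc's watcher; only the fields needed
// to illustrate the "restore" flag are included.
type watcher struct {
	minRev  int64 // next revision the watcher expects to observe
	restore bool  // set when the watcher was moved from synced by a restore
}

// chooseSketch mimics the shape of the unsynced-group scan: a watcher whose
// minimum revision is in the future is tolerated only if it carries the
// restore flag; otherwise it is still treated as a bug and panics.
func chooseSketch(watchers []*watcher, curRev int64) {
	for _, w := range watchers {
		if w.minRev > curRev {
			if !w.restore {
				panic(fmt.Errorf("watcher minimum revision %d should not exceed current revision %d", w.minRev, curRev))
			}
			// Clear the flag once the watcher has been seen after restore,
			// so any later future-revision watcher still trips the panic.
			w.restore = false
		}
	}
}

func main() {
	// A proxy-style watcher with a far-future revision, moved to unsynced
	// by a restore (revisions taken from the panic message above).
	w := &watcher{minRev: 9223372036854775805, restore: true}
	chooseSketch([]*watcher{w}, 16) // no panic with the restore flag set
	fmt.Println("restore flag after scan:", w.restore)
}
```

The actual change touches mvcc/watchable_store.go and mvcc/watcher_group.go (see the coverage report and the review excerpt further below).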

Example logs with future revision watcher from restore operation:

{"level":"info","ts":1527196358.9057755,"caller":"mvcc/watcher_group.go:261","msg":"choosing future revision watcher from restore operation","watch-key":"proxy-namespace__lostleader","watch-revision":9223372036854775805,"current-revision":16}
{"level":"info","ts":1527196358.910349,"caller":"mvcc/watcher_group.go:261","msg":"choosing future revision watcher from restore operation","watch-key":"proxy-namespace__lostleader","watch-revision":9223372036854775805,"current-revision":16}

This is reproducible via TestV3WatchRestoreSnapshotUnsync (#9745, for #9281).

/cc @xiang90 @heyitsanthony

@codecov-io commented May 24, 2018

Codecov Report

❗ No coverage uploaded for pull request base (master@476c9cb).
The diff coverage is 50%.


```
@@            Coverage Diff            @@
##             master    #9775   +/-   ##
=========================================
  Coverage          ?   69.64%
=========================================
  Files             ?      375
  Lines             ?    35209
  Branches          ?        0
=========================================
  Hits              ?    24523
  Misses            ?     8925
  Partials          ?     1761
```
Impacted Files            Coverage           Δ
mvcc/watchable_store.go   82.8% <100%>       (ø)
mvcc/watcher_group.go     89.62% <33.33%>    (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@xiang90 (Contributor) commented May 24, 2018

> However, now it's possible that a unsynced watcher has a future
> revision, when it was moved from a synced watcher group through
> restore operation. And there was no gRPC proxy, when we added this
> check.

why would we restore to a store with an old revision? restore should always end up with a newer revision, no?

@gyuho (Contributor, Author) commented May 24, 2018

@xiang90 TestV3WatchRestoreSnapshotUnsync tests such a case.

https://github.com/coreos/etcd/blob/f5e52c995cc688126d7749e409fb80787791254b/integration/v3_watch_restore_test.go#L28-L38

It happens as follows:

  1. The current storage revision is 1.
  2. A watcher is requested on node A with revision 5, and goes to the synced watcher group.
  3. Node A is network partitioned.
  4. The cluster makes progress exceeding the number of "snapshot catch up entries"; the current revision is now 16.
  5. The network partition is removed.
  6. A snapshot is sent from the active leader to the partitioned node A.
  7. The watcher on node A is still in the synced watcher group, since the node was partitioned.
  8. The leader's snapshot triggers the restore operation on node A.
  9. All synced watchers on node A are moved to the unsynced watcher group, including the one with watch revision 5 (see the sketch below).

Thus, node A ends up choosing the watcher with watch revision 5 while the current revision is 16. In this case, restore ends up with an older revision.
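To make step 9 concrete, here is a self-contained sketch of the restore-time move under the scheme in this PR: every synced watcher is pushed back to the unsynced group and marked with the restore flag. The type and function names are stand-ins, not the actual mvcc code:

```go
package main

import "fmt"

// Minimal stand-ins for the mvcc watcher groups (names are illustrative).
type watcher struct {
	key     string
	minRev  int64
	restore bool
}

type watcherGroup struct {
	watchers map[*watcher]struct{}
}

func newWatcherGroup() *watcherGroup {
	return &watcherGroup{watchers: make(map[*watcher]struct{})}
}

// restoreSketch mirrors what a snapshot restore does to the watcher groups:
// every synced watcher moves to the unsynced group, flagged as restored,
// because the store's current revision may no longer match what the watcher
// was synced against.
func restoreSketch(synced, unsynced *watcherGroup) {
	for w := range synced.watchers {
		w.restore = true
		unsynced.watchers[w] = struct{}{}
	}
	synced.watchers = make(map[*watcher]struct{})
}

func main() {
	synced, unsynced := newWatcherGroup(), newWatcherGroup()
	w := &watcher{key: "foo", minRev: 5} // the watcher from step 2
	synced.watchers[w] = struct{}{}

	restoreSketch(synced, unsynced) // step 9: snapshot restore on node A
	fmt.Printf("unsynced=%d synced=%d restore=%v\n",
		len(unsynced.watchers), len(synced.watchers), w.restore)
}
```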

@xiang90 (Contributor) commented May 24, 2018

ok. makes sense to me now.

@xiang90 (Contributor) commented May 24, 2018

@gyuho

but then when that happens, we might not be able to send the updates between 5 and 16 if the newer version of the store has, say, revision 8 already compacted.

@gyuho (Contributor, Author) commented May 24, 2018

> revision 8 already compacted.

@xiang90 That is handled via `if w.minRev < compactRev {`:

https://github.com/coreos/etcd/blob/f5e52c995cc688126d7749e409fb80787791254b/mvcc/watcher_group.go#L238-L251

compactRev would be 8 after restore, and the watcher would be deleted.
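For reference, a sketch of the compaction branch being referred to, again with stand-in types rather than the actual watcher_group.go code: a watcher whose next revision is already behind the compaction point is dropped (in etcd it is first notified of the compaction) instead of being synced.

```go
package main

import (
	"fmt"
	"math"
)

// Stand-in watcher with just enough state for this sketch.
type watcher struct {
	minRev int64
}

// scanSketch mirrors the shape of the unsynced scan: watchers behind the
// compaction point are removed, everything else contributes to the minimum
// revision that the next sync pass has to read from.
func scanSketch(watchers map[*watcher]struct{}, curRev, compactRev int64) int64 {
	minRev := int64(math.MaxInt64)
	for w := range watchers {
		if w.minRev < compactRev {
			// In etcd a WatchResponse carrying CompactRevision is sent before
			// the watcher is deleted; the sketch only performs the delete.
			delete(watchers, w)
			continue
		}
		if w.minRev < minRev {
			minRev = w.minRev
		}
	}
	return minRev
}

func main() {
	ws := map[*watcher]struct{}{
		{minRev: 5}:  {}, // behind compactRev=8 after restore: dropped
		{minRev: 12}: {}, // still syncable
	}
	fmt.Println("sync from revision:", scanSketch(ws, 16, 8))
}
```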

Review thread on the new check in mvcc/watcher_group.go:

```go
if !w.restore {
	panic(fmt.Errorf("watcher minimum revision %d should not exceed current revision %d", w.minRev, curRev))
}
if lg != nil {
```
A reviewer (Contributor) commented on the code above:

probably just remove the logging and add some comments on why this can happen on the restore case.

@gyuho (Contributor, Author) replied:

Sure!

@xiang90 (Contributor) commented May 24, 2018

can you add a test case to test it more directly?

@gyuho added the WIP label May 24, 2018
@gyuho removed the WIP label May 25, 2018
@gyuho (Contributor, Author) commented May 25, 2018

@xiang90

Added a test case that fails 100% of the time without the patch. This also fixes the failure in `go test -v -tags cluster_proxy -cpu 4 -race -run TestV3WatchRestoreSnapshotUnsync`.
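As a self-contained illustration of what such a test guards against (stand-in types again, not the actual mvcc test code): without the restore flag the scan panics on a future-revision watcher, and with the flag it does not.

```go
package main

import "fmt"

// Stand-ins mirroring the earlier sketches.
type watcher struct {
	minRev  int64
	restore bool
}

// scan reports whether the panic from the unpatched check would fire.
func scan(ws []*watcher, curRev int64) (panicked bool) {
	defer func() {
		if recover() != nil {
			panicked = true
		}
	}()
	for _, w := range ws {
		if w.minRev > curRev {
			if !w.restore {
				panic(fmt.Sprintf("watcher minimum revision %d should not exceed current revision %d", w.minRev, curRev))
			}
			w.restore = false
		}
	}
	return false
}

func main() {
	curRev := int64(16)
	future := &watcher{minRev: 9223372036854775805}

	// Unpatched behavior: the future-revision watcher is never legal.
	fmt.Println("without restore flag, panics:", scan([]*watcher{future}, curRev))

	// Patched behavior: a restore marked the watcher, so the scan tolerates it.
	future.restore = true
	fmt.Println("with restore flag, panics:", scan([]*watcher{future}, curRev))
}
```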

gyuho added 2 commits

mvcc: fix panic by allowing future revision watcher from restore operation

Signed-off-by: Gyuho Lee <gyuhox@gmail.com>
@xiang90 (Contributor) commented May 30, 2018

lgtm

@jpbetz (Contributor) left a comment

lgtm

Thanks!

@gyuho merged commit 391433b into etcd-io:master on May 31, 2018
@gyuho deleted the fix-grpc-proxy-watch branch on May 31, 2018 18:06