Add follower read support to TiDB #11347

Merged
merged 24 commits into master from replica_read
Aug 16, 2019
Commits
11f6f5e
Initial effort to add follower read to TiDB
sunxiaoguang Jul 20, 2019
ef3a32b
Add tests for replica read
sunxiaoguang Jul 23, 2019
8beffef
Merge branch 'master' into replica_read
sunxiaoguang Jul 23, 2019
013e0ca
Updated kvproto dependency and fixed a test
sunxiaoguang Jul 23, 2019
7277838
Add more tests
sunxiaoguang Jul 23, 2019
31f1333
Merge branch 'master' of https://github.com/pingcap/tidb into replica…
sunxiaoguang Jul 24, 2019
c43f94b
Merge branch 'master' into replica_read
sunxiaoguang Jul 25, 2019
315aa95
Update kvproto
sunxiaoguang Jul 31, 2019
7bb7d76
Run go mod tidy
sunxiaoguang Aug 1, 2019
3c3f2e0
Merge branch 'master' into replica_read
sunxiaoguang Aug 1, 2019
f4d59d6
Merge branch 'master' into replica_read
sunxiaoguang Aug 2, 2019
237005e
Refactor interface between SQL and KV layers
sunxiaoguang Aug 2, 2019
136d760
Do not store index of followers in region cache
sunxiaoguang Aug 5, 2019
93c88d4
Use session scope coprocessor client
sunxiaoguang Aug 5, 2019
03c9576
Merge branch 'master' into replica_read
sunxiaoguang Aug 5, 2019
3968c6b
Try next follower if selected follower had failed
sunxiaoguang Aug 6, 2019
3bcd5c8
Handle failed follower when reading from it
sunxiaoguang Aug 10, 2019
1775d42
Merge branch 'master' into replica_read
sunxiaoguang Aug 11, 2019
13065e8
Merge branch 'master' into replica_read
sunxiaoguang Aug 11, 2019
39c9add
Merge branch 'master' into replica_read
coocood Aug 12, 2019
d3b64ce
Merge branch 'master' of https://github.com/pingcap/tidb into replica…
sunxiaoguang Aug 13, 2019
b03c2d9
Merge branch 'master' into replica_read
sunxiaoguang Aug 14, 2019
682333a
Assign different replica read seeds to snapshots
sunxiaoguang Aug 15, 2019
b0bf2a3
Merge branch 'master' into replica_read
sre-bot Aug 16, 2019
3 changes: 2 additions & 1 deletion distsql/request_builder.go
@@ -152,12 +152,13 @@ func (builder *RequestBuilder) getKVPriority(sv *variable.SessionVars) int {
}

// SetFromSessionVars sets the following fields for "kv.Request" from session variables:
// "Concurrency", "IsolationLevel", "NotFillCache".
// "Concurrency", "IsolationLevel", "NotFillCache", "ReplicaRead".
func (builder *RequestBuilder) SetFromSessionVars(sv *variable.SessionVars) *RequestBuilder {
builder.Request.Concurrency = sv.DistSQLScanConcurrency
builder.Request.IsolationLevel = builder.getIsolationLevel()
builder.Request.NotFillCache = sv.StmtCtx.NotFillCache
builder.Request.Priority = builder.getKVPriority(sv)
builder.Request.ReplicaRead = sv.ReplicaRead
return builder
}

11 changes: 9 additions & 2 deletions executor/analyze.go
@@ -625,7 +625,7 @@ func (e *AnalyzeFastExec) getSampRegionsRowCount(bo *tikv.Backoffer, needRebuild
})
var resp *tikvrpc.Response
var rpcCtx *tikv.RPCContext
rpcCtx, *err = e.cache.GetRPCContext(bo, loc.Region)
rpcCtx, *err = e.cache.GetRPCContext(bo, loc.Region, e.ctx.GetSessionVars().ReplicaRead)
if *err != nil {
return
}
@@ -925,6 +925,9 @@ func (e *AnalyzeFastExec) handleScanTasks(bo *tikv.Backoffer) (keysSize int, err
if err != nil {
return 0, err
}
if e.ctx.GetSessionVars().ReplicaRead.IsFollowerRead() {
snapshot.SetFollowerRead()
}
for _, t := range e.scanTasks {
iter, err := snapshot.Iter(t.StartKey, t.EndKey)
if err != nil {
@@ -943,10 +946,14 @@ func (e *AnalyzeFastExec) handleSampTasks(bo *tikv.Backoffer, workID int, err *e
defer e.wg.Done()
var snapshot kv.Snapshot
snapshot, *err = e.ctx.GetStore().(tikv.Storage).GetSnapshot(kv.MaxVersion)
rander := rand.New(rand.NewSource(e.randSeed + int64(workID)))
if *err != nil {
return
}
if e.ctx.GetSessionVars().ReplicaRead.IsFollowerRead() {
snapshot.SetFollowerRead()
}
rander := rand.New(rand.NewSource(e.randSeed + int64(workID)))

for i := workID; i < len(e.sampTasks); i += e.concurrency {
task := e.sampTasks[i]
if task.SampSize == 0 {
3 changes: 3 additions & 0 deletions executor/point_get.go
@@ -91,6 +91,9 @@ func (e *PointGetExecutor) Next(ctx context.Context, req *chunk.Chunk) error {
if err != nil {
return err
}
if e.ctx.GetSessionVars().ReplicaRead.IsFollowerRead() {
e.snapshot.SetFollowerRead()
}
if e.idxInfo != nil {
idxKey, err1 := e.encodeIndexKey()
if err1 != nil && !kv.ErrNotExist.Equal(err1) {
2 changes: 2 additions & 0 deletions go.mod
@@ -76,3 +76,5 @@ require (
sourcegraph.com/sourcegraph/appdash v0.0.0-20180531100431-4c381bd170b4
sourcegraph.com/sourcegraph/appdash-data v0.0.0-20151005221446-73f23eafcf67
)

replace github.com/pingcap/kvproto v0.0.0-20190703131923-d9830856b531 => github.com/5kbpers/kvproto v0.0.0-20190711121214-0c6cf07c691d
4 changes: 2 additions & 2 deletions go.sum
@@ -1,4 +1,6 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/5kbpers/kvproto v0.0.0-20190711121214-0c6cf07c691d h1:y6CZS1/jYiXQeJz+m/gENK7GBJRYNYZ3Z83pf9FiRSU=
github.com/5kbpers/kvproto v0.0.0-20190711121214-0c6cf07c691d/go.mod h1:QMdbTAXCHzzygQzqcG9uVUgU2fKeSN1GmfMiykdSzzY=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/StackExchange/wmi v0.0.0-20180725035823-b12b22c5341f h1:5ZfJxyXo8KyX8DgGXC5B7ILL8y51fci/qYz2B4j8iLY=
@@ -159,8 +161,6 @@ github.com/pingcap/failpoint v0.0.0-20190512135322-30cc7431d99c/go.mod h1:DNS3Qg
github.com/pingcap/goleveldb v0.0.0-20171020122428-b9ff6c35079e h1:P73/4dPCL96rGrobssy1nVy2VaVpNCuLpCbr+FEaTA8=
github.com/pingcap/goleveldb v0.0.0-20171020122428-b9ff6c35079e/go.mod h1:O17XtbryoCJhkKGbT62+L2OlrniwqiGLSqrmdHCMzZw=
github.com/pingcap/kvproto v0.0.0-20190516013202-4cf58ad90b6c/go.mod h1:QMdbTAXCHzzygQzqcG9uVUgU2fKeSN1GmfMiykdSzzY=
github.com/pingcap/kvproto v0.0.0-20190703131923-d9830856b531 h1:8xk2HobDwClB5E3Hv9TEPiS7K7bv3ykWHLyZzuUYywI=
lzmhhh123 marked this conversation as resolved.
github.com/pingcap/kvproto v0.0.0-20190703131923-d9830856b531/go.mod h1:QMdbTAXCHzzygQzqcG9uVUgU2fKeSN1GmfMiykdSzzY=
github.com/pingcap/log v0.0.0-20190214045112-b37da76f67a7/go.mod h1:xsfkWVaFVV5B8e1K9seWfyJWFrIhbtUTAD8NV1Pq3+w=
github.com/pingcap/log v0.0.0-20190307075452-bd41d9273596 h1:t2OQTpPJnrPDGlvA+3FwJptMTt6MEPdzK1Wt99oaefQ=
github.com/pingcap/log v0.0.0-20190307075452-bd41d9273596/go.mod h1:WpHUKhNZ18v116SvGrmjkA9CBhYmuUTKL+p8JC9ANEw=
29 changes: 29 additions & 0 deletions kv/kv.go
@@ -69,6 +69,23 @@
RC
)

// ReplicaReadType is the type of replica to read data from
type ReplicaReadType byte

const (
// ReplicaReadLeader stands for 'read from leader'.
ReplicaReadLeader ReplicaReadType = iota
Member: I prefer using 1 << iota here, so we can use bit operations to easily check whether the type is set.

Contributor Author: I thought about this initially, but I doubt it is rational to read from every kind of replica. Reads from different types of replicas may have different latency characteristics and put different burdens on the leader; we can discuss this further.

// ReplicaReadFollower stands for 'read from follower'.
ReplicaReadFollower
// ReplicaReadLearner stands for 'read from learner'.
ReplicaReadLearner
)

// IsFollowerRead checks if follower is going to be used to read data.
func (r ReplicaReadType) IsFollowerRead() bool {
return r == ReplicaReadFollower
}

// These limits are enforced to make sure the transaction can be well handled by TiKV.
var (
// TxnEntrySizeLimit is limit of single entry size (len(key) + len(value)).
@@ -158,6 +175,11 @@ type Transaction interface {
// BatchGet gets kv from the memory buffer of statement and transaction, and the kv storage.
BatchGet(keys []Key) (map[string][]byte, error)
IsPessimistic() bool

// SetFollowerRead sets current transaction to read data from follower
SetFollowerRead()
// ClearFollowerRead disables follower read on current transaction
ClearFollowerRead()
}

// AssertionProto is an interface defined for the assertion protocol.
@@ -222,6 +244,8 @@ type Request struct {
Streaming bool
// MemTracker is used to trace and control memory usage in co-processor layer.
MemTracker *memory.Tracker
// ReplicaRead is used for reading data from replicas, only follower is supported at this time.
ReplicaRead ReplicaReadType
}

// ResultSubset represents a result subset from a single storage unit.
@@ -253,6 +277,11 @@ type Snapshot interface {
BatchGet(keys []Key) (map[string][]byte, error)
// SetPriority snapshot set the priority
SetPriority(priority int)

// SetFollowerRead sets current snapshot to read data from follower
SetFollowerRead()
// ClearFollowerRead disables follower read on current snapshot
ClearFollowerRead()
}

// Driver is the interface that must be implemented by a KV storage.
7 changes: 7 additions & 0 deletions kv/mock.go
@@ -116,6 +116,9 @@ func (t *mockTxn) SetVars(vars *Variables) {
func (t *mockTxn) SetAssertion(key Key, assertion AssertionType) {}
func (t *mockTxn) ConfirmAssertions(succ bool) {}

func (t *mockTxn) SetFollowerRead() {}
func (t *mockTxn) ClearFollowerRead() {}

// NewMockTxn new a mockTxn.
func NewMockTxn() Transaction {
return &mockTxn{
@@ -229,3 +232,7 @@ func (s *mockSnapshot) Iter(k Key, upperBound Key) (Iterator, error) {
func (s *mockSnapshot) IterReverse(k Key) (Iterator, error) {
return s.store.IterReverse(k)
}

func (s *mockSnapshot) SetFollowerRead() {}

func (s *mockSnapshot) ClearFollowerRead() {}
8 changes: 7 additions & 1 deletion session/session.go
@@ -1256,6 +1256,9 @@ func (s *session) Txn(active bool) (kv.Transaction, error) {
s.sessionVars.SetStatusFlag(mysql.ServerStatusInTrans, true)
}
s.sessionVars.TxnCtx.CouldRetry = s.isTxnRetryable()
if s.sessionVars.ReplicaRead.IsFollowerRead() {
s.txn.SetFollowerRead()
}
}
return &s.txn, nil
}
@@ -1315,7 +1318,10 @@ func (s *session) NewTxn(ctx context.Context) error {
return err
}
txn.SetCap(s.getMembufCap())
txn.SetVars(s.sessionVars.KVVars)
txn.SetVars(s.GetSessionVars().KVVars)
tiancaiamao marked this conversation as resolved.
if s.GetSessionVars().ReplicaRead.IsFollowerRead() {
txn.SetFollowerRead()
}
s.txn.changeInvalidToValid(txn)
is := domain.GetDomain(s).InfoSchema()
s.sessionVars.TxnCtx = &variable.TransactionContext{
9 changes: 9 additions & 0 deletions sessionctx/variable/session.go
@@ -399,6 +399,9 @@ type SessionVars struct {

// use noop funcs or not
EnableNoopFuncs bool

// ReplicaRead is used for reading data from replicas, only follower is supported at this time.
ReplicaRead kv.ReplicaReadType
}

// ConnectionInfo present connection used by audit.
@@ -831,6 +834,12 @@ func (s *SessionVars) SetSystemVar(name string, val string) error {
s.EnableIndexMerge = TiDBOptOn(val)
case TiDBEnableNoopFuncs:
s.EnableNoopFuncs = TiDBOptOn(val)
case TiDBReplicaRead:
if strings.EqualFold(val, "follower") {
s.ReplicaRead = kv.ReplicaReadFollower
} else if strings.EqualFold(val, "leader") || len(val) == 0 {
s.ReplicaRead = kv.ReplicaReadLeader
}
sunxiaoguang marked this conversation as resolved.
}
s.systems[name] = val
return nil
1 change: 1 addition & 0 deletions sessionctx/variable/sysvar.go
@@ -704,6 +704,7 @@ var defaultSysVars = []*SysVar{
{ScopeSession, TiDBLowResolutionTSO, "0"},
{ScopeSession, TiDBExpensiveQueryTimeThreshold, strconv.Itoa(DefTiDBExpensiveQueryTimeThreshold)},
{ScopeGlobal | ScopeSession, TiDBEnableNoopFuncs, BoolToIntStr(DefTiDBEnableNoopFuncs)},
{ScopeSession, TiDBReplicaRead, "leader"},
}

// SynonymsSysVariables is synonyms of system variables.
3 changes: 3 additions & 0 deletions sessionctx/variable/tidb_vars.go
@@ -141,6 +141,9 @@

// TiDBLowResolutionTSO is used for reading data with low resolution TSO which is updated once every two seconds
TiDBLowResolutionTSO = "tidb_low_resolution_tso"

// TiDBReplicaRead is used for reading data from replicas, followers for example.
TiDBReplicaRead = "tidb_replica_read"
)

// TiDB system variable names that both in session and global scope.
7 changes: 7 additions & 0 deletions sessionctx/variable/varsutil.go
@@ -560,6 +560,13 @@ func ValidateSetSystemVar(vars *SessionVars, name string, value string) (string,
if v <= 0 {
return value, errors.Errorf("tidb_wait_split_region_timeout(%d) cannot be smaller than 1", v)
}
case TiDBReplicaRead:
if strings.EqualFold(value, "follower") {
return "follower", nil
} else if strings.EqualFold(value, "leader") || len(value) == 0 {
return "leader", nil
}
return value, ErrWrongValueForVar.GenWithStackByArgs(name, value)
}
return value, nil
}
1 change: 1 addition & 0 deletions store/tikv/coprocessor.go
@@ -622,6 +622,7 @@ func (worker *copIteratorWorker) handleTaskOnce(bo *Backoffer, task *copTask, ch
NotFillCache: worker.req.NotFillCache,
HandleTime: true,
ScanDetail: true,
FollowerRead: worker.req.ReplicaRead.IsFollowerRead(),
})
startTime := time.Now()
resp, rpcCtx, err := sender.SendReqCtx(bo, req, task.region, ReadTimeoutMedium)
72 changes: 59 additions & 13 deletions store/tikv/region_cache.go
@@ -17,6 +17,7 @@ import (
"bytes"
"context"
"fmt"
"github.com/pingcap/tidb/kv"
tiancaiamao marked this conversation as resolved.
"sync"
"sync/atomic"
"time"
@@ -67,34 +68,60 @@
// RegionStore represents region stores info
// it will be store as unsafe.Pointer and be load at once
type RegionStore struct {
workStoreIdx int32 // point to current work peer in meta.Peers and work store in stores(same idx)
stores []*Store // stores in this region
workStoreIdx int32 // point to current work peer in meta.Peers and work store in stores(same idx)
stores []*Store // stores in this region
nextFollowerStore uint32 // point to current follower in followers store
followers []int32 // followers' index in this region
}

// clone clones region store struct.
func (r *RegionStore) clone() *RegionStore {
return &RegionStore{
workStoreIdx: r.workStoreIdx,
stores: r.stores,
workStoreIdx: r.workStoreIdx,
stores: r.stores,
nextFollowerStore: 0,
followers: r.followers,
}
}

// initFollowers rebuilds the region store's follower list, excluding the current work store
func (r *RegionStore) initFollowers() {
r.followers = make([]int32, 0, len(r.stores)-1)
for i := int32(0); i < int32(len(r.stores)); i++ {
if i != r.workStoreIdx {
r.followers = append(r.followers, i)
}
}
}

// return next follower store's index
func (r *RegionStore) nextFollower() int32 {
followers := r.followers
nextFollower := atomic.AddUint32(&r.nextFollowerStore, 1)
return followers[nextFollower%uint32(len(followers))]
}

// init initializes region after constructed.
func (r *Region) init(c *RegionCache) {
// region store pull used store from global store map
// to avoid acquire storeMu in later access.
rs := &RegionStore{
workStoreIdx: 0,
stores: make([]*Store, 0, len(r.meta.Peers)),
workStoreIdx: 0,
stores: make([]*Store, 0, len(r.meta.Peers)),
nextFollowerStore: 0,
followers: make([]int32, 0, len(r.meta.Peers)-1),
}
for _, p := range r.meta.Peers {
for i, p := range r.meta.Peers {
c.storeMu.RLock()
store, exists := c.storeMu.stores[p.StoreId]
c.storeMu.RUnlock()
if !exists {
store = c.getStoreByStoreID(p.StoreId)
}
rs.stores = append(rs.stores, store)
if i != 0 {
rs.followers = append(rs.followers, int32(i))
}
}
atomic.StorePointer(&r.store, unsafe.Pointer(rs))

@@ -248,7 +275,7 @@ func (c *RPCContext) String() string {

// GetRPCContext returns RPCContext for a region. If it returns nil, the region
// must be out of date and already dropped from cache.
func (c *RegionCache) GetRPCContext(bo *Backoffer, id RegionVerID) (*RPCContext, error) {
func (c *RegionCache) GetRPCContext(bo *Backoffer, id RegionVerID, replicaRead kv.ReplicaReadType) (*RPCContext, error) {
ts := time.Now().Unix()

cachedRegion := c.getCachedRegionWithRLock(id)
@@ -261,7 +288,15 @@ func (c *RegionCache) GetRPCContext(bo *Backoffer, id RegionVerID) (*RPCContext,
}

regionStore := cachedRegion.getStore()
store, peer, storeIdx := cachedRegion.WorkStorePeer(regionStore)
var store *Store
var peer *metapb.Peer
var storeIdx int
switch replicaRead {
case kv.ReplicaReadFollower:
store, peer, storeIdx = cachedRegion.FollowerStorePeer(regionStore)
default:
store, peer, storeIdx = cachedRegion.WorkStorePeer(regionStore)
}
addr, err := c.getStoreAddr(bo, cachedRegion, store, storeIdx)
if err != nil {
return nil, err
@@ -892,12 +927,21 @@ func (r *Region) GetLeaderStoreID() uint64 {
return r.meta.Peers[int(r.getStore().workStoreIdx)].StoreId
}

func (r *Region) getStorePeer(rs *RegionStore, pidx int32) (store *Store, peer *metapb.Peer, idx int) {
store = rs.stores[pidx]
peer = r.meta.Peers[pidx]
idx = int(pidx)
return
}

// WorkStorePeer returns current work store with work peer.
func (r *Region) WorkStorePeer(rs *RegionStore) (store *Store, peer *metapb.Peer, idx int) {
idx = int(rs.workStoreIdx)
store = rs.stores[rs.workStoreIdx]
peer = r.meta.Peers[rs.workStoreIdx]
return
return r.getStorePeer(rs, rs.workStoreIdx)
}

// FollowerStorePeer returns a follower store with follower peer.
func (r *Region) FollowerStorePeer(rs *RegionStore) (store *Store, peer *metapb.Peer, idx int) {
return r.getStorePeer(rs, rs.nextFollower())
}

// RegionVerID is a unique ID that can identify a Region at a specific version.
@@ -947,6 +991,7 @@ func (c *RegionCache) switchNextPeer(r *Region, currentPeerIdx int) {
nextIdx := (currentPeerIdx + 1) % len(regionStore.stores)
newRegionStore := regionStore.clone()
newRegionStore.workStoreIdx = int32(nextIdx)
newRegionStore.initFollowers()
Member: When switchNextPeer is called, workStoreIdx may point to a follower, so initFollowers does not do what it is supposed to do.

Contributor Author: Isn't it trying to predict that the next peer is going to be the new leader? If that's the case, workStoreIdx will actually become the new leader, which keeps non-replica reads working.

Member: When a request fails, we change workStoreIdx and then access the follower to wake up the hibernated region.
It's not always the leader.
If workStoreIdx points to a follower, it may be the only valid follower; since we avoid accessing it, all requests will be sent to the real leader.
It works, but the code is misleading.

Contributor Author: OK, I see. Looks like simply removing initFollowers here will work, am I right?

Contributor Author: Removed.

r.compareAndSwapStore(regionStore, newRegionStore)
}

@@ -973,6 +1018,7 @@
}
newRegionStore := oldRegionStore.clone()
newRegionStore.workStoreIdx = int32(leaderIdx)
newRegionStore.initFollowers()
if !r.compareAndSwapStore(oldRegionStore, newRegionStore) {
goto retry
}