fix races in etcdclient #801

Merged
merged 3 commits into openshift:master on May 10, 2022

Conversation

tjungblu
Contributor

@tjungblu tjungblu commented Apr 26, 2022

There is a bug in the client caching: the order in which the endpoints are returned is not deterministic. That causes the cached client to be constantly recreated from scratch, invalidating the old one (potentially while it is still in use).

The new client works in a pooling fashion, which handles the mutual exclusion of returning either an existing or a newly created client. Health checks and member updates are handled during retrieval, and so is closing. Once consumers are done, they simply return the client to the pool.

Some additional logging was added as well.
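
For illustration, here is a minimal sketch of the pooling approach described above. The type and helper names are assumptions that loosely follow identifiers visible later in this review (Get, Return, availableTickets, closeFunc); it is not the actual implementation in pkg/etcdcli.

package etcdcli // package name assumed for this sketch

import (
	"fmt"

	clientv3 "go.etcd.io/etcd/client/v3"
	"k8s.io/klog/v2"
)

// clientPool is an illustrative stand-in for the pool added in this PR: the
// pool channel bounds how many idle clients stay cached, while the tickets
// channel bounds how many clients may be open in total.
type clientPool struct {
	pool             chan *clientv3.Client // idle, ready-to-use clients
	availableTickets chan struct{}         // one ticket per client that may be open
	newFunc          func() (*clientv3.Client, error)
	closeFunc        func(*clientv3.Client) error
}

// Get hands out a cached client if one is available; otherwise it consumes a
// ticket and creates a new client. If no ticket is left, the caller gets an
// explicit error instead of etcd being flooded with fresh connections.
func (p *clientPool) Get() (*clientv3.Client, error) {
	select {
	case client := <-p.pool:
		// the real code also refreshes endpoints and runs a health check here
		return client, nil
	default:
	}

	select {
	case <-p.availableTickets:
		return p.newFunc()
	default:
		return nil, fmt.Errorf("too many active cache clients, rejecting to create new one")
	}
}

// Return puts the client back into the cache; if the cache is already full,
// the extra client is closed and its ticket is freed again.
func (p *clientPool) Return(client *clientv3.Client) {
	select {
	case p.pool <- client:
		// cached for the next consumer; its ticket stays consumed
	default:
		select {
		case p.availableTickets <- struct{}{}: // give the ticket back
		default:
		}
		if err := p.closeFunc(client); err != nil {
			klog.Errorf("failed to close extra etcd client which is not being re-added in the client pool: %v", err)
		}
	}
}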

@openshift-ci openshift-ci bot added the bugzilla/severity-medium (Referenced Bugzilla bug's severity is medium for the branch this PR is targeting.) and bugzilla/valid-bug (Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting.) labels Apr 26, 2022
@openshift-ci
Contributor

openshift-ci bot commented Apr 26, 2022

@tjungblu: This pull request references Bugzilla bug 2077833, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.11.0) matches configured target release for branch (4.11.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

Requesting review from QA contact:
/cc @geliu2016

In response to this:

Bug 2077833: add grpc logging

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@tjungblu
Contributor Author

/payload 4.11 ci informing

@openshift-ci
Contributor

openshift-ci bot commented Apr 26, 2022

@tjungblu: trigger 4 jobs of type informing for the ci release of OCP 4.11

  • periodic-ci-openshift-release-master-ci-4.11-e2e-gcp
  • periodic-ci-openshift-release-master-ci-4.11-e2e-gcp-upgrade
  • periodic-ci-openshift-release-master-ci-4.11-upgrade-from-stable-4.10-e2e-aws-upgrade
  • periodic-ci-openshift-release-master-ci-4.11-upgrade-from-stable-4.10-e2e-aws-ovn-upgrade

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/8d291500-c553-11ec-9922-b75bcdbb766d-0

@tjungblu tjungblu changed the title from "Bug 2077833: add grpc logging" to "Bug 2077833: stricter locking, more logging" Apr 26, 2022
@openshift-ci openshift-ci bot added the bugzilla/severity-low label (Referenced Bugzilla bug's severity is low for the branch this PR is targeting.) and removed the bugzilla/severity-medium label (Referenced Bugzilla bug's severity is medium for the branch this PR is targeting.) Apr 26, 2022
@openshift-ci
Contributor

openshift-ci bot commented Apr 26, 2022

@tjungblu: This pull request references Bugzilla bug 2077833, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.11.0) matches configured target release for branch (4.11.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

Requesting review from QA contact:
/cc @geliu2016

In response to this:

Bug 2077833: stricter locking, more logging


@tjungblu
Contributor Author

/retest-required

pkg/etcdcli/etcdcli.go (outdated review thread, resolved)
@tjungblu
Contributor Author

tjungblu commented Apr 26, 2022

getting some more data here, sorry for the additional cost :/

@openshift-ci
Contributor

openshift-ci bot commented Apr 26, 2022

@tjungblu: An error was encountered. No known errors were detected, please see the full error message for details.

Full error message. could not resolve jobs for 4.11 ci required: job type is not supported: required

Please contact an administrator to resolve this issue.

@tjungblu
Contributor Author

/payload 4.11 ci informing

@openshift-ci
Contributor

openshift-ci bot commented Apr 26, 2022

@tjungblu: trigger 4 jobs of type informing for the ci release of OCP 4.11

  • periodic-ci-openshift-release-master-ci-4.11-e2e-gcp
  • periodic-ci-openshift-release-master-ci-4.11-e2e-gcp-upgrade
  • periodic-ci-openshift-release-master-ci-4.11-upgrade-from-stable-4.10-e2e-aws-upgrade
  • periodic-ci-openshift-release-master-ci-4.11-upgrade-from-stable-4.10-e2e-aws-ovn-upgrade

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/85902f60-c574-11ec-9478-b916e1f81898-0

@tjungblu
Contributor Author

/retest-required

@geliu2016 geliu2016 left a comment

/lgtm

@openshift-ci openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged.) Apr 27, 2022
@geliu2016 geliu2016 left a comment

/lgtm

@tjungblu
Contributor Author

/payload 4.11 ci blocking

@openshift-ci
Contributor

openshift-ci bot commented Apr 27, 2022

@tjungblu: trigger 5 jobs of type blocking for the ci release of OCP 4.11

  • periodic-ci-openshift-release-master-ci-4.11-e2e-aws-ovn-upgrade
  • periodic-ci-openshift-release-master-ci-4.11-upgrade-from-stable-4.10-e2e-aws-ovn-upgrade
  • periodic-ci-openshift-release-master-ci-4.11-upgrade-from-stable-4.10-e2e-azure-upgrade
  • periodic-ci-openshift-release-master-ci-4.11-e2e-gcp-upgrade
  • periodic-ci-openshift-release-master-ci-4.11-e2e-aws-serial

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/9ce83b00-c5f4-11ec-8237-3b5624e6e5ea-0

Contributor

@hasbro17 hasbro17 left a comment

Makes sense, but just a question on the placement of the existing Lock() in getEtcdClient().

pkg/etcdcli/etcdcli.go (3 outdated review threads, resolved)
@openshift-ci
Contributor

openshift-ci bot commented Apr 28, 2022

@tjungblu: This pull request references Bugzilla bug 2077833, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.11.0) matches configured target release for branch (4.11.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

Requesting review from QA contact:
/cc @geliu2016

In response to this:

Bug 2077833: stricter locking, more logging


@openshift-ci openshift-ci bot requested review from geliu2016 and removed request for marun April 28, 2022 08:17
@tjungblu
Contributor Author

/retest-required
/payload 4.11 ci blocking

@openshift-ci
Contributor

openshift-ci bot commented Apr 29, 2022

@tjungblu: trigger 5 jobs of type blocking for the ci release of OCP 4.11

  • periodic-ci-openshift-release-master-ci-4.11-e2e-aws-ovn-upgrade
  • periodic-ci-openshift-release-master-ci-4.11-upgrade-from-stable-4.10-e2e-aws-ovn-upgrade
  • periodic-ci-openshift-release-master-ci-4.11-upgrade-from-stable-4.10-e2e-azure-upgrade
  • periodic-ci-openshift-release-master-ci-4.11-e2e-gcp-upgrade
  • periodic-ci-openshift-release-master-ci-4.11-e2e-aws-serial

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/779eb580-c7e9-11ec-9e9d-eee4fe11ed2b-0

@tjungblu
Contributor Author

tjungblu commented May 2, 2022

/retest-required

1 similar comment
@tjungblu
Contributor Author

tjungblu commented May 2, 2022

/retest-required

@dusk125
Contributor

dusk125 commented May 2, 2022

/lgtm

@openshift-ci openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged.) May 2, 2022
@tjungblu
Contributor Author

tjungblu commented May 3, 2022

/hold cancel

@openshift-ci openshift-ci bot removed the do-not-merge/hold label (Indicates that a PR should not merge because someone has issued a /hold command.) May 3, 2022
 	grpclog.SetLoggerV2(grpclog.NewLoggerV2(ioutil.Discard, ioutil.Discard, os.Stderr))
 	clientOpts, err := newClientOpts(opts...)
 	if err != nil {
-		return nil, err
+		return nil, fmt.Errorf("error during clientOpts: %w", err)
 	}
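
As a general aside on the %w wrapping shown in this hunk: the wrapped cause stays inspectable for callers via errors.Is / errors.As. A standalone illustration (newClientOpts here is a stand-in, not the operator's actual function):

package main

import (
	"context"
	"errors"
	"fmt"
)

// newClientOpts is a stand-in that pretends the option setup timed out.
func newClientOpts() error {
	return context.DeadlineExceeded
}

func main() {
	if err := newClientOpts(); err != nil {
		wrapped := fmt.Errorf("error during clientOpts: %w", err)
		fmt.Println(wrapped)                                      // error during clientOpts: context deadline exceeded
		fmt.Println(errors.Is(wrapped, context.DeadlineExceeded)) // true: the cause survives the wrapping
	}
}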

Good addition of clearer error messages throughout. Much appreciated!

Contributor Author

thanks @Elbehery!

@EmilyM1

EmilyM1 commented May 3, 2022

/lgtm

@openshift-ci openshift-ci bot removed the lgtm label (Indicates that a PR is ready to be merged.) May 4, 2022
Contributor

@hasbro17 hasbro17 left a comment

SGTM, just a couple nits and a question on what the tickets actually represent.

	returnTicket(p.availableTickets)
	err := p.closeFunc(client)
	if err != nil {
		klog.Errorf("could not close cache client exceeding pool capacity: %v", err)
Contributor

Slightly confusing at first read. I thought initially the error was because we had exceeded the client pool capacity.
A bit wordy but maybe something like this?

Suggested change
-	klog.Errorf("could not close cache client exceeding pool capacity: %v", err)
+	klog.Errorf("failed to close extra etcd client which is not being re-added in the client pool: %v", err)

Contributor Author

great suggestion, let me add this

	assert.Errorf(t, err, "too many active cache clients, rejecting to create new one")
	// returning one should unlock the get again
	poolRecorder.pool.Return(clients[0])
	client, err = poolRecorder.pool.Get()
Contributor

Missing error check.

Suggested change
-	client, err = poolRecorder.pool.Get()
+	client, err = poolRecorder.pool.Get()
+	require.NoError(t, err)

Contributor Author

done

	}
	assert.Equal(t, maxNumCachedClients, poolRecorder.numCloseCalls)

	// now we should be able to get the full amount of tickets again
Contributor

Suggested change
-	// now we should be able to get the full amount of tickets again
+	// now we should be able to get the full amount of clients again

I think you mean clients and not tickets here as the number of available tickets is just 5 at this point.
This threw me off initially as it made me think the tickets were returned once the client was as well.

So I expected the following unit test to succeed.

func TestAllTicketsAvailableForFullyCachedPool(t *testing.T) {
	integration.BeforeTestExternal(t)
	testServer := integration.NewClusterV3(t, &integration.ClusterConfig{Size: 3})
	defer testServer.Terminate(t)

	poolRecorder := newTestPool(testServer)
	var clients []*clientv3.Client
	for i := 0; i < maxNumCachedClients; i++ {
		client, err := poolRecorder.pool.Get()
		require.NoError(t, err)
		assert.NotNil(t, client)
		clients = append(clients, client)
	}

	// return all clients to the client pool and check that none are closed
	for _, client := range clients {
		poolRecorder.pool.Return(client)
	}
	assert.Equal(t, maxNumCachedClients, poolRecorder.numNewCalls)
	assert.Equal(t, maxNumCachedClients, poolRecorder.numEndpointCalls)
	assert.Equal(t, maxNumCachedClients, poolRecorder.numHealthCalls)
	assert.Equal(t, 0, poolRecorder.numCloseCalls)

	// The tickets should equal the max available (10) since all clients are ready to be taken from the cached pool
	assert.Equal(t, maxNumClientTickets, len(poolRecorder.pool.availableTickets))

	// Grab a cached client from the pool
	client, err := poolRecorder.pool.Get()
	require.NoError(t, err)
	assert.NotNil(t, client)
	assert.Equal(t, maxNumClientTickets-1, len(poolRecorder.pool.availableTickets))

	// Return it back and all tickets should be available again
	poolRecorder.pool.Return(client)
	assert.Equal(t, maxNumClientTickets, len(poolRecorder.pool.availableTickets))
}

But looking through the code it seems tickets used == number of clients currently open. So just wanted to confirm this.

Contributor Author

@tjungblu tjungblu May 10, 2022

indeed, there are two limits:

  • pool size, how many active clients we are keeping cached at max
  • tickets, how many clients we are handing out in general

The first ensures that we are not creating new clients all the time but keep them cached instead, because the TLS handshakes are expensive. It is still bounded by maxNumCachedClients, as too many IDLE connections to etcd are not helpful either.

The second ensures that on connectivity issues we do not additionally overwhelm etcd by creating even more connections. The behavior in a previous version, before the ticketing, was:

  • I have to get the members, can I get a cached client?
  • the pool is filled, let me grab one and try to health check it
  • none of the healthy clients is available, let's create a new one

The last bullet was basically creating hundreds of new clients whenever there was no connectivity for some period of time. The ticketing rate-limits this and ensures an explicit failure when it happens. Imagine going to a full amusement park: you have to wait for somebody to come out before you can get in.

But looking through the code it seems tickets used == number of clients currently open. So just wanted to confirm this.

yep, good summary. Shall we rename that ticketing aspect to numOpenClients / numActiveClients / numCreatedClients?
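
To make the ticket accounting concrete, here is a rough usage sketch against the hypothetical clientPool from the earlier sketch in this thread. The limits (5 cached clients, 10 tickets) are assumptions mirroring maxNumCachedClients and maxNumClientTickets as discussed above.

func ticketAccountingExample() {
	p := &clientPool{
		pool:             make(chan *clientv3.Client, 5), // maxNumCachedClients
		availableTickets: make(chan struct{}, 10),        // maxNumClientTickets
		newFunc:          func() (*clientv3.Client, error) { return &clientv3.Client{}, nil },
		closeFunc:        func(*clientv3.Client) error { return nil },
	}
	for i := 0; i < 10; i++ {
		p.availableTickets <- struct{}{} // all tickets available at the start
	}

	c, _ := p.Get() // pool is empty: one ticket is consumed and a new client is created (9 tickets left)
	p.Return(c)     // the client is cached, not closed, so its ticket stays consumed (still 9)
	c, _ = p.Get()  // the cached client is reused without taking another ticket (still 9)
	_ = c
}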

	}

	// replenish the maxNumCachedClients that were closed earlier
	assert.Equal(t, maxNumClientTickets+maxNumCachedClients, poolRecorder.numNewCalls)
Contributor

Can we add a check for the number of available tickets too just to make it explicit that available tickets are the capacity to create more clients.

Suggested change
-	assert.Equal(t, maxNumClientTickets+maxNumCachedClients, poolRecorder.numNewCalls)
+	assert.Equal(t, maxNumClientTickets+maxNumCachedClients, poolRecorder.numNewCalls)
+	assert.Equal(t, maxNumClientTickets-maxNumCachedClients, len(poolRecorder.pool.availableTickets))

Contributor Author

yep, in this case the tickets are zero as all clients are handed out:

	// no tickets are available anymore, as we have handed out all clients
	assert.Equal(t, 0, len(poolRecorder.pool.availableTickets))

@openshift-ci openshift-ci bot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) May 10, 2022
@hasbro17
Contributor

/lgtm

@openshift-ci openshift-ci bot added the lgtm label (Indicates that a PR is ready to be merged.) May 10, 2022
@openshift-ci
Contributor

openshift-ci bot commented May 10, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dusk125, EmilyM1, geliu2016, hasbro17, tjungblu

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-bot
Contributor

/retest-required

Please review the full test history for this PR and help us cut down flakes.

@openshift-ci
Contributor

openshift-ci bot commented May 10, 2022

@tjungblu: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: ci/prow/e2e-gcp-five-control-plane-replicas
Commit: 175c05f
Details: link
Required: false
Rerun command: /test e2e-gcp-five-control-plane-replicas

Full PR test history. Your PR dashboard.


@openshift-merge-robot openshift-merge-robot merged commit c9977ae into openshift:master May 10, 2022
@tjungblu tjungblu deleted the grpcverbose branch May 11, 2022 06:43
@tjungblu
Contributor Author

/cherry-pick 4.10

@openshift-cherrypick-robot

@tjungblu: cannot checkout 4.10: error checking out 4.10: exit status 1. output: error: pathspec '4.10' did not match any file(s) known to git

In response to this:

/cherry-pick 4.10


@tjungblu
Contributor Author

/cherry-pick release-4.10

@openshift-cherrypick-robot

@tjungblu: #801 failed to apply on top of branch "release-4.10":

Applying: fix races in etcdclient
Using index info to reconstruct a base tree...
M	go.mod
M	go.sum
A	pkg/cmd/readyz/readyz.go
M	pkg/etcdcli/etcdcli.go
M	pkg/etcdcli/health.go
A	pkg/operator/clustermemberremovalcontroller/clustermemberremovalcontroller.go
A	pkg/operator/clustermemberremovalcontroller/clustermemberremovalcontroller_test.go
M	pkg/operator/defragcontroller/defragcontroller.go
M	pkg/operator/defragcontroller/defragcontroller_test.go
M	pkg/operator/starter.go
M	pkg/testutils/testutils.go
M	vendor/modules.txt
Falling back to patching base and 3-way merge...
Auto-merging vendor/modules.txt
Auto-merging pkg/operator/starter.go
Auto-merging pkg/operator/defragcontroller/defragcontroller.go
CONFLICT (modify/delete): pkg/operator/clustermemberremovalcontroller/clustermemberremovalcontroller_test.go deleted in HEAD and modified in fix races in etcdclient. Version fix races in etcdclient of pkg/operator/clustermemberremovalcontroller/clustermemberremovalcontroller_test.go left in tree.
CONFLICT (modify/delete): pkg/operator/clustermemberremovalcontroller/clustermemberremovalcontroller.go deleted in HEAD and modified in fix races in etcdclient. Version fix races in etcdclient of pkg/operator/clustermemberremovalcontroller/clustermemberremovalcontroller.go left in tree.
Auto-merging pkg/etcdcli/health.go
CONFLICT (content): Merge conflict in pkg/etcdcli/health.go
Auto-merging pkg/etcdcli/etcdcli.go
CONFLICT (content): Merge conflict in pkg/etcdcli/etcdcli.go
CONFLICT (modify/delete): pkg/cmd/readyz/readyz.go deleted in HEAD and modified in fix races in etcdclient. Version fix races in etcdclient of pkg/cmd/readyz/readyz.go left in tree.
Auto-merging go.sum
Auto-merging go.mod
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 fix races in etcdclient
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

In response to this:

/cherry-pick release-4.10

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels
  • approved: Indicates a PR has been approved by an approver from all required OWNERS files.
  • cherry-pick-approved: Indicates a cherry-pick PR into a release branch has been approved by the release branch manager.
  • lgtm: Indicates that a PR is ready to be merged.
  • tide/merge-method-squash: Denotes a PR that should be squashed by tide when it merges.