
Distributor: start remote timeout on first callback #6972

Merged
colega merged 3 commits into main from start-remote-timeout-when-sending-push-requests on Dec 21, 2023

Conversation

Contributor
@colega commented Dec 20, 2023

What this PR does

It may take a while to calculate which instance gets each one of the series (see grafana/dskit#454).

It isn't fair (and is useless) to reach the callback with no TTL left.

This changes the code to start the timeout once the first callback is called.
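As an illustration of the idea, here is a minimal, self-contained sketch (illustrative code only, not the actual distributor implementation; the names and the 2-second timeout are made up): the remote-timeout context is armed by the first callback, so the slow preparation work that runs before the callbacks no longer consumes the timeout.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

func main() {
	parent := context.Background()

	var (
		once      sync.Once
		remoteCtx context.Context
		cancel    context.CancelFunc = func() {}
	)
	// startRemoteTimeout arms the remote timeout on its first call and
	// returns the same timed context to every later caller.
	startRemoteTimeout := func() context.Context {
		once.Do(func() {
			remoteCtx, cancel = context.WithTimeout(parent, 2*time.Second)
		})
		return remoteCtx
	}
	defer func() { cancel() }()

	// Slow preparation (e.g. deciding which instance gets each series)
	// happens before any callback and no longer eats into the timeout.
	time.Sleep(500 * time.Millisecond)

	for i := 0; i < 3; i++ {
		ctx := startRemoteTimeout() // the first call starts the clock
		deadline, _ := ctx.Deadline()
		fmt.Printf("callback %d: %s left\n", i, time.Until(deadline).Round(10*time.Millisecond))
	}
}
```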

Which issue(s) this PR fixes or relates to

None.

Checklist

  • Tests updated.
  • Documentation added.
  • CHANGELOG.md updated - the order of entries should be [CHANGE], [FEATURE], [ENHANCEMENT], [BUGFIX].
  • about-versioning.md updated with experimental features.

@colega marked this pull request as ready for review December 20, 2023 10:26
@colega requested a review from a team as a code owner December 20, 2023 10:26
Comment on lines 1311 to 1315
localCtx := context.WithoutCancel(ctx)
var cancelPushContext context.CancelFunc = func() {}
startRemoteTimeout := sync.OnceFunc(func() {
localCtx, cancelPushContext = context.WithTimeout(localCtx, d.cfg.RemoteTimeout)
})
Contributor

isn't there a race between swapping out the context in the first invocation and using a context in the next?

Contributor

maybe if you use sync.OnceValue() you can get around that

Contributor

Maybe split ring.DoBatchWithOptions into the slow init part and the make-calls part, so you don't need to change the variable in the callback?

Contributor Author

> isn't there a race between swapping out the context in the first invocation and using a context in the next?

All calls will wait for the first func to finish. See the comment on Once.Do:

func (o *Once) Do(f func()) {
	// Note: Here is an incorrect implementation of Do:
	//
	//	if atomic.CompareAndSwapUint32(&o.done, 0, 1) {
	//		f()
	//	}
	//
	// Do guarantees that when it returns, f has finished.

Contributor Author

I've refactored using sync.OnceValues to make it more obvious.

I had to move the fast slab pool to the parent context, which is passed to ring.DoBatch, but that shouldn't be an issue: the key is preserved, and as I understand it, the pool doesn't depend on context cancellation, it only attaches itself to the context.
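Roughly, the sync.OnceValues shape looks like the following sketch (assuming Go 1.21+; illustrative names and a made-up 2-second timeout, not the exact Mimir diff): every caller gets back the same context and cancel function, and the timed context is created only on the first call.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

func main() {
	parent := context.Background()

	// sync.OnceValues runs the inner function once and caches both results,
	// so every call returns the same context and cancel function.
	remoteRequestContext := sync.OnceValues(func() (context.Context, context.CancelFunc) {
		// Detach from the caller's cancellation but keep its values,
		// then arm the remote timeout.
		return context.WithTimeout(context.WithoutCancel(parent), 2*time.Second)
	})
	defer func() {
		// Release the timed context once everything is done.
		_, cancel := remoteRequestContext()
		cancel()
	}()

	for i := 0; i < 3; i++ {
		ctx, _ := remoteRequestContext() // same context on every call
		deadline, _ := ctx.Deadline()
		fmt.Printf("callback %d: deadline %s\n", i, deadline.Format(time.StampMilli))
	}
}
```

Compared to the OnceFunc version, both values come from a single once-wrapped function, which avoids reassigning captured variables inside the callback.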

I would love to refactor this entire method further, extracting smaller pieces of code into separate functions, and reordering, but I'd rather do that in a separate PR.

Contributor Author
@colega commented Dec 21, 2023

Follow-up PR with the refactor: #6978

Signed-off-by: Oleg Zaytsev <mail@olegzaytsev.com>
Contributor
@dimitarvdimitrov left a comment

LGTM

@colega merged commit 324843d into main Dec 21, 2023
28 checks passed
@colega deleted the start-remote-timeout-when-sending-push-requests branch December 21, 2023 09:08
@@ -12,6 +12,7 @@
* [ENHANCEMENT] Query-Frontend and Query-Scheduler: split tenant query request queues by query component with `query-frontend.additional-query-queue-dimensions-enabled` and `query-scheduler.additional-query-queue-dimensions-enabled`. #6772
* [ENHANCEMENT] Store-gateway: include more information about lazy index-header loading in traces. #6922
* [ENHANCEMENT] Distributor: support disabling metric relabel rules per-tenant via the flag `-distributor.metric-relabeling-enabled` or associated YAML. #6970
* [ENHANCEMENT] Distributor: `-distributor.remote-timeout` is now accounted from the first ingester push request being sent. #6972
Member

Can a user notice any difference in practice?

Contributor Author

If the number of series is really high, the preceding preparation code could consume the timeout before we even start sending network requests.

Maybe we could list this as a bugfix? 🤷

Member

Thanks for the explanation. Bugfix or enhancement both make sense to me.

I wonder if we should have a timeout for the entire push request (including processing; this will become especially important for OTLP requests), although in fact there is one already: -server.http-write-timeout ("write" is unrelated to "push" in this case).
