
Samples delivered out of order with remote write in Prometheus 2.6.0 #5080

Closed
tomwilkie opened this Issue Jan 8, 2019 · 9 comments

tomwilkie (Member) commented Jan 8, 2019

Just updated to Prometheus v2.6.0 and I'm getting a lot of out-of-order samples.

When I revert #4772 they go away...

gouthamve (Member) commented Jan 8, 2019

*2.6.0 :)

tomwilkie changed the title from "Samples delivered out of order with remote write in Prometheus 2.7.0" to "Samples delivered out of order with remote write in Prometheus 2.6.0" on Jan 8, 2019

tomwilkie (Member, Author) commented Jan 8, 2019

I've spent the last hour trying to figure this out and I'm completely stuck...

brian-brazil (Member) commented Jan 8, 2019

Now multiple enqueues can happen in parallel, whereas previously they were serialised?

tomwilkie (Member, Author) commented Jan 8, 2019

enqueue picks a queue (based on the fingerprint of the metric) and adds the sample to that queue, serialising requests.

It assumes that all the samples for a given timeseries are Appended, in order, by a single goroutine. I think that's true, since a single goroutine scrapes each target and blocks on calls to Append.

Each queue is serviced by a single goroutine, which guarantees samples on the queue are delivered in order.
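As a rough illustration of that ordering model (a minimal, standalone sketch, not the actual remote-write queue manager; the names `shardedQueue` and `enqueue` here are made up): samples are hashed by series to pick a shard, and each shard is drained by exactly one goroutine, so samples for any one series stay in order as long as they are enqueued from a single goroutine.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

// sample is a stand-in for a remote-write sample.
type sample struct {
	series    string // stand-in for the full label set
	timestamp int64
	value     float64
}

// shardedQueue routes samples to per-shard channels, each drained by one goroutine.
type shardedQueue struct {
	shards []chan sample
	wg     sync.WaitGroup
}

func newShardedQueue(n int, send func(sample)) *shardedQueue {
	q := &shardedQueue{shards: make([]chan sample, n)}
	for i := range q.shards {
		q.shards[i] = make(chan sample, 100)
		q.wg.Add(1)
		// One consumer goroutine per shard: samples on a shard are sent in
		// the order they were enqueued.
		go func(ch chan sample) {
			defer q.wg.Done()
			for s := range ch {
				send(s) // the real code batches samples; omitted for brevity
			}
		}(q.shards[i])
	}
	return q
}

// enqueue hashes the series to pick a shard, so every sample of a given
// series always lands on the same single-consumer queue.
func (q *shardedQueue) enqueue(s sample) {
	h := fnv.New64a()
	h.Write([]byte(s.series))
	q.shards[h.Sum64()%uint64(len(q.shards))] <- s
}

func (q *shardedQueue) close() {
	for _, ch := range q.shards {
		close(ch)
	}
	q.wg.Wait()
}

func main() {
	q := newShardedQueue(4, func(s sample) {
		fmt.Printf("send %s t=%d v=%v\n", s.series, s.timestamp, s.value)
	})
	// Appended from a single goroutine, so they arrive in timestamp order.
	for t := int64(1); t <= 3; t++ {
		q.enqueue(sample{series: `up{job="node"}`, timestamp: t, value: 1})
	}
	q.close()
}
```

Under that assumption, out-of-order delivery should only be possible if two goroutines enqueue samples for the same series.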

brian-brazil (Member) commented Jan 8, 2019

Yeah, the code looks okay. Might you have multiple targets ingesting the same time series? I'm wondering if #4926 might be related.

tomwilkie (Member, Author) commented Jan 8, 2019

Might you have multiple targets ingesting the same time series?

Don't know yet, my Prometheus is pretty unstable. Will get back to you.

jojohappy (Contributor) commented Jan 18, 2019

@tomwilkie Any update?

tomwilkie (Member, Author) commented Jan 18, 2019

Afraid not; it looks like we might have some out-of-order (OOO) samples pre-2.6 that I can't explain, and that the timing changes introduced by the locking changes made them more visible. But I'm not sure.

tomwilkie (Member, Author) commented Mar 4, 2019

It looks like this was caused by our config: multiple exporters in a single pod all got the same target labels, and all exposed the go_runtime_* metrics.
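For anyone hitting the same symptom, here is a hypothetical Kubernetes scrape config (not the actual config from this report) showing how that class of problem can arise: rewriting instance to the pod name removes the port that would normally distinguish the two exporter containers, so both targets end up with identical label sets for shared metrics such as go_goroutines, and their interleaved scrapes look like out-of-order samples to remote write.

```yaml
# Hypothetical example only. Rewriting "instance" to the pod name drops the
# port that normally distinguishes the two exporters in the same pod, so both
# targets emit identical label sets for shared metrics like go_goroutines.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: instance   # collapses the per-port/container distinction
```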

tomwilkie closed this on Mar 4, 2019
