Add staleness markers support #1526
So is this feature in development now?
It is not yet, but it has a high priority.
Support for Prometheus staleness markers has been added to VictoriaMetrics in the following commits: please give it a try before it is included in the next release of VictoriaMetrics. See build instructions for the single-node version and build instructions for the cluster version. If the data is ingested into VictoriaMetrics by vmagent instead of Prometheus, then the
…moveCounterResets functions: Prometheus staleness markers shouldn't be modified in removeCounterResets. Otherwise they are converted to ordinary NaN values, which cannot be removed by the dropStaleNaNs() function later. This may result in incorrect calculations in rollup functions. Updates #1526
This allows dropping staleness markers only once and then calculating multiple rollup functions on the result. Updates #1526
VictoriaMetrics and vmagent gained support for Prometheus staleness markers starting from v1.64.0. Closing the feature request as done.
… tracking is enabled for metrics from deleted / disappeared scrape targets: store the scraped response body instead of storing the parsed and relabeled metrics. This should reduce memory usage, since the response body takes less memory than the parsed and relabeled metrics. This is especially true for Kubernetes service discovery, which adds many long labels to all the scraped metrics. This should also reduce CPU usage, since marshaling of the parsed and relabeled metrics has been replaced by response body copying. Updates #1526
Is your feature request related to a problem? Please describe.
VictoriaMetrics and Prometheus detect staleness differently.
VictoriaMetrics calculates the staleness threshold from the interval between data point timestamps (or scrape intervals).
Prometheus staleness logic is the following:
VictoriaMetrics staleness detection behaves differently from the Prometheus implementation for the following reasons:
Because of these differences, query results for stale series may differ between Prometheus and VictoriaMetrics, leading to discrepancies when VictoriaMetrics is used as remote storage.
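The interval-based heuristic described above can be sketched as follows. The function name, the 2x factor, and the millisecond units are illustrative assumptions, not the actual VictoriaMetrics logic:

```go
package main

import "fmt"

// isStaleByInterval reports whether a series looks stale at queryTs under
// an interval-based heuristic: if the gap since the last sample exceeds a
// multiple of the typical scrape interval, the series is treated as stale.
// Contrast this with Prometheus, which instead writes an explicit staleness
// marker sample when a target or series disappears.
func isStaleByInterval(lastSampleTs, queryTs, scrapeIntervalMs int64) bool {
	return queryTs-lastSampleTs > 2*scrapeIntervalMs
}

func main() {
	// Last sample at t=100000ms, scrape interval 15000ms.
	fmt.Println(isStaleByInterval(100000, 110000, 15000)) // false: gap within 2 intervals
	fmt.Println(isStaleByInterval(100000, 140000, 15000)) // true: gap exceeds 2 intervals
}
```

The sketch shows why results can diverge: a marker-based approach declares a series stale at an exact timestamp, while an interval-based one depends on the observed sample spacing.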
Describe the solution you'd like
To improve compatibility with Prometheus ecosystem, VictoriaMetrics TSDB and vmagent should support staleness markers.
Describe alternatives you've considered
Manual query modifications to account for staleness and resets.