VictoriaMetrics takes into account the previous point before the range window in square brackets when implementing all the PromQL functions that accept range vectors, including rate, while Prometheus ignores the point just before the range window. This resolves several long-standing Prometheus issues.
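As an illustration with hypothetical samples (ignoring Prometheus's extrapolation to the window boundaries): suppose a counter is scraped every 15s and rate(...[1m]) is evaluated at t=60s.

  samples (t, v): (0s, 100), (15s, 110), (30s, 111), (45s, 112), (60s, 113)
  Prometheus:      (113 - 110) / 45s ≈ 0.067/s   (the t=0s sample falls outside the window and is ignored)
  VictoriaMetrics: (113 - 100) / 60s ≈ 0.217/s   (the point just before the window is included)

The jump from 100 to 110 lands just before the window, so Prometheus misses it while VictoriaMetrics catches it.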
This may result in slightly different graphs between Prometheus and VictoriaMetrics, as in the screenshots. Try increasing the range window in square brackets: for example, use [10m] instead of [1m]. This should reduce the discrepancy between the Prometheus and VictoriaMetrics graphs.
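Applied to the query from this issue, that would be, for example:

rate(process_cpu_seconds_total{job="$instance-kafka1"}[10m])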
Closing this issue as resolved. Feel free to open a new one if significant discrepancies between VictoriaMetrics and Prometheus graphs are detected.
Describe the bug
Hi,
I'm evaluating VictoriaMetrics as long-term storage; my goal is to get rid of Prometheus federation. I have noticed different behavior when the same query is run against Prometheus and VictoriaMetrics:
rate(process_cpu_seconds_total{job="$instance-kafka1"}[1m])
[screenshots: VictoriaMetrics graph vs. Prometheus graph for this query]
The stored time series seem to contain the same data:
process_cpu_seconds_total{job="$instance-kafka1"}
[screenshots: VictoriaMetrics graph vs. Prometheus graph of the raw series]
Configuration on Prometheus side:
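A minimal sketch of the relevant part, assuming the usual remote_write setup from Prometheus to single-node VictoriaMetrics (the hostname and URL below are placeholders, not the actual configuration):

remote_write:
  - url: http://victoriametrics:8428/api/v1/write   # hypothetical endpoint; single-node VictoriaMetrics listens on 8428 by default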
Expected behavior
I expect to see the same graph when querying the two different backends from Grafana: Prometheus and VictoriaMetrics should produce the same trends, but as you can see in the graphs, this is not the case.
Version