Bug
In calculate_start_timestamp_sql, get_duration() returns a duration in seconds (the raw end - start from the SQL parser), but the result is stored in a variable named scrape_intervals and then multiplied by prometheus_scrape_interval as if it were a count of intervals:
```rust
// simple_engine.rs:352-355
let scrape_intervals = match_result.query_data[0].time_info.clone().get_duration() as u64;
// scrape_intervals is actually a duration in seconds, NOT a count of intervals
end_timestamp - (scrape_intervals * self.prometheus_scrape_interval * 1000)
//               ^ seconds          ^ multiplied by seconds again → window is scrape_interval× too wide
```
With scrape_interval=15s and a 150s query window:
- Correct start: end - 150 * 1000 = end - 150_000ms
- Buggy start: end - 150 * 15 * 1000 = end - 2_250_000ms (15× too far back)
The same bug exists in build_spatiotemporal_context at line 1397.
All existing equivalence tests use scrape_interval=1, so 150 * 1 * 1000 = 150_000ms is coincidentally correct.
Fix
```rust
end_timestamp - (scrape_intervals * 1000)
```
Breaking test
Add to query_equivalence_tests.rs:
```rust
/// With scrape_interval=15s, a 150s SQL temporal query must produce a 150_000ms window.
/// Bug: start = end - (150 * 15 * 1000) = end - 2_250_000ms (15× too wide).
#[test]
fn test_bug_sql_start_timestamp_multiplied_by_scrape_interval() {
    let scrape_interval = 15u64;
    let window_seconds = 150u64;
    let sql_query =
        "SELECT SUM(value) FROM cpu_usage \
         WHERE time BETWEEN DATEADD(s, -150, '2025-10-01 00:00:00') AND '2025-10-01 00:00:00' \
         GROUP BY L1, L2, L3, L4";

    let (_, sql_config, streaming_config) = TestConfigBuilder::new("cpu_usage")
        .with_grouping_labels(vec!["L1", "L2", "L3", "L4"])
        .with_scrape_interval(scrape_interval)
        .add_temporal_query(
            "sum_over_time(cpu_usage[150s])",
            sql_query,
            1,
            window_seconds,
            "tumbling",
        )
        .build_both();

    let sql_engine = SimpleEngine::new(
        Arc::new(NoOpStore),
        sql_config,
        streaming_config,
        scrape_interval,
        QueryLanguage::sql,
    );

    // query_time_sec can be anything; the fixed-date SQL query ignores it
    let sql_context = sql_engine
        .build_query_execution_context_sql(sql_query.to_string(), 0.0)
        .expect("Failed to build SQL context");

    let start_ms = sql_context.store_plan.values_query.start_timestamp;
    let end_ms = sql_context.store_plan.values_query.end_timestamp;
    let actual_window_ms = end_ms - start_ms;
    let expected_window_ms = window_seconds * 1000;

    assert_eq!(
        actual_window_ms,
        expected_window_ms,
        "SQL window is {}ms but should be {}ms. \
         Bug: get_duration() (seconds) is multiplied by scrape_interval again.",
        actual_window_ms,
        expected_window_ms
    );
}
```