Context
Follow-up bench(es) from PR #1228 review (coverage gaps #6 and #7). The Query API has a rich layering — query_connection, db()/db_at_t()/db_at(), DataSetDb, multi-ledger history queries — and none of the new benches in PR #1228 exercise the time-aware or multi-ledger paths. A regression in the time-travel materialization path or in DataSetDb's history range queries would slip through both query_hot_bsbm (single-ledger, head-only) and query_cold_reload (cold load, no time-travel).
Bundling these two coverage gaps into one issue (and likely one PR) because they share fixtures (a populated ledger with multiple commits at distinct t values), share API patterns (the at_t family of accessors), and share the "this is API-surface coverage rather than a single hot-path bench" character.
Scope
Add two scenarios under fluree-db-api/benches/ (single-file or two-file, author's call):
query_time_travel.rs — exercises fluree.graph_at(id, TimeSpec::T(...)) (or equivalent db_at_t) against a populated ledger with N committed transactions. Two scale-driven scenarios:
query_at_recent_t — query at t-1 (one txn back); should be fast.
query_at_old_t — query at t/2 (halfway back); should still be reasonable. Together with query_at_recent_t this establishes the latency curve, so a regression that makes time-travel materialization quadratic in history depth is caught.
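The cost shape these two scenarios are meant to bound can be sketched with a toy model. Everything below (ToyLedger, Delta, db_at_t as a full replay) is an illustrative stand-in, not the fluree-db-api surface: it just shows why querying at t-1 and at t/2 together trace a latency curve over history depth.

```rust
// Toy model: a ledger is a sequence of per-transaction deltas, and
// materializing the db "as of" t replays every delta with commit time <= t.
// If replay cost grows faster than linearly in history depth, the
// query_at_old_t scenario will expose it while query_at_recent_t stays flat.
use std::collections::BTreeMap;

/// Hypothetical stand-in for one committed transaction's assertions.
struct Delta {
    t: u64,
    facts: Vec<(String, i64)>, // (subject, value) pairs, purely illustrative
}

struct ToyLedger {
    deltas: Vec<Delta>, // sorted by t, one commit per t step
}

impl ToyLedger {
    /// Replay all deltas with commit time <= t into a flat view.
    fn db_at_t(&self, t: u64) -> BTreeMap<String, i64> {
        let mut view = BTreeMap::new();
        for d in self.deltas.iter().take_while(|d| d.t <= t) {
            for (s, v) in &d.facts {
                view.insert(s.clone(), *v); // last write wins
            }
        }
        view
    }
}

fn main() {
    // One commit per t step, mirroring "commit one txn per t step".
    let n: u64 = 100;
    let ledger = ToyLedger {
        deltas: (1..=n)
            .map(|t| Delta {
                t,
                facts: vec![(format!("person/{t}"), t as i64)],
            })
            .collect(),
    };

    let recent = ledger.db_at_t(n - 1); // query_at_recent_t: one txn back
    let old = ledger.db_at_t(n / 2); // query_at_old_t: halfway back
    assert_eq!(recent.len() as u64, n - 1);
    assert_eq!(old.len() as u64, n / 2);
    println!("recent={} old={}", recent.len(), old.len());
}
```

In a real bench the replay would be hidden behind graph_at / db_at_t; the point of the two fixed sample depths is that one data point cannot distinguish linear from quadratic replay, but two can.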
query_multi_ledger_history.rs — exercises DataSetDb history-range queries across two ledgers. Single scenario at first (just exercise the path); add scale dimensions if the budget allows.
Each:
- Use the chassis pattern: bench_runtime(), current_profile(), current_scale(), next_ledger_alias().
- Deterministic data via gen::people (commit one txn per t step).
- Register budgets in regression-budget.json for both crate/bench pairs.
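The multi-ledger history-range path can likewise be sketched with a toy model. The names here (ToyDataSet, ToyLedger, history_range) are assumptions for illustration, not the real DataSetDb API; the sketch only shows the shape of the path being exercised — merging time-ranged events from two ledgers into one interleaved result.

```rust
// Toy model of a DataSetDb-style history range query across two ledgers:
// each ledger keeps (t, event) pairs; a history query over [from, to]
// collects matching events from every ledger in the set and interleaves
// them by commit time.
struct ToyLedger {
    alias: String,
    history: Vec<(u64, String)>, // (commit t, event description)
}

struct ToyDataSet {
    ledgers: Vec<ToyLedger>,
}

impl ToyDataSet {
    /// All events with from <= t <= to, tagged with their ledger alias.
    fn history_range(&self, from: u64, to: u64) -> Vec<(String, u64, String)> {
        let mut out = Vec::new();
        for l in &self.ledgers {
            for (t, ev) in &l.history {
                if (from..=to).contains(t) {
                    out.push((l.alias.clone(), *t, ev.clone()));
                }
            }
        }
        out.sort_by_key(|(_, t, _)| *t); // interleave by commit time
        out
    }
}

fn main() {
    let ds = ToyDataSet {
        ledgers: vec![
            ToyLedger {
                alias: "ledger/a".into(),
                history: (1..=5).map(|t| (t, format!("a@{t}"))).collect(),
            },
            ToyLedger {
                alias: "ledger/b".into(),
                history: (1..=5).map(|t| (t, format!("b@{t}"))).collect(),
            },
        ],
    };
    let hits = ds.history_range(2, 4);
    assert_eq!(hits.len(), 6); // 3 t-values x 2 ledgers
    println!("{hits:?}");
}
```

A single scenario over two small ledgers, as scoped above, is enough to catch a breakage in this path; scale dimensions (more ledgers, deeper ranges) can come later.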
Acceptance
- --test green at tiny scale.
- regression-budget.json has entries for both.
- BENCHMARKING.md's table grows two rows (or one row if the author bundles both into a single bench file with two scenarios).
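The real schema of regression-budget.json isn't reproduced here; purely as an illustration, assuming entries keyed by crate/bench pair with per-scenario ceilings (all names and numbers below are placeholders), the two registrations might look like:

```json
{
  "fluree-db-api::query_time_travel": {
    "query_at_recent_t": { "max_ms": 50 },
    "query_at_old_t": { "max_ms": 200 }
  },
  "fluree-db-api::query_multi_ledger_history": {
    "history_range_two_ledgers": { "max_ms": 100 }
  }
}
```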
References
- query_hot_bsbm.rs (head-only single-ledger), query_cold_reload.rs (cold load, no time-travel).
- docs/concepts/time-travel.md, docs/concepts/datasets-and-named-graphs.md.
Out of scope
merge, rebase, and branching benches — those are user-facing operations rather than hot-path query reads. If a future regression hunt cares about merge perf, that's a separate issue.