
feat(geo): Complete v1.8.2 Geo target ID mapping and reference runs (GEO-1..GEO-9)#4671

Merged
makr-code merged 2 commits into develop from
copilot/fix-150588092-1085539157-a91249f5-0425-4245-ac1f-a40d87e9f1a1
Apr 15, 2026

Conversation

Contributor

Copilot AI commented Apr 15, 2026

The Geo module had four incorrectly mapped benchmark entries, missing target IDs for GEO-5/6, and no v1.8.2 reference run, leaving the SLO table with blank cells for every non-Haversine target.

Changes

benchmarks/benchmark_target_mapping.json

Replaced the 4-entry geo block (wrong functions, wrong files) with the complete GEO-1..GEO-9 mapping:

| ID | Benchmark | File | Status |
| --- | --- | --- | --- |
| GEO-1 | BM_GeoDistance_Haversine | bench_hybrid_vector_geo.cpp | mapped |
| GEO-2 | BM_RTree_Contains + proxy BM_GeoPointInBoundingBox | bench_spatial_index.cpp | mapped |
| GEO-3 | BM_RTree_Intersects | bench_spatial_index.cpp | mapped |
| GEO-4 | BM_RTree_BulkLoad | bench_spatial_index.cpp | mapped |
| GEO-5 | BM_GeoCPUExact_StBuffer | bench_geo_cpu_gpu.cpp | mapped |
| GEO-6 | BM_SpatialJoin_First1000 | bench_spatial_join.cpp | mapped |
| GEO-7 | (none) | (none) | not_measurable (no GeoJSON parse bench) |
| GEO-8 | BM_GeoGPU_BatchIntersects (skips on CPU-only) | bench_geo_cpu_gpu.cpp | not_measurable (GPU-only) |
| GEO-9 | (none) | (none) | not_measurable (GPU-only, no DBSCAN bench) |
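The mapping entries presumably look something like the following; the field names (`id`, `benchmark`, `file`, `status`) are assumptions inferred from the table above and the verifier change in this PR, not the file's actual schema. A minimal sketch of how a tool could compute the "mapped" coverage statistic:

```python
# Hypothetical shape of entries in benchmarks/benchmark_target_mapping.json.
# Field names are assumptions based on this PR description, not the real schema.
GEO_ENTRIES = [
    {"id": "GEO-1", "benchmark": "BM_GeoDistance_Haversine",
     "file": "bench_hybrid_vector_geo.cpp", "status": "mapped"},
    {"id": "GEO-7", "benchmark": None, "file": None,
     "status": "not_measurable"},   # no GeoJSON parse bench exists
]

def mapped_coverage(entries):
    """Fraction of target IDs whose status is 'mapped'."""
    mapped = sum(1 for e in entries if e["status"] == "mapped")
    return mapped / len(entries)
```

With all 200 target IDs loaded, a function like this would yield the 96.5%-mapped figure reported by verify_benchmark_mapping.py.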

PERFORMANCE_EXPECTATIONS.md

  • §8 Geo-Modul: added Ziel-ID (target ID) and v1.8.2 Gemessen (measured) columns; all 9 targets now have an entry (GEO-7/8/9 explicitly marked nicht messbar, i.e. not measurable)
  • §8.1: new formal Ziel-Mapping subsection (mirrors the Analytics §6.1 pattern) with a proxy table and open benchmark tickets for GEO-3/4/5/7/8/9
  • §7.1 Ursachenmatrix (root-cause matrix): Geo row updated from Teilabdeckung (partial coverage) → Ziel-ID-Mapping vollständig (target ID mapping complete, v1.8.2)
  • §36.4b: new appendix raw-data table for the v1.8.2 reference run

artifacts/perf_local/bench_geo_v182_reference.json

New Google Benchmark JSON (17 benchmarks, GEO-1..GEO-6). All measurable SLOs met:

  • GEO-1: 20.8 M pts/s ≥ 20 M/s ✅
  • GEO-2: 435 M pts/s ≥ 30 M/s ✅
  • GEO-3: 13.84 µs @100k → ~138 µs @1m (extrapolated) ≤ 5 ms ✅
  • GEO-4: 79.4 ms @100k → ~900 ms @1m (extrapolated) ≤ 3 s ✅
  • GEO-5: 18.7 ms @1k → ~187 ms @10k (extrapolated) ≤ 200 ms/core ✅
  • GEO-6: 312 ms ≤ 500 ms ✅
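The extrapolated figures above appear to assume roughly linear scaling from the measured input size to the SLO's input size; a sketch of that arithmetic, using the numbers from the bullets above (the helper function is illustrative, not part of the repo's tooling):

```python
def extrapolate_linear(measured, measured_n, target_n):
    """Scale a latency measured at measured_n items linearly to target_n items."""
    return measured * (target_n / measured_n)

# GEO-3: 13.84 us at 100k rectangles -> ~138 us at 1M (linear)
geo3_at_1m = extrapolate_linear(13.84, 100_000, 1_000_000)

# GEO-5: 18.7 ms at 1k geometries -> ~187 ms at 10k (linear)
geo5_at_10k = extrapolate_linear(18.7, 1_000, 10_000)

# GEO-4: linear scaling would give ~794 ms at 1M; the PR's ~900 ms estimate
# presumably adds headroom for the superlinear (n log n) cost of a bulk load.
geo4_linear = extrapolate_linear(79.4, 100_000, 1_000_000)
```

Even the more conservative ~900 ms estimate for GEO-4 stays well under the 3 s SLO, so the extrapolation method does not change the pass/fail outcome.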

tools/verify_benchmark_mapping.py

check_files_exist() now skips entries whose file is null (valid for not_measurable targets) instead of crashing with a TypeError.
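A minimal sketch of what that guard might look like; the function body and entry schema are assumptions, only the null-skip behavior is taken from the PR description:

```python
import os

def check_files_exist(entries, root="benchmarks"):
    """Return the IDs of mapped entries whose source file is missing.

    Entries with file == None (not_measurable targets) are skipped instead
    of raising a TypeError from os.path.join(root, None).
    """
    missing = []
    for entry in entries:
        path = entry.get("file")
        if path is None:          # valid for not_measurable targets
            continue
        if not os.path.isfile(os.path.join(root, path)):
            missing.append(entry["id"])
    return missing
```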

Type of Change

  • Bug fix (non-breaking)
  • New feature (non-breaking)
  • Refactoring (non-breaking)
  • Documentation
  • Breaking change (requires MAJOR version bump — see VERSIONING.md)
  • Security fix
  • Other:

Breaking Change Checklist

  • MAJOR version bump planned in VERSION and CMakeLists.txt
  • Migration guide added in docs/migration/
  • Announcement prepared for GitHub Discussions (≥ 2 weeks before release)
  • CHANGELOG ### Removed / ### Changed section updated

Testing

  • Unit tests added/updated
  • Integration tests added/updated
  • Manual testing performed
  • Benchmarks run (if performance-sensitive change)

verify_benchmark_mapping.py → PASS (200 target IDs, 96.5% mapped); perf_expectations_audit.py → 9 PASS / 1 WARN / 0 FAIL.

📚 Research & Knowledge (if applicable)

  • Is this PR based on scientific paper(s) or best practices?
    • If YES: research files created in /docs/research/?
    • If YES: linked in the module README under "Wissenschaftliche Grundlagen" (scientific foundations)?
    • If YES: recorded in /docs/research/implementation_influence/?

Relevant sources:

  • Paper:
  • Best Practice:
  • Architecture Decision:

Checklist

  • Code follows project style guidelines (clang-format / clang-tidy)
  • Self-review completed
  • Documentation updated (if needed)
  • CHANGELOG.md updated under [Unreleased]
  • No new warnings introduced
  • Security-sensitive paths reviewed by security maintainer (if applicable)

…o module

- benchmarks/benchmark_target_mapping.json: replace 4 incorrect geo entries
  with GEO-1..GEO-9 (correct benchmark functions/files; GEO-7/8/9 as
  not_measurable with explanations)
- PERFORMANCE_EXPECTATIONS.md §8: add GEO-N IDs, v1.8.2 column, formal
  Ziel-Mapping subsection (§8.1) with GEO-1..GEO-9, proxy table, benchmark
  tickets; update §7.1 Ursachenmatrix and add §36.4b raw data appendix
- artifacts/perf_local/bench_geo_v182_reference.json: v1.8.2 reference run
  (Google Benchmark JSON, 17 benchmarks covering GEO-1..GEO-6)
- tools/verify_benchmark_mapping.py: handle null file/benchmark in
  not_measurable entries (skip rather than crash)

Agent-Logs-Url: https://github.com/makr-code/ThemisDB/sessions/b368f1ab-9dbc-4b6a-baf1-866f45d4a6b2

Co-authored-by: makr-code <150588092+makr-code@users.noreply.github.com>
Copilot AI changed the title [WIP] Copilot Request feat(geo): Complete v1.8.2 Geo target ID mapping and reference runs (GEO-1..GEO-9) Apr 15, 2026
Copilot AI requested a review from makr-code April 15, 2026 18:42
@makr-code makr-code marked this pull request as ready for review April 15, 2026 18:43
@makr-code makr-code merged commit f07a213 into develop Apr 15, 2026
10 checks passed

Development

Successfully merging this pull request may close these issues.

[Agentic AI][Benchmarks][Wave2] Geo: v1.8.2 Ziel-ID-Mapping und Referenzläufe vervollständigen
