Persist autodiscovery telemetry to Postgres #72
Merged
jbarnes850 merged 2 commits into main on Oct 22, 2025
Conversation
- add `discovery_runs` table and CLI helper to persist `atlas env init`/`atlas run` telemetry when `STORAGE__DATABASE_URL` is set
- populate exporter events with `event_type` metadata for downstream learning harnesses
- cover persistence/export paths with focused tests

Tests: `pytest tests/test_env_discovery.py tests/unit/test_database.py tests/unit/export/test_jsonl.py`
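The commit message gates persistence on `STORAGE__DATABASE_URL`. A minimal sketch of that gate, assuming the double-underscore-nested settings key named above; the helper itself and its return convention are hypothetical:

```python
import os


def database_url_configured(env=None):
    """Return the configured database URL, or None when telemetry
    persistence should be skipped.

    STORAGE__DATABASE_URL is the settings key named in this PR; this
    helper is an illustrative guess, not the SDK's actual API.
    """
    env = dict(os.environ) if env is None else env
    url = env.get("STORAGE__DATABASE_URL", "").strip()
    return url or None


# Persistence runs only when the URL is set:
assert database_url_configured({}) is None
assert database_url_configured(
    {"STORAGE__DATABASE_URL": "postgresql://localhost/atlas"}
) == "postgresql://localhost/atlas"
```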
Pull Request Overview
This PR adds persistence of autodiscovery telemetry to Postgres by creating a discovery_runs table and integrating persistence helpers into the CLI workflow. The changes enable recording of atlas env init telemetry and runtime executions when Postgres is configured, while also normalizing trajectory event exports with standardized event_type and actor fields.
Key Changes:
- Added `discovery_runs` database table and `log_discovery_run` method for storing telemetry
- Created CLI persistence helper that records discovery runs when database URL is configured
- Normalized trajectory event export format with `event_type` and `actor` fields
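The table and method named above could look roughly like this. The column set is a guess, and SQLite stands in for Postgres only so the sketch runs without a server; the real schema lives in `atlas/runtime/storage/schema.sql`:

```python
import json
import sqlite3

# Hypothetical columns for the discovery_runs table; the actual
# Postgres schema in schema.sql may differ.
SCHEMA = """
CREATE TABLE IF NOT EXISTS discovery_runs (
    id INTEGER PRIMARY KEY,
    command TEXT NOT NULL,
    payload TEXT NOT NULL,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
)
"""


def log_discovery_run(conn, command, payload):
    """Persist one discovery/runtime telemetry record, return its row id."""
    cur = conn.execute(
        "INSERT INTO discovery_runs (command, payload) VALUES (?, ?)",
        (command, json.dumps(payload)),
    )
    conn.commit()
    return cur.lastrowid


conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
row_id = log_discovery_run(conn, "atlas env init", {"envs_discovered": 3})
assert row_id == 1
```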
Reviewed Changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| atlas/runtime/storage/schema.sql | Added discovery_runs table schema to store telemetry data |
| atlas/runtime/storage/database.py | Implemented log_discovery_run method for persisting discovery runs |
| atlas/cli/persistence.py | Created new helper module for best-effort persistence of discovery telemetry |
| atlas/cli/env.py | Integrated persistence helper into env init command |
| atlas/cli/utils.py | Added persistence of runtime execution telemetry |
| atlas/cli/jsonl_writer.py | Normalized trajectory event format with event_type and actor fields |
| tests/unit/test_database.py | Added test coverage for discovery run logging |
| tests/test_env_discovery.py | Added test for persistence integration in env init |
| tests/unit/export/test_jsonl.py | Updated tests for normalized trajectory event format |
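The file summary describes `atlas/cli/persistence.py` as a best-effort helper. A minimal sketch of that pattern, assuming a `write_fn` callable standing in for the real database call (the signature is a guess based on the summary, not the module's actual API):

```python
import logging

logger = logging.getLogger("atlas.cli.persistence")


def persist_discovery_run(write_fn, payload):
    """Best-effort write: a telemetry failure must never break the CLI.

    write_fn stands in for the real database call.
    """
    try:
        write_fn(payload)
        return True
    except Exception:
        # Deliberately broad: log and continue so the command still succeeds.
        logger.warning("discovery telemetry not persisted", exc_info=True)
        return False


# A failing writer is tolerated rather than propagated:
def broken_writer(payload):
    raise ConnectionError("postgres unreachable")


assert persist_discovery_run(broken_writer, {"command": "atlas run"}) is False
```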
Introduces a new learning evaluation workflow that does not require experience hints. Adds `atlas/evaluation/learning_report.py` for generating learning summaries, `scripts/eval_learning.py` for report generation, and updates documentation to describe the workflow. Extends database access methods for learning keys, sessions, and discovery runs. Updates JSONL exporter and session insights to include `execution_mode`. Adds comprehensive unit tests for new functionality.
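The exporter normalization described here (standardized `event_type` and `actor` fields, plus `execution_mode`) might be sketched as follows; the fallback rules and defaults are illustrative assumptions, not the actual `jsonl_writer` logic:

```python
import json


def normalize_event(raw, execution_mode="unknown"):
    """Return a trajectory event with standardized event_type/actor fields.

    Field names come from the PR description; the fallback rules and the
    execution_mode default are assumptions.
    """
    event = {k: v for k, v in raw.items() if k not in {"type", "event_type", "actor"}}
    event["event_type"] = raw.get("event_type") or raw.get("type") or "unknown"
    event["actor"] = raw.get("actor", "system")
    event.setdefault("execution_mode", execution_mode)
    return event


events = [
    {"type": "tool_call", "tool": "search"},
    {"event_type": "reward", "actor": "judge", "value": 0.8},
]
# One normalized JSON object per line, as in a JSONL export.
jsonl = "\n".join(json.dumps(normalize_event(e, "adaptive"), sort_keys=True) for e in events)
```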
This pull request introduces a new hint-less learning evaluation workflow to the Atlas SDK, adds persistent telemetry for discovery and runtime runs, and provides utilities for generating learning evaluation reports from stored telemetry. The changes improve how learning sessions and discovery runs are tracked, stored, and analyzed, enabling richer insights and easier reporting. The most important changes are as follows:
Learning Evaluation Workflow & Documentation
Added `scripts/eval_learning.py` to generate JSON and Markdown summaries before experience hints are available. The documentation (README.md) now references the new workflow and its guide (docs/learning_eval.md).

Telemetry Persistence

Added `persist_discovery_run` in `atlas/cli/persistence.py` to log discovery and runtime telemetry into Postgres, making it possible to track and analyze runs over time. This is now integrated into both the discovery and runtime CLI flows (`atlas/cli/env.py`, `atlas/cli/utils.py`). Added a `log_discovery_run` method to `atlas/runtime/storage/database.py` to support writing discovery run data to the database.

Session & Event Metadata Improvements

Session payloads and event records now include `execution_mode` and other metadata, ensuring more accurate tracking of adaptive modes and review statuses for each session.

Learning Evaluation Utilities

Added `atlas/evaluation/learning_report.py` with utilities to build learning evaluation reports from database telemetry, including session snapshots, reward statistics, adaptive mode distribution, review status counts, and references to discovery runs. Includes Markdown and dict summary generation for reporting.

Database API Enhancements

Extended database access methods for learning keys, sessions, and discovery runs.
These changes make it easier to persist, retrieve, and analyze agent learning data, laying the foundation for advanced evaluation and continuous improvement workflows.
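As a rough illustration of the report utilities described above, the aggregation might look like this. The record shapes and key names are assumptions, not the actual `atlas/evaluation/learning_report.py` API:

```python
from collections import Counter
from statistics import mean


def build_learning_report(sessions):
    """Aggregate stored session telemetry into a report dict.

    Keys mirror the categories named in the PR (reward statistics,
    adaptive mode distribution, review status counts); exact shapes
    are assumptions.
    """
    rewards = [s["reward"] for s in sessions if "reward" in s]
    return {
        "sessions": len(sessions),
        "reward_mean": mean(rewards) if rewards else None,
        "adaptive_modes": dict(Counter(s.get("execution_mode", "unknown") for s in sessions)),
        "review_statuses": dict(Counter(s.get("review_status", "pending") for s in sessions)),
    }


def to_markdown(report):
    """Render the report dict as a small Markdown summary."""
    lines = [
        "# Learning Report",
        f"- sessions: {report['sessions']}",
        f"- mean reward: {report['reward_mean']}",
    ]
    for mode, n in report["adaptive_modes"].items():
        lines.append(f"- mode {mode}: {n}")
    return "\n".join(lines)


report = build_learning_report([
    {"reward": 0.8, "execution_mode": "adaptive", "review_status": "approved"},
    {"reward": 0.6, "execution_mode": "direct"},
])
print(to_markdown(report))
```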