Conversation
…ge and implement token provider for authentication
|
Warning: Rate limit exceeded
Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 7 minutes and 31 seconds.
⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered again. We recommend spacing out your commits to avoid hitting the rate limit.
🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Paid plans have higher rate limits than the trial, open-source, and free plans; in all cases, further reviews are re-allowed after a brief timeout. Please see our FAQ for further information.
ℹ️ Review info
⚙️ Run configuration — Configuration used: Path: .coderabbit.yaml · Review profile: CHILL · Plan: Pro · Run ID:
📒 Files selected for processing (1)
📝 Walkthrough
Refactors trace fetching to require a runtime …
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Main as evaluation-job/main
    participant Runner as BaseRunner
    participant Fetcher as TraceFetcher
    participant TokenMgr as OAuth2TokenManager
    participant API as Trace API / Remote
    Main->>Runner: construct runner (trace_fetcher=Fetcher(token_provider=TokenMgr.get_token))
    Runner->>Fetcher: request traces
    Fetcher->>TokenMgr: token = token_provider()
    TokenMgr-->>Fetcher: returns token
    Fetcher->>API: HTTP request (Authorization: Bearer token)
    API-->>Fetcher: trace payload
    Fetcher-->>Runner: return traces
    Runner-->>Main: run proceeds with traces
```
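The flow in the diagram can be sketched in Python. The class and callable names (`TraceFetcher`, `OAuth2TokenManager.get_token`) come from the diagram itself; the HTTP details below are illustrative assumptions, not the repository's actual implementation:

```python
import json
import urllib.request
from typing import Callable, Optional


class TraceFetcher:
    """Illustrative fetcher wired as in the diagram: a token_provider
    callable (e.g. OAuth2TokenManager.get_token) is resolved per request."""

    def __init__(self, base_url: str, token_provider: Callable[[], str]):
        self.base_url = base_url.rstrip("/")
        self.token_provider = token_provider

    def _get_auth_headers(self) -> dict:
        # Calling the provider on every request picks up refreshed tokens.
        return {"Authorization": f"Bearer {self.token_provider()}"}

    def fetch_trace_by_id(self, trace_id: str) -> Optional[dict]:
        try:
            req = urllib.request.Request(
                f"{self.base_url}/traces/{trace_id}",
                headers=self._get_auth_headers(),
            )
            with urllib.request.urlopen(req, timeout=30) as resp:
                return json.load(resp)
        except Exception:
            return None
```

Passing the token provider as a callable (rather than a fixed token string) is what lets the fetcher pick up token refreshes transparently at request time.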
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🧹 Nitpick comments (1)
libs/amp-evaluation/src/amp_evaluation/runner.py (1)
253-274: Add focused tests for the new `_get_fetcher()` hard-fail path. Lines 270-274 introduce a stricter runtime branch. Please add a runner-level unit test that validates: (1) an explicit
`trace_fetcher` still takes precedence, and (2) a missing trace source raises this `ValueError`. 🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libs/amp-evaluation/src/amp_evaluation/runner.py` around lines 253 - 274, Add unit tests for Runner._get_fetcher(): create a test that constructs the runner with an explicit trace_fetcher mock/object and with config.trace.file_path set and assert _get_fetcher() returns the explicit trace_fetcher (verifying precedence of the trace_fetcher parameter over TraceLoader), and create a separate test that constructs the runner with no trace_fetcher and config.trace.file_path empty/None and assert calling _get_fetcher() raises the ValueError defined in _get_fetcher(). Use the Runner class constructor to set trace_fetcher and config.trace.file_path and reference the _get_fetcher, trace_fetcher, config.trace.file_path, and TraceLoader symbols in the assertions.
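The two tests the comment asks for can be sketched as follows. This uses a minimal stand-in for the runner (the real `BaseRunner` and its `config.trace.file_path` wiring live in the repository; the names `_StubFetcher` and the string placeholder for `TraceLoader` are illustrative):

```python
class _StubFetcher:
    """Stand-in for an explicitly supplied trace fetcher."""


class Runner:
    """Minimal stand-in mirroring the precedence logic under review."""

    def __init__(self, trace_fetcher=None, trace_file_path=None):
        self.trace_fetcher = trace_fetcher
        self.trace_file_path = trace_file_path

    def _get_fetcher(self):
        if self.trace_fetcher is not None:
            return self.trace_fetcher                      # (1) explicit fetcher wins
        if self.trace_file_path:
            return ("TraceLoader", self.trace_file_path)   # file-based fallback
        raise ValueError("no trace source configured")      # (2) hard-fail branch


def test_explicit_fetcher_takes_precedence():
    fetcher = _StubFetcher()
    runner = Runner(trace_fetcher=fetcher, trace_file_path="traces.json")
    assert runner._get_fetcher() is fetcher


def test_missing_trace_source_raises():
    try:
        Runner()._get_fetcher()
    except ValueError:
        return
    raise AssertionError("expected ValueError")
```

In a real test suite the second test would typically use `pytest.raises(ValueError)` instead of the manual try/except.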
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@libs/amp-evaluation/src/amp_evaluation/config.py`:
- Around line 49-54: Update libs/amp-evaluation/.env.example to match the new
config comments: remove the now-ignored AMP_TRACE_API_URL and AMP_TRACE_API_KEY
entries and ensure only AMP_TRACE_FILE_PATH is documented (with a short
description that it points to a trace JSON file). Verify the remaining example
env entry uses the exact variable name AMP_TRACE_FILE_PATH so it aligns with the
new behavior described in config.py.
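A minimal sketch of the config behavior the comment describes — only `AMP_TRACE_FILE_PATH` is read from the environment, while the removed variables are no longer honored. The `TraceConfig` class and `from_env` helper are hypothetical names, not the repository's actual `config.py`:

```python
import os
from dataclasses import dataclass
from typing import Optional


@dataclass
class TraceConfig:
    """Illustrative trace config: only AMP_TRACE_FILE_PATH is honored;
    AMP_TRACE_API_URL and AMP_TRACE_API_KEY are intentionally ignored."""

    file_path: Optional[str] = None

    @classmethod
    def from_env(cls) -> "TraceConfig":
        # AMP_TRACE_FILE_PATH points to a trace JSON file for the eval job.
        return cls(file_path=os.environ.get("AMP_TRACE_FILE_PATH"))
```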
In `@libs/amp-evaluation/src/amp_evaluation/trace/fetcher.py`:
- Around line 419-421: In fetch_trace_by_id (and the other fetch method around
the second occurrence) move the call to _get_auth_headers() inside the
surrounding try block so any exception from token_provider()/auth acquisition is
caught by the existing except handlers; specifically, call _get_auth_headers()
after entering the try in fetch_trace_by_id and the other fetch method (the
block around lines 453–455) and then proceed to requests.get/post so token
errors trigger the same logging/return (e.g., return None) paths rather than
propagating.
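The fix described above can be sketched as follows. The key point is that `_get_auth_headers()` now runs inside the `try`, so a failing `token_provider()` takes the same logging / return-`None` path as an HTTP error. The `_http_get` helper is a hypothetical stand-in for the real `requests.get` call:

```python
import logging
from typing import Callable, Optional

logger = logging.getLogger(__name__)


class TraceFetcher:
    """Sketch of the corrected exception handling (helper names illustrative)."""

    def __init__(self, token_provider: Callable[[], str]):
        self.token_provider = token_provider

    def _get_auth_headers(self) -> dict:
        return {"Authorization": f"Bearer {self.token_provider()}"}

    def fetch_trace_by_id(self, trace_id: str) -> Optional[dict]:
        try:
            # Moved inside the try: a failing token_provider() is now caught
            # here instead of propagating to the caller.
            headers = self._get_auth_headers()
            return self._http_get(trace_id, headers)  # hypothetical HTTP helper
        except Exception as exc:
            logger.error("Failed to fetch trace %s: %s", trace_id, exc)
            return None

    def _http_get(self, trace_id: str, headers: dict) -> dict:
        raise NotImplementedError("stand-in for the real requests.get call")
```

With this shape, a token-acquisition failure logs and returns `None` exactly like a request failure, rather than raising out of `fetch_trace_by_id`.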
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 7c246993-b9d7-47a4-9d37-9da98bd8a6bd
📒 Files selected for processing (5)
- evaluation-job/main.py
- libs/amp-evaluation/src/amp_evaluation/config.py
- libs/amp-evaluation/src/amp_evaluation/runner.py
- libs/amp-evaluation/src/amp_evaluation/trace/fetcher.py
- libs/amp-evaluation/tests/test_config.py
Purpose
The trace observer is configured to require JWT authentication, but evaluation jobs currently make unauthenticated calls to the trace observer service when fetching traces for evaluation. This PR implements a token provider for trace observer authentication.
Goals
Approach
User stories
Release note
Documentation
Training
Certification
Marketing
Automation tests
Security checks
Samples
Related PRs
Migrations (if applicable)
Test environment
Learning
Summary by CodeRabbit
- Refactor
- Tests
- Documentation