feat(api): Phase 2 Server Layer - Routes delegating to core modules (#322) #325
Conversation
Step 1: Route Audit (PHASE_2_ROUTE_AUDIT.md)
- Inventoried all 78 API routes across 19 router files
- Classified routes: 3 delegating, 52 need extraction, 23 server-only
- Priority matrix: 5 CRITICAL, 18 HIGH, 22 MEDIUM, 10 LOW/Deferred
Step 2: Gap Analysis (PHASE_2_GAP_ANALYSIS.md)
- Mapped CLI commands to core modules (all present)
- Identified 11 gaps requiring new core functions/modules
- Documented extraction priority order for route refactoring
- Added core module function reference appendix
Part of #322 - Server Layer Refactor
Step 3.1 of #322 - Extract discovery business logic to core.
Core module enhancements (core/prd_discovery.py):
- Add module-level convenience functions for route delegation:
  - start_discovery_session() - creates and starts session
  - get_session() - loads existing session by ID
  - process_discovery_answer() - processes answer with result dict
  - generate_prd_from_discovery() - generates PRD from complete session
  - get_discovery_status() - returns status with progress/question
  - reset_discovery() - marks session as complete/abandoned
New v2 discovery router (ui/routers/discovery_v2.py):
- POST /api/v2/discovery/start - start new session
- GET /api/v2/discovery/status - get session status
- POST /api/v2/discovery/{session_id}/answer - submit answer
- POST /api/v2/discovery/{session_id}/generate-prd - generate PRD
- POST /api/v2/discovery/reset - reset session
- POST /api/v2/discovery/generate-tasks - generate tasks from PRD
New v2 workspace dependency (ui/dependencies.py):
- get_v2_workspace() - resolves Workspace from path or default
The v1 discovery router remains for backwards compatibility with existing web UI. The v2 router uses Workspace (path-based) instead of project_id and delegates to core modules.
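For readers skimming the commit, a minimal sketch of the thin-adapter shape this step introduces. The concrete handler names, request/response models, and core signatures in ui/routers/discovery_v2.py may differ; the import paths, the session.id attribute, and the response envelope are assumptions here, not the verified implementation.

```python
from fastapi import APIRouter, Depends, HTTPException

from codeframe.core import prd_discovery
from codeframe.core.workspace import Workspace
from codeframe.ui.dependencies import get_v2_workspace

router = APIRouter(prefix="/api/v2/discovery", tags=["discovery-v2"])


@router.post("/start")
async def start_discovery(workspace: Workspace = Depends(get_v2_workspace)) -> dict:
    """Thin adapter: parse the request, delegate to core, format the response."""
    try:
        session = prd_discovery.start_discovery_session(workspace)
    except ValueError as e:
        raise HTTPException(status_code=400, detail=str(e))
    # session.id is an assumed attribute name, used only for illustration
    return {"success": True, "data": {"session_id": session.id}}
```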
Track Step 3.1 completion: - Core convenience functions added to prd_discovery.py - V2 discovery router created at /api/v2/discovery/* - V2 workspace dependency added - Updated progress tracker with completed/remaining items
Step 3.2 of #322 - Extract task execution business logic to core.
Core module enhancements (core/runtime.py):
- Add ApprovalResult and AssignmentResult dataclasses
- Add approve_tasks() - transitions BACKLOG tasks to READY
- Add check_assignment_status() - checks if execution can start
- Add get_ready_task_ids() - gets all READY task IDs
New v2 task router (ui/routers/tasks_v2.py):
- GET /api/v2/tasks - list tasks with status filter
- GET /api/v2/tasks/{id} - get single task
- POST /api/v2/tasks/approve - approve tasks with optional execution
- GET /api/v2/tasks/assignment-status - check if can assign
- POST /api/v2/tasks/execute - start batch execution
- POST /api/v2/tasks/{id}/start - start single task run
- POST /api/v2/tasks/{id}/stop - stop running task
- POST /api/v2/tasks/{id}/resume - resume blocked task
The v1 router remains for backwards compatibility. V2 routes use conductor.start_batch() for execution instead of LeadAgent.
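A hedged sketch of how these runtime helpers chain together. The approved_ids field name is taken from the review discussion later in this PR; the exact function signatures and dataclass fields in core/runtime.py may differ.

```python
from codeframe.core import runtime
from codeframe.core.workspace import Workspace


def approve_and_collect(workspace: Workspace) -> list[str]:
    """Approve everything in BACKLOG, then return the READY task IDs."""
    result = runtime.approve_tasks(workspace)      # returns an ApprovalResult; exact kwargs assumed
    if not result.approved_ids:                    # field name per the review notes in this PR
        return []
    return runtime.get_ready_task_ids(workspace)   # list[str] of READY task IDs per this PR
```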
Track Step 3.2 completion: - Core runtime functions added for task approval/assignment - V2 tasks router created at /api/v2/tasks/* - Updated progress tracker - Added v2 task routes to documentation table
Add checkpoint diff function to core/checkpoints.py and create
checkpoints_v2 router that delegates all operations to core module.
- Add diff() function to compare two checkpoints and show task changes
- Add TaskDiff and CheckpointDiff dataclasses for structured diff results
- Create checkpoints_v2.py router with 6 endpoints:
- POST /api/v2/checkpoints (create)
- GET /api/v2/checkpoints (list)
- GET /api/v2/checkpoints/{id} (get)
- POST /api/v2/checkpoints/{id}/restore
- DELETE /api/v2/checkpoints/{id}
- GET /api/v2/checkpoints/{a}/diff/{b}
- Register router in server.py
Part of Phase 2 server layer refactor (#322).
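As a rough illustration of the structured diff result this commit describes, the dataclasses might be shaped like the following. The field names here are guesses for illustration, not the actual core/checkpoints.py definitions.

```python
from dataclasses import dataclass, field


@dataclass
class TaskDiff:
    task_id: str
    changes: dict[str, tuple[str, str]] = field(default_factory=dict)  # field -> (old, new)


@dataclass
class CheckpointDiff:
    checkpoint_a: str
    checkpoint_b: str
    added_tasks: list[str] = field(default_factory=list)
    removed_tasks: list[str] = field(default_factory=list)
    changed_tasks: list[TaskDiff] = field(default_factory=list)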
Create core/schedule.py with v2-compatible wrappers around the existing TaskScheduler, bridging v2 Workspace/Task models with v1 scheduler using ID mapping.
- Add get_schedule() function for task scheduling with CPM
- Add predict_completion() function for completion date prediction
- Add get_bottlenecks() function for identifying scheduling issues
- Create schedule_v2.py router with 3 endpoints:
  - GET /api/v2/schedule (get schedule)
  - GET /api/v2/schedule/predict (predict completion)
  - GET /api/v2/schedule/bottlenecks (identify bottlenecks)
- Register router in server.py
Part of Phase 2 server layer refactor (#322).
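The ID-mapping bridge mentioned above boils down to assigning each v2 task UUID a stable integer for the v1 scheduler, roughly as in this sketch; the helper name and return shape are illustrative, while the uuid_to_int mapping idea matches the schedule.py code reviewed later in this PR.

```python
def build_id_mapping(task_list) -> tuple[dict[str, int], dict[int, str]]:
    """Map v2 task UUIDs to v1-style integer IDs and back."""
    uuid_to_int = {task.id: idx for idx, task in enumerate(task_list, start=1)}
    int_to_uuid = {idx: uuid for uuid, idx in uuid_to_int.items()}
    return uuid_to_int, int_to_uuid
```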
Create core/templates.py with v2-compatible wrappers around the
TaskTemplateManager, using v2 Workspace/Task models for task creation.
- Add list_templates() function for listing available templates
- Add get_template() function for getting template details
- Add get_categories() function for listing categories
- Add apply_template() function for creating tasks from templates
- Create templates_v2.py router with 4 endpoints:
- GET /api/v2/templates (list templates)
- GET /api/v2/templates/categories (list categories)
- GET /api/v2/templates/{id} (get template details)
- POST /api/v2/templates/apply (apply template)
- Register router in server.py
Part of Phase 2 server layer refactor (#322).
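A usage-level sketch of the helpers listed above. The parameter names and return types are assumptions for illustration; the real core/templates.py API may differ.

```python
from codeframe.core import templates
from codeframe.core.workspace import Workspace


def apply_first_template(workspace: Workspace) -> int:
    """List available templates, apply the first one, and report how many tasks it created."""
    available = templates.list_templates(workspace)                              # signature assumed
    if not available:
        return 0
    created = templates.apply_template(workspace, template_id=available[0].id)   # kwargs assumed
    return len(created)
```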
Mark checkpoints, schedule, and templates extractions as complete. Update core module inventory to include new schedule.py and templates.py. Add v2 route documentation for checkpoints, schedule, and templates. All HIGH priority extraction items from Phase 2B are now complete.
Create core/project_status.py with v2-compatible functions for workspace status, progress metrics, and session state management.
- Add get_task_counts() for task count statistics
- Add get_progress_metrics() for completion progress and blockers
- Add get_workspace_status() for comprehensive workspace status
- Add get_session_state(), save_session_state(), clear_session_state()
- Create projects_v2.py router with 5 endpoints:
  - GET /api/v2/projects/status (workspace status)
  - GET /api/v2/projects/progress (progress metrics)
  - GET /api/v2/projects/task-counts (task counts)
  - GET /api/v2/projects/session (session state)
  - DELETE /api/v2/projects/session (clear session)
- Register router in server.py
Part of Phase 2 server layer refactor (#322).
Mark project status and session management extraction as complete. Update core module inventory to include new project_status.py. Add v2 projects route documentation.
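A small usage sketch of the status helpers, using the field names visible in the projects_v2 snippet reviewed further down in this PR; the exact dataclasses in core/project_status.py may differ.

```python
from codeframe.core import project_status
from codeframe.core.workspace import Workspace


def progress_summary(workspace: Workspace) -> str:
    status = project_status.get_workspace_status(workspace)
    counts = status.task_counts    # total / backlog / ready / in_progress / done / blocked / failed
    progress = status.progress     # completed_count / total_count / progress_percentage / open_blockers
    return (
        f"{counts.done}/{counts.total} tasks done "
        f"({progress.progress_percentage:.0f}%), {progress.open_blockers} open blockers"
    )
```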
MEDIUM priority feature extraction (Steps 3.7-3.8):
Core modules:
- core/git.py: Headless git operations (status, commit, diff, branch)
- core/review.py: Code review with quality analyzers
V2 routers:
- ui/routers/git_v2.py: /api/v2/git/* endpoints
- ui/routers/review_v2.py: /api/v2/review/* endpoints
Feature categorization:
- Essential: git status/commit, review trigger/status
- Nice-to-have: branch management, metrics, chat
- Can-defer: advanced analytics
All 956 core tests pass.
Part of #322 - Server Layer Refactor
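As a rough idea of what a headless git helper can look like with GitPython (the library used by core/git.py per the review comments below), here is a simplified sketch; the dataclass and function name are assumptions, not the module's actual API.

```python
from dataclasses import dataclass
from pathlib import Path

import git  # GitPython


@dataclass
class GitStatusSketch:
    branch: str
    dirty: bool
    untracked_files: list[str]


def get_status(repo_path: Path) -> GitStatusSketch:
    repo = git.Repo(repo_path)
    try:
        branch = repo.active_branch.name
    except TypeError:  # GitPython raises TypeError on a detached HEAD
        branch = f"(detached {repo.head.commit.hexsha[:7]})"
    return GitStatusSketch(
        branch=branch,
        dirty=repo.is_dirty(untracked_files=True),
        untracked_files=list(repo.untracked_files),
    )
```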
Phase 2 - Route Consistency (Step 4):
Standards established:
- URL Pattern: /api/v2/{resource}/{id?}/{action?}
- Response Format: { success, data, message }
- Error Format: { error, detail, code }
New files:
- ui/response_models.py: Standard ApiResponse, ApiError, ErrorCodes
Updated:
- ui/routers/tasks_v2.py: Uses api_error() for all exceptions
- docs/PHASE_2_ROUTE_CONSISTENCY.md: Audit document
84 task-related tests pass.
Part of #322 - Server Layer Refactor
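Concretely, the standards above imply helpers along these lines. This sketch matches the api_error(message, code, detail) call shape seen elsewhere in the PR, but the real ui/response_models.py definitions may differ.

```python
from typing import Any, Optional

from pydantic import BaseModel


class ApiResponse(BaseModel):
    """Standard success envelope: { success, data, message }."""
    success: bool = True
    data: Any = None
    message: Optional[str] = None


def api_error(error: str, code: str, detail: Optional[str] = None) -> dict:
    """Standard error payload passed as HTTPException(detail=...): { error, detail, code }."""
    return {"error": error, "detail": detail, "code": code}
```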
- Add blockers_v2.py with full CRUD for human-in-the-loop workflow
- GET/POST /api/v2/blockers for list/create
- GET /api/v2/blockers/{id} for show
- POST /api/v2/blockers/{id}/answer and /resolve for state transitions
- Add prd_v2.py for PRD document management
- GET /api/v2/prd for list, GET /api/v2/prd/latest for most recent
- POST /api/v2/prd for store, DELETE /api/v2/prd/{id}
- GET/POST /api/v2/prd/{id}/versions for version management
- GET /api/v2/prd/{id}/diff for version comparison
- Add PATCH/DELETE to tasks_v2.py for task CRUD
- PATCH /api/v2/tasks/{id} for updating title/description/priority/status
- DELETE /api/v2/tasks/{id} for task deletion
- Add CLI-to-API mapping documentation for Phase 2 tracking
Coverage improvements:
- Blockers: 0% → 100%
- PRD CRUD: 0% → 100%
- Tasks CRUD: 0% → 100%
- Add GET /api/v2/tasks/{id}/stream for real-time output streaming
- Server-Sent Events (SSE) with line, info, error, and done event types
- Support for --tail N to show historical lines before live streaming
- 5-minute timeout with graceful connection handling
- Add GET /api/v2/tasks/{id}/run for run status
- Returns current or most recent run details
- Includes status, timing, and output availability
- Add get_latest_run() to core/runtime.py
- Retrieves most recent run for a task regardless of status
High Priority routes now 100% complete:
- Blockers CRUD ✅
- PRD CRUD ✅
- Task CRUD ✅
- Streaming/Run status ✅
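For context, a minimal SSE endpoint with the event types listed above might be shaped like this; the generator here emits placeholder data only, while the real implementation reads run output, honors --tail, and enforces the 5-minute timeout.

```python
import asyncio
from collections.abc import AsyncIterator

from fastapi import APIRouter
from fastapi.responses import StreamingResponse

router = APIRouter(prefix="/api/v2/tasks", tags=["tasks-v2"])


async def _demo_events(task_id: str, tail: int) -> AsyncIterator[str]:
    yield f"event: info\ndata: streaming task {task_id} (tail={tail})\n\n"
    for line in ["step 1", "step 2"]:  # placeholder for historical/live run output
        yield f"event: line\ndata: {line}\n\n"
        await asyncio.sleep(0)
    yield "event: done\ndata: end of stream\n\n"


@router.get("/{task_id}/stream")
async def stream_task_output(task_id: str, tail: int = 0) -> StreamingResponse:
    return StreamingResponse(_demo_events(task_id, tail), media_type="text/event-stream")
```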
Add comprehensive integration tests for blockers_v2, prd_v2, and tasks_v2 routers using FastAPI TestClient with workspace dependency overrides.
Tests verify:
- Core module delegation (routes call correct core functions)
- Valid input handling (correct responses for valid requests)
- Error handling (appropriate 4xx/5xx responses)
- Standard API response format adherence
50 new tests covering:
- blockers_v2: list, create, get, answer, resolve, error cases
- prd_v2: list, latest, create, get, delete, versions, diff, error cases
- tasks_v2: list, get, update, delete, start, stop, run, stream, error cases
Also adds PHASE_2_BUSINESS_LOGIC_AUDIT.md documenting v1 routers that still contain embedded business logic requiring future extraction.
Issue: #322
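The dependency-override approach mentioned in this commit follows the standard FastAPI pattern; a minimal sketch is below. The app import path and the way the workspace fixture is built are assumptions.

```python
from fastapi.testclient import TestClient

from codeframe.ui.dependencies import get_v2_workspace
from codeframe.ui.server import app  # assumed location of the FastAPI app


def make_client(fake_workspace) -> TestClient:
    """Return a client whose /api/v2/* routes see `fake_workspace` instead of resolving a path."""
    app.dependency_overrides[get_v2_workspace] = lambda: fake_workspace
    return TestClient(app)

# In a test: client = make_client(workspace_fixture)
# resp = client.get("/api/v2/tasks"); assertions then check the status code and
# the standard { success, data, message } envelope.
```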
Updates:
- V2_STRATEGIC_ROADMAP.md: Mark Phase 2 as in progress, add progress summary table showing v2 router completion status
- PHASE_2_BUSINESS_LOGIC_AUDIT.md: Add remaining work section with priority table
- PHASE_2_DEVELOPER_GUIDE.md: New guide documenting the thin adapter pattern for implementing v2 routers, including templates, anti-patterns, and testing
The developer guide provides:
- Implementation template for new v2 routers
- URL pattern and response format standards
- Error handling patterns with standard codes
- Testing approach with dependency overrides
- Checklist for new router implementation
Issue: #322
…322) Add all remaining v2 routers to achieve 1:1 CLI-to-API mapping:
- workspace_v2.py: POST/GET/PATCH /api/v2/workspaces (4 routes)
- batches_v2.py: GET/POST /api/v2/batches (5 routes)
- diagnose_v2.py: POST/GET /api/v2/tasks/{id}/diagnose (2 routes)
- pr_v2.py: GET/POST /api/v2/pr (5 routes)
- environment_v2.py: GET/POST /api/v2/env (4 routes)
- gates_v2.py: GET/POST /api/v2/gates (2 routes)
All routers follow the thin adapter pattern:
- Parse HTTP requests
- Delegate to core modules
- Format responses
Total: 80 v2 routes across 16 routers
Tests: 126 passing (50 router + 76 CLI integration)
Walkthrough
Adds many new v2 "thin-adapter" FastAPI routers and multiple core modules providing headless workspace-oriented services (checkpoints diffing, git ops, PRD discovery, scheduling, templates, review, runtime helpers, project status, etc.), plus docs and integration tests; routers delegate to the new core modules.
Changes
Sequence Diagram(s)
sequenceDiagram
participant Client
participant Router as "V2 Router"
participant Core as "core.*"
participant Storage as "Workspace Storage / DB"
participant Git as "Git Repository"
Client->>Router: POST /api/v2/git/commit
Router->>Core: git.create_commit(workspace, files, message)
Core->>Git: stage files / commit
Git-->>Core: commit hash
Core->>Storage: log/store commit metadata
Core-->>Router: CommitResult
Router-->>Client: CommitResultResponse
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Review: Phase 2 Server Layer - Routes Delegating to Core Modules
This is an excellent, comprehensive refactoring that successfully achieves the Phase 2 architecture goal: establishing the server as a thin adapter layer over core modules. The implementation quality is consistently high across all 16 new v2 routers.
✅ Strengths
1. Architectural Consistency
2. Excellent Documentation
3. Core Module Design
The new core modules are truly headless (no FastAPI imports) and well-structured:
4. Response Format Standardization
5. Comprehensive Testing
🔍 Code Quality Observations
Minor Issues Found
1. Missing Request Body Default in StopBatchRequest/ResumeBatchRequest
| Metric | Value | Assessment |
|---|---|---|
| New v2 routers | 16 | ✅ Comprehensive |
| Total v2 routes | 80 | ✅ Excellent coverage |
| Integration tests | 126 | ✅ Good coverage |
| Core modules added | 8 | ✅ Well-structured |
| Lines of code | ~7,000 | ✅ Reasonable for scope |
| Documentation files | 4 | ✅ Comprehensive |
✅ Recommendation
APPROVE with minor follow-up items.
This PR successfully completes Phase 2 and establishes a clean architectural foundation. The thin adapter pattern is consistently applied, documentation is excellent, and the 1:1 CLI-to-API mapping is preserved.
Before Merge:
- Fix StopBatchRequest/ResumeBatchRequest default value issues
- Update PHASE_2_CLI_API_MAPPING.md to mark new routes as ✅ Present
- Consider moving tech stack detection to core module
Follow-up Issues (Post-Merge):
- Add SSE streaming tests for /tasks/{id}/stream
- Expand error codes for better API client error handling
- Add workspace path validation in hosted mode
- Consider moving github_integration.py to core/
🎉 Excellent Work
This is a textbook example of a well-executed refactoring:
- Clear architectural vision
- Consistent implementation
- Comprehensive documentation
- Maintains backward compatibility
- Sets clear path for Phase 3 (Web UI Rebuild)
The v2 architecture where CLI and Server are sibling adapters over core modules is now fully realized. Phase 3 can proceed with confidence that the foundation is solid.
cc: @frankbria - Outstanding work on this phase! The developer guide will be invaluable for contributors.
Add Phase 2 v2 server layer and mount new FastAPI routers delegating to core modules at /api/v2 for review, schedule, tasks, templates, workspaces, batches, blockers, checkpoints, diagnose, discovery, environment, gates, git, pr, prd, and projects.
Introduce v2 API routers that delegate to core modules, add standardized response/error helpers and workspace dependency, and implement key handlers for review, scheduling, templates, project status, checkpoints diffing, git status/commits, and blockers; register all routers in server.py and add integration tests.
📍 Where to Start
Start with the router registrations in server.py, then review a representative v2 router.
Macroscope summarized 3426edc.
    Returns:
        List of task IDs in READY status
    """
    ready_tasks = tasks.list_tasks(workspace, status=TaskStatus.READY)
🟡 Medium
codeframe/core/runtime.py:1035
Calls to tasks.list_tasks() rely on the default limit=100, which can silently drop tasks when >100 exist. Suggest fetching all (explicit limit or pagination) to avoid truncation.
-    ready_tasks = tasks.list_tasks(workspace, status=TaskStatus.READY)
+    ready_tasks = tasks.list_tasks(workspace, status=TaskStatus.READY, limit=10000)
    # Create tasks using v2 API
    created_tasks: list[tuple[tasks.Task, list[int]]] = []
    for task_dict in task_dicts:
🟡 Medium
codeframe/core/templates.py:186
Multiple database operations run without a transaction, so a mid-way failure could leave orphaned tasks. Consider wrapping the create/update loop in a single transaction that rolls back on error.
    for file_path in files:
        full_path = repo_root / file_path
        if full_path.exists():
            valid_files.append(file_path)
        else:
            logger.warning(f"File not found, skipping: {file_path}")
🟡 Medium
codeframe/core/git.py:202
Using exists() prevents committing deleted files. Consider also checking if the file is tracked by git (was previously committed or staged) so deletions can be recorded.
-    for file_path in files:
-        full_path = repo_root / file_path
-        if full_path.exists():
-            valid_files.append(file_path)
-        else:
-            logger.warning(f"File not found, skipping: {file_path}")
+    tracked_files = set(repo.git.ls_files().splitlines())
+    for file_path in files:
+        full_path = repo_root / file_path
+        if full_path.exists() or file_path in tracked_files:
+            valid_files.append(file_path)
+        else:
+            logger.warning(f"File not found and not tracked, skipping: {file_path}")
Actionable comments posted: 13
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
🤖 Fix all issues with AI agents
In `@codeframe/core/git.py`:
- Around line 199-210: The current validation only checks existence but allows
absolute or “..” paths to escape the repo; update the loop that builds
valid_files (references: repo_root, files, valid_files, repo) to resolve each
candidate path against repo_root (use Path.resolve(strict=False) on repo_root
and on repo_root / file_path) and reject any path whose resolved form is not
within repo_root (e.g., compare with repo_root_resolved in a
startswith/relative_to check or catch ValueError from relative_to); log a
warning and skip when a path escapes the repo or is invalid, and only append
when the resolved path both exists and is inside repo_root.
- Around line 95-100: The detached-HEAD handling currently reads
repo.head.commit and only catches TypeError, which crashes for empty repos;
update both occurrences where you build the detached label (the block using
repo.active_branch and repo.head.commit.hexsha) to first check
repo.head.is_valid() and if false set a safe label like "(no commits)" or
"(detached HEAD with no commits)" without accessing repo.head.commit, otherwise
proceed to use repo.head.commit.hexsha for the short SHA; keep the existing
TypeError fallback for non-empty detached HEAD cases so you cover both
empty-repo and detached-HEAD scenarios.
In `@codeframe/core/project_status.py`:
- Around line 154-158: The code constructs a WorkspaceStatus using
workspace.name which does not exist on the Workspace model and will raise
AttributeError; change the assignment to derive the name from the repo path (use
workspace.repo_path.name) when populating WorkspaceStatus in the return
statement (where WorkspaceStatus is created and workspace.name is referenced) so
the workspace_name field uses workspace.repo_path.name instead.
In `@codeframe/core/review.py`:
- Around line 131-133: The instantiation of OWASPPatterns is incorrect:
OWASPPatterns requires a project_path argument. Update the creation in review.py
to pass the project_path (same Path used for ComplexityAnalyzer and
SecurityScanner) when constructing OWASPPatterns (i.e., replace OWASPPatterns()
with OWASPPatterns(project_path)) so the __init__ signature matches.
- Around line 190-211: The code incorrectly calls
owasp_checker.check(file_content) with file content and expects dicts; instead
call the OWASPPatterns method check_file with the file path (use
owasp_checker.check_file(full_path) or pass file_path as Path) and adapt
handling since it returns a List[ReviewFinding] objects rather than dicts—use
the returned ReviewFinding instances directly (or map their attributes) when
appending to findings and adjust scoring logic (e.g., still append
scores.append(50) per finding). Ensure you remove the inner read_text()
try/except (or keep it only for other reasons) because check_file accepts the
path.
- Around line 170-188: The code calls security_scanner.scan_file but the API is
SecurityScanner.analyze_file(file_path: Path) which returns List[ReviewFinding]
objects, so replace the call to security_scanner.scan_file(str(full_path)) with
security_scanner.analyze_file(full_path) (or Path(full_path)) and treat each
returned item as a ReviewFinding instance rather than a dict: append the finding
object (or construct a new ReviewFinding only if necessary) to findings, and
compute scores using the finding.severity attribute (fallback to "medium") with
the existing severity_scores mapping; also remove dict-specific calls like
issue.get(...) and use attribute access like finding.message, finding.file_path,
finding.line_number, and finding.suggestion.
- Around line 149-168: The code incorrectly treats
complexity_analyzer.analyze_file as returning a dict with a "functions" key;
instead iterate the list it returns (complexity_result) directly, treating each
item as a ReviewFinding (or mapping with attributes like name, complexity,
line_number) and append it to findings and scores appropriately; update the loop
in the analyze_file handling to iterate over complexity_result, compute severity
with _severity_from_score and scores.append(max(0, 100 - item["complexity"] * 5
or item.complexity-derived value)), and remove the .get("functions", []) access
while keeping use of ReviewFinding, findings, scores, and logger.warning
unchanged.
In `@codeframe/core/runtime.py`:
- Around line 928-1036: The calls to tasks.list_tasks in the approval flow and
get_ready_task_ids are using the default limit=100 and thus miss tasks beyond
the first page; update approve_tasks (where backlog_tasks is fetched for
TaskStatus.BACKLOG) and get_ready_task_ids (where READY tasks are fetched) to
request all tasks by first calling tasks.count_by_status(workspace) to get the
total for the relevant status and then pass that count as the limit to
tasks.list_tasks, or implement paginated listing until all tasks are retrieved;
ensure you use TaskStatus.BACKLOG/TaskStatus.READY and preserve existing
behavior (collecting approved_ids/excluded_ids and returning list[str] of ids).
In `@codeframe/core/schedule.py`:
- Around line 137-149: The loop converting v2 depends_on to v1 format can omit
tasks whose depends_on contains only unknown UUIDs, so ensure every task from
task_list is unconditionally added to resolver.all_tasks (use int_id from
uuid_to_int) before or regardless of dependency resolution; for each dep_uuid in
task.depends_on, continue to add resolver.dependencies and resolver.dependents
when dep_uuid exists in uuid_to_int, and when a dep_uuid is missing, surface
this (e.g., log a warning or raise) instead of silently skipping—apply this
change to the blocks referencing task_list/uuid_to_int/resolver (the blocks
around the symbols task_list, uuid_to_int, resolver, dependencies, dependents,
all_tasks).
In `@codeframe/ui/dependencies.py`:
- Around line 76-132: The function get_v2_workspace should avoid calling
Path(None) and should map get_workspace FileNotFoundError to a 404 while
preserving the documented 400 for “no path provided and no default configured”;
change the path resolution to: if workspace_path is provided use
Path(workspace_path).resolve(); elif request and getattr(request.app.state,
"default_workspace_path", None) is not None use
Path(request.app.state.default_workspace_path).resolve(); else raise
HTTPException(status_code=400, detail="No workspace path provided and no default
configured"); keep the workspace_exists(path) check, then call
get_workspace(path) inside a try/except catching FileNotFoundError and raising
HTTPException(status_code=404, detail=f"Workspace not found at {path}...");
ensure all references use the existing symbols get_v2_workspace, workspace_path,
request.app.state.default_workspace_path, workspace_exists, get_workspace,
FileNotFoundError, and HTTPException.
In `@codeframe/ui/routers/environment_v2.py`:
- Around line 197-219: The code calls non-existent ToolInstaller methods and
risks interactive prompts: replace the installer.install(...) tuple-unpacking
with a call to installer.install_tool(request.tool_name, confirm=False) and
handle its returned InstallResult object (map its fields to
InstallResultResponse.success, .message, .command_used, .tool_name). Also remove
or replace any call to installer.list_installable_tools() with the correct API
(e.g., a provided method or iterate ToolInstaller's known tools) or add a proper
list_installable_tools method to ToolInstaller so the router no longer calls a
missing method.
In `@codeframe/ui/routers/projects_v2.py`:
- Around line 94-122: The exception handler currently returns raw exception text
via HTTPException(detail=str(e)) which can leak internals; instead keep the
existing logger.error(..., exc_info=True) to record details and change the
HTTPException to a generic message (e.g. "Failed to retrieve workspace status"
or "Internal server error") so the API response does not expose str(e); apply
this change in the handler surrounding project_status.get_workspace_status and
WorkspaceStatusResponse as well as the other similar handlers that build
TaskCountsResponse/ProgressMetricsResponse (the other catch blocks that call
logger.error and raise HTTPException).
In `@codeframe/ui/routers/schedule_v2.py`:
- Around line 12-18: The current error handlers in get_schedule,
predict_completion, and get_bottlenecks return plain strings; update them to use
the standardized api_error/ErrorCodes pattern used by other v2 routers: add
imports for api_error and ErrorCodes, and replace the existing except ValueError
and generic except Exception blocks in the three functions with handlers that
raise HTTPException(status_code=404, detail=api_error("Schedule not found",
ErrorCodes.NOT_FOUND, str(e))) for ValueError and HTTPException(status_code=500,
detail=api_error("Failed to get schedule", ErrorCodes.INTERNAL_ERROR, str(e)))
for other exceptions while logging the exception with exc_info=True (keep the
same log context but change the detail payload to use api_error).
🟡 Minor comments (11)
docs/PHASE_2_CLI_API_MAPPING.md-15-17 (1)
15-17: ⚠️ Potential issue | 🟡 Minor: Markdown table style fails linting.
Multiple tables violate MD060 (missing spaces/alignments). Please run the markdown formatter or align pipes consistently to satisfy linting.
docs/PHASE_2_CLI_API_MAPPING.md-299-304 (1)
299-304: ⚠️ Potential issue | 🟡 Minor: Avoid recommending WebSocket work in Phase 2 docs.
The note recommends WebSocket for UI features, which could be read as redesigning websockets during the v2 refactor. Consider removing or deferring this to a later phase outside Phase 2 scope.
As per coding guidelines, do not redesign auth, websockets, or UI state management during v2 refactor; these are legacy UI concerns.
docs/PHASE_2_CLI_API_MAPPING.md-121-135 (1)
121-135: ⚠️ Potential issue | 🟡 Minor: Capitalize "GitHub" in module names and descriptions.
The PR section uses git.github_integration but the text around it should consistently use "GitHub" (capital H) in the narrative or headings where applicable. This avoids platform naming inconsistencies flagged by LanguageTool.
✏️ Suggested edit (example)
-### PR Commands
+### PR Commands (GitHub)
docs/PHASE_2_GAP_ANALYSIS.md-15-18 (1)
15-18: ⚠️ Potential issue | 🟡 Minor: Fix markdown table alignment (markdownlint MD060).
markdownlint is reporting table formatting issues here; please reformat the table to align pipes.
docs/PHASE_2_ROUTE_CONSISTENCY.md-55-63 (1)
55-63: ⚠️ Potential issue | 🟡 Minor: Fix markdown table alignment (markdownlint MD060).
markdownlint flags table pipe alignment in this section. Please reformat the table to satisfy the linter.
docs/PHASE_2_BUSINESS_LOGIC_AUDIT.md-14-18 (1)
14-18: ⚠️ Potential issue | 🟡 Minor: Fix markdown table alignment (markdownlint MD060).
markdownlint reports misaligned table pipes. Consider running the formatter or adjusting spacing so the tables are compliant.
docs/V2_STRATEGIC_ROADMAP.md-86-124 (1)
86-124: ⚠️ Potential issue | 🟡 Minor: Update Phase 2 status to reflect completed routers.
This PR adds workspace/batch/diagnose/PR/environment/gates v2 routers, but the roadmap still marks them as missing/partial. Please update the “Remaining” note and the progress summary table to avoid stale status reporting.
✍️ Suggested edit (illustrative)
-  - ⚠️ Remaining: Workspace, Batch, Diagnose, PR, Environment, Gates routes
+  - ✅ Workspace, Batch, Diagnose, PR, Environment, Gates routes completed
-| Workspace v2 | 0 endpoints | ⚠️ Missing |
-| Batch v2 | 1 endpoint | ⚠️ Partial (execute only) |
-| Diagnose v2 | 0 endpoints | ⚠️ Missing |
-| PR v2 | 0 endpoints | ⚠️ Missing |
-| Environment v2 | 0 endpoints | ⚠️ Missing |
-| Gates v2 | 0 endpoints | ⚠️ Missing |
+| Workspace v2 | 1+ endpoints | ✅ Complete |
+| Batch v2 | 3 endpoints | ✅ Complete |
+| Diagnose v2 | 1 endpoint | ✅ Complete |
+| PR v2 | 5 endpoints | ✅ Complete |
+| Environment v2 | 3 endpoints | ✅ Complete |
+| Gates v2 | 1 endpoint | ✅ Complete |
codeframe/ui/routers/batches_v2.py-127-141 (1)
127-141: ⚠️ Potential issue | 🟡 Minor: Make total consistent with by_status / pagination.
total currently equals the limited response size, while by_status counts all batches, which can confuse clients. Consider aligning total with the same dataset used for counts (and with the filter when provided).
🛠️ Proposed fix
-    # Calculate counts by status
-    all_batches = conductor.list_batches(workspace, limit=1000)
+    # Calculate counts by status
+    all_batches = conductor.list_batches(workspace, limit=1000)
     status_counts: dict[str, int] = {}
     for batch in all_batches:
         status_val = batch.status.value
         status_counts[status_val] = status_counts.get(status_val, 0) + 1
+
+    if status_filter:
+        total = sum(1 for b in all_batches if b.status == status_filter)
+    else:
+        total = len(all_batches)

     return BatchListResponse(
         batches=[_batch_to_response(b) for b in batch_list],
-        total=len(batch_list),
+        total=total,
         by_status=status_counts,
     )
codeframe/core/git.py-165-169 (1)
165-169: ⚠️ Potential issue | 🟡 Minor: Raise errors from list_commits() instead of silently returning empty results.
The function currently catches git.GitCommandError and git.BadName exceptions and logs them without raising, causing the API caller to receive an empty list indistinguishable from "no commits found." Core modules should propagate errors as exceptions for the UI layer to convert into appropriate HTTP responses. Consolidate the exception handlers and raise ValueError:
🛠️ Suggested change
-    except git.GitCommandError as e:
-        logger.warning(f"Failed to list commits: {e}")
-    except git.BadName as e:
-        logger.warning(f"Invalid branch reference: {e}")
+    except (git.GitCommandError, git.BadName) as e:
+        logger.warning(f"Failed to list commits: {e}")
+        raise ValueError(str(e))
This aligns with the pattern established in _get_repo() and allows the API layer to distinguish user errors (invalid branch) from server failures.
docs/PHASE_2_ROUTE_AUDIT.md-12-17 (1)
12-17: ⚠️ Potential issue | 🟡 Minor: Normalize table formatting to satisfy markdownlint.
The tables here trigger MD058/MD060 (missing blank lines and compact pipe spacing). Please add blank lines around tables and space pipe columns (e.g., | Col |) across the doc or run the formatter once.
codeframe/ui/routers/diagnose_v2.py-105-175 (1)
105-175: ⚠️ Potential issue | 🟡 Minor: Fix the misleading docstring for the error conditions.
The docstring currently states:
- 404: Task not found or no failed run
- 400: Task has no failed runs
This is contradictory: it assigns 404 to "no failed run" but then 400 to the same scenario. The implementation correctly returns 404 when the task is not found, and 400 when there are no failed runs. Update line 126 to read:
- 404: Task not found
The code properly orders runs by started_at DESC, so no changes are needed for that concern.
codeframe/ui/routers/workspace_v2.py (2)
74-120: Harden tech-stack detection against file read/encoding errors.
read_text()can raise on encoding/permissions; for a best-effort detector, swallow read errors and continue.🛠️ Suggested guardrails
- if pyproject.exists(): - content = pyproject.read_text() + if pyproject.exists(): + try: + content = pyproject.read_text(encoding="utf-8", errors="ignore") + except OSError: + content = "" @@ - if package_json.exists(): - content = package_json.read_text() + if package_json.exists(): + try: + content = package_json.read_text(encoding="utf-8", errors="ignore") + except OSError: + content = ""
254-272: Avoid duplicate filesystem checks.
You callworkspace_existstwice; cache it to save extra I/O.♻️ Small optimization
- return { - "exists": ws.workspace_exists(path), - "path": str(path), - "state_dir": str(path / ".codeframe") if ws.workspace_exists(path) else None, - } + exists = ws.workspace_exists(path) + return { + "exists": exists, + "path": str(path), + "state_dir": str(path / ".codeframe") if exists else None, + }codeframe/ui/routers/blockers_v2.py (1)
236-257: Avoid string-matching error messages for state mapping.
Parsing exception text is brittle; prefer typed exceptions or error codes from core to keep API mappings stable.Also applies to: 284-305
codeframe/ui/routers/pr_v2.py (2)
244-260: Optional request body should useBody(None)for proper OpenAPI schema.When
request: MergePRRequest = None, FastAPI may not correctly generate the OpenAPI schema or handle the optional body. UsingBody(None)makes the intent explicit.♻️ Proposed fix
+from fastapi import APIRouter, Body, Depends, HTTPException, Query ... `@router.post`("/{pr_number}/merge", response_model=MergeResponse) async def merge_pull_request( pr_number: int, - request: MergePRRequest = None, + request: Optional[MergePRRequest] = Body(None), workspace: Workspace = Depends(get_v2_workspace), ) -> MergeResponse:
126-161: Unusedworkspaceparameter in GitHub endpoints.The
workspacedependency is injected but not used in the GitHub PR operations. If this is intentional for future use or consistency with other v2 endpoints, consider adding a brief comment. Otherwise, this could be removed to simplify the API.codeframe/ui/routers/tasks_v2.py (1)
566-576: Synchronous agent execution may block the request thread.When
execute=True,runtime.execute_agentruns synchronously within the async endpoint. For long-running tasks, this could block the server. Consider documenting this limitation or making execution async/background.The comment at Line 567 acknowledges this ("might want to make this async/background"). Consider adding a note to the endpoint docstring about expected execution duration, or returning early with the run_id and having clients poll for completion.
codeframe/core/templates.py (1)
148-221: Consider making template application atomic.
If task creation or dependency wiring fails mid-way, partial tasks remain. Consider wrapping this in a transaction or cleaning up on failure to keep the workspace consistent.
    # Add dependencies (convert v2 depends_on to v1 format)
    for task in task_list:
        int_id = uuid_to_int[task.id]
        if task.depends_on:
            for dep_uuid in task.depends_on:
                if dep_uuid in uuid_to_int:
                    dep_int_id = uuid_to_int[dep_uuid]
                    resolver.dependencies.setdefault(int_id, set()).add(dep_int_id)
                    resolver.dependents.setdefault(dep_int_id, set()).add(int_id)
                    resolver.all_tasks.add(int_id)
                    resolver.all_tasks.add(dep_int_id)
        else:
            resolver.all_tasks.add(int_id)
Ensure every task is registered in the dependency graph.
If a task has only unknown depends_on entries, it never gets added to resolver.all_tasks, which can silently drop it from scheduling. Add the task unconditionally and consider surfacing unknown deps (log/raise).
🛠️ Proposed fix (apply in all three blocks)
- for task in task_list:
- int_id = uuid_to_int[task.id]
- if task.depends_on:
- for dep_uuid in task.depends_on:
- if dep_uuid in uuid_to_int:
- dep_int_id = uuid_to_int[dep_uuid]
- resolver.dependencies.setdefault(int_id, set()).add(dep_int_id)
- resolver.dependents.setdefault(dep_int_id, set()).add(int_id)
- resolver.all_tasks.add(int_id)
- resolver.all_tasks.add(dep_int_id)
- else:
- resolver.all_tasks.add(int_id)
+ for task in task_list:
+ int_id = uuid_to_int[task.id]
+ resolver.all_tasks.add(int_id)
+ if task.depends_on:
+ for dep_uuid in task.depends_on:
+ if dep_uuid in uuid_to_int:
+ dep_int_id = uuid_to_int[dep_uuid]
+ resolver.dependencies.setdefault(int_id, set()).add(dep_int_id)
+ resolver.dependents.setdefault(dep_int_id, set()).add(int_id)
+            resolver.all_tasks.add(dep_int_id)
Also applies to: 243-255, 333-345
🤖 Prompt for AI Agents
In `@codeframe/core/schedule.py` around lines 137 - 149, The loop converting v2
depends_on to v1 format can omit tasks whose depends_on contains only unknown
UUIDs, so ensure every task from task_list is unconditionally added to
resolver.all_tasks (use int_id from uuid_to_int) before or regardless of
dependency resolution; for each dep_uuid in task.depends_on, continue to add
resolver.dependencies and resolver.dependents when dep_uuid exists in
uuid_to_int, and when a dep_uuid is missing, surface this (e.g., log a warning
or raise) instead of silently skipping—apply this change to the blocks
referencing task_list/uuid_to_int/resolver (the blocks around the symbols
task_list, uuid_to_int, resolver, dependencies, dependents, all_tasks).
    try:
        status = project_status.get_workspace_status(workspace)

        return WorkspaceStatusResponse(
            workspace_id=status.workspace_id,
            workspace_name=status.workspace_name,
            repo_path=status.repo_path,
            tech_stack=status.tech_stack,
            task_counts=TaskCountsResponse(
                total=status.task_counts.total,
                backlog=status.task_counts.backlog,
                ready=status.task_counts.ready,
                in_progress=status.task_counts.in_progress,
                done=status.task_counts.done,
                blocked=status.task_counts.blocked,
                failed=status.task_counts.failed,
            ),
            progress=ProgressMetricsResponse(
                completed_count=status.progress.completed_count,
                total_count=status.progress.total_count,
                progress_percentage=status.progress.progress_percentage,
                open_blockers=status.progress.open_blockers,
            ),
            created_at=status.created_at.isoformat(),
        )

    except Exception as e:
        logger.error(f"Failed to get workspace status: {e}", exc_info=True)
        raise HTTPException(status_code=500, detail=str(e))
Avoid returning raw exception details in 500 responses.
detail=str(e) can leak internal state (paths, stack context). Prefer a generic message and keep details in logs; apply to all handlers listed.
🔒 Suggested fix (apply to all handlers)
- except Exception as e:
- logger.error(f"Failed to get workspace status: {e}", exc_info=True)
- raise HTTPException(status_code=500, detail=str(e))
+ except Exception as e:
+ logger.error(f"Failed to get workspace status: {e}", exc_info=True)
+ raise HTTPException(status_code=500, detail="Internal server error")Also applies to: 139-151, 166-181, 200-214, 231-241
🤖 Prompt for AI Agents
In `@codeframe/ui/routers/projects_v2.py` around lines 94 - 122, The exception
handler currently returns raw exception text via HTTPException(detail=str(e))
which can leak internals; instead keep the existing logger.error(...,
exc_info=True) to record details and change the HTTPException to a generic
message (e.g. "Failed to retrieve workspace status" or "Internal server error")
so the API response does not expose str(e); apply this change in the handler
surrounding project_status.get_workspace_status and WorkspaceStatusResponse as
well as the other similar handlers that build
TaskCountsResponse/ProgressMetricsResponse (the other catch blocks that call
logger.error and raise HTTPException).
Critical fixes:
- review.py: Fix API mismatches with quality analyzers
  - OWASPPatterns now receives project_path argument
  - Use analyze_file (not scan_file) for SecurityScanner
  - Use check_file (not check) for OWASPPatterns
  - Handle List[ReviewFinding] return types correctly
- environment_v2.py: Fix ToolInstaller API
  - Use install_tool() instead of non-existent install()
  - Replace list_installable_tools() with hardcoded list
Major fixes:
- project_status.py: Fix workspace.name → workspace.repo_path.name
- project_status.py: Remove invalid limit param from blockers.list_open()
- git.py: Handle empty repos (no commits) in branch detection
- git.py: Add path validation to prevent traversal attacks
- dependencies.py: Handle FileNotFoundError from get_workspace()
- dependencies.py: Guard against None default_workspace_path
- schedule_v2.py: Standardize error handling with api_error/ErrorCodes
All 126 integration tests pass.
Code Review: Phase 2 Server Layer Implementation
This is an excellent implementation of the Phase 2 server layer refactor. The PR successfully achieves the architectural goal of establishing server routes as thin adapters over core modules. Here's my detailed analysis:
✅ Strengths
1. Architectural Excellence
2. Code Quality
3. Test Coverage
4. Documentation
The three documentation files added are outstanding:
🔍 Code Quality Analysis
Router Implementation (workspace_v2.py)
Score: 9.5/10
Core Module Implementation (git.py, review.py)
Score: 9/10
Error Handling Pattern (batches_v2.py, pr_v2.py)
Score: 10/10
🛡️ Security Analysis
✅ No Security Issues Found
⚡ Performance Considerations
Strengths
Potential Optimization (Low Priority)
📋 Potential Issues & Recommendations
1. Async/Sync Mismatch (Low Priority)
Location: All v2 routers
2. PR Router Uses Legacy Module (Medium Priority)
Location:
3. Tech Stack Detection Duplication (Low Priority)
Location:
4. Missing Rate Limiting (Future Enhancement)
Location: All v2 routers
🧪 Test Coverage Assessment
Excellent Coverage
📊 Metrics ValidationThe PR claims:
🎯 Alignment with CLAUDE.md✅ Follows All Architecture Rules
✅ Follows Development Guidelines
🚀 Final RecommendationAPPROVE ✅This PR is ready to merge with only minor suggestions for future improvement. Merge Criteria Met
Post-Merge Improvements (Optional)
Impact
Excellent work! This PR represents a significant step forward in the v2 architecture. The implementation quality is high, the patterns are consistent, and the documentation is thorough. 🎉 |
    if staged:
        # Staged changes (compared to HEAD)
        if repo.head.is_valid():
            return repo.git.diff("--cached")
        return ""
🟢 Low
codeframe/core/git.py:252
When staged=True in a repo with no commits, this returns empty string, but git diff --cached actually works and shows staged content. Consider removing the repo.head.is_valid() check so staged changes are visible before the first commit.
-    if staged:
-        # Staged changes (compared to HEAD)
-        if repo.head.is_valid():
-            return repo.git.diff("--cached")
-        return ""
+    if staged:
+        # Staged changes (compared to HEAD)
+        return repo.git.diff("--cached")
    resolver = DependencyResolver()

    # Add dependencies (convert v2 depends_on to v1 format)
    for task in task_list:
🟡 Medium
codeframe/core/schedule.py:138
Add each task to resolver.all_tasks unconditionally before processing depends_on. Tasks with only invalid dependency UUIDs are currently skipped from scheduling because they’re never added.
    counts = tasks.count_by_status(workspace)

    return TaskCounts(
        total=sum(counts.values()),
🟢 Low
codeframe/core/project_status.py:97
TaskCounts is missing a merged field, but total includes MERGED tasks via sum(counts.values()). Consider adding a merged field to keep totals consistent.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@codeframe/core/git.py`:
- Around line 202-219: The code assumes repo.working_tree_dir is set when
creating repo_root; guard against bare repositories by checking if
repo.working_tree_dir is None before calling Path(...).resolve() (the block
around repo_root = Path(repo.working_tree_dir).resolve()); if it is None raise a
clear ValueError (or return an appropriate error) indicating the repo is bare so
the subsequent path validation (loop building candidate and using
candidate.relative_to(repo_root)) won't throw a TypeError.
In `@codeframe/core/project_status.py`:
- Around line 206-239: save_session_state currently types completed_tasks as
Optional[list[str]] while SessionManager and the stored schema expect task IDs
as integers; confirm the canonical ID type and then align them: if IDs are
integers, change the save_session_state signature to completed_tasks:
Optional[list[int]], update the docstring and the state dict construction
accordingly, and update callers to pass ints (or coerce values to int before
building state) so session_state.json and SessionManager remain consistent; also
update SessionManager documentation/comments to reflect the canonical type.
🧹 Nitpick comments (5)
codeframe/ui/dependencies.py (1)
133-137: Redundant None check afterget_workspace.Per
codeframe/core/workspace.py(lines 473-510),get_workspace()either returns a validWorkspaceor raisesFileNotFoundError—it never returnsNone. This check is unreachable dead code since theFileNotFoundErroris already caught above.🧹 Proposed removal of dead code
try: workspace = get_workspace(path) except FileNotFoundError: raise HTTPException( status_code=404, detail=f"Workspace not found at {path}. Initialize with 'cf init {path}'", ) - if not workspace: - raise HTTPException( - status_code=404, - detail=f"Workspace not found at {path}. Initialize with 'cf init {path}'", - ) - return workspacecodeframe/ui/routers/environment_v2.py (1)
231-275: Unusedworkspaceparameter and hardcoded tool list may drift fromToolInstaller.Two observations:
The
workspaceparameter (line 233) is injected but never used in the function body. Either remove it or use it to provide workspace-specific tool recommendations.The hardcoded
installable_toolslist (lines 246-263) may drift out of sync with whatToolInstaller.can_install()actually supports. Consider querying the sub-installers'SUPPORTED_TOOLSdirectly or adding alist_installable_tools()method toToolInstaller.♻️ Option 1: Remove unused dependency
`@router.get`("/tools") async def list_available_tools( - workspace: Workspace = Depends(get_v2_workspace), ) -> dict:♻️ Option 2: Query ToolInstaller dynamically (preferred for maintainability)
Add a method to
ToolInstallerincodeframe/core/installer.py:def list_installable_tools(self) -> list[dict]: """Return list of all tools that can be installed.""" tools = [] for installer in [self._pip_installer, self._npm_installer, self._cargo_installer, self._system_installer]: for tool in getattr(installer, 'SUPPORTED_TOOLS', []): tools.append({ "name": tool, "category": installer.__class__.__name__.replace('Installer', '').lower(), }) return toolsThen in the router:
try: - # List of tools known to be installable by ToolInstaller - # These are aggregated from PipInstaller, NpmInstaller, CargoInstaller, SystemInstaller - installable_tools = [ - # Python tools (pip/uv) - {"name": "pytest", "category": "python", "description": "Python testing framework"}, - ... - ] + installer = ToolInstaller() + installable_tools = installer.list_installable_tools()codeframe/core/review.py (2)
90-108: Unused helper function_severity_from_score.This function is defined but never called anywhere in the module. The code at lines 164, 184, and 204 uses a
severity_scoresdict lookup instead to map severity to scores (the inverse operation).Consider removing this dead code or using it if there's a future need.
163-165: Consider extractingseverity_scoresas a module-level constant.The identical
severity_scoresdict is defined three times inside the loop. Extracting it as a constant would improve clarity and adhere to DRY.♻️ Proposed refactor
EXCELLENT_THRESHOLD = 90 GOOD_THRESHOLD = 70 ACCEPTABLE_THRESHOLD = 50 + +# Severity to score mapping for averaging +SEVERITY_SCORES = {"critical": 20, "high": 40, "medium": 60, "low": 80, "info": 95}Then replace inline definitions:
- severity_scores = {"critical": 20, "high": 40, "medium": 60, "low": 80, "info": 95} - scores.append(severity_scores.get(finding.severity, 60)) + scores.append(SEVERITY_SCORES.get(finding.severity, 60))Also applies to: 183-185, 203-205
codeframe/core/project_status.py (1)
140-162: Reduce duplicate task-count queries in status assembly.
get_workspace_statuscallsget_task_countsandget_progress_metrics, each of which hitstasks.count_by_status. You can reuse a single count result to avoid a second DB query.♻️ Proposed refactor to reuse counts
-def get_task_counts(workspace: Workspace) -> TaskCounts: +def get_task_counts( + workspace: Workspace, + counts: Optional[dict[str, int]] = None, +) -> TaskCounts: @@ - counts = tasks.count_by_status(workspace) + if counts is None: + counts = tasks.count_by_status(workspace) @@ -def get_progress_metrics(workspace: Workspace) -> ProgressMetrics: +def get_progress_metrics( + workspace: Workspace, + counts: Optional[dict[str, int]] = None, +) -> ProgressMetrics: @@ - counts = tasks.count_by_status(workspace) + if counts is None: + counts = tasks.count_by_status(workspace) @@ def get_workspace_status(workspace: Workspace) -> WorkspaceStatus: @@ - task_counts = get_task_counts(workspace) - progress = get_progress_metrics(workspace) + counts = tasks.count_by_status(workspace) + task_counts = get_task_counts(workspace, counts=counts) + progress = get_progress_metrics(workspace, counts=counts)
    # Validate file paths exist and are within repo
    repo_root = Path(repo.working_tree_dir).resolve()
    valid_files: list[str] = []
    for file_path in files:
        candidate = (repo_root / file_path).resolve()
        # Security: Ensure path stays within repo root
        try:
            candidate.relative_to(repo_root)
        except ValueError:
            logger.warning(f"File outside repo, skipping: {file_path}")
            continue
        if candidate.exists():
            valid_files.append(str(candidate.relative_to(repo_root)))
        else:
            logger.warning(f"File not found, skipping: {file_path}")

    if not valid_files:
        raise ValueError("None of the specified files exist")
Path validation is correctly implemented; guard against bare repositories.
The security fix for path traversal (using resolve() and relative_to()) is properly implemented. However, repo.working_tree_dir returns None for bare repositories, which would cause TypeError on Line 203.
🛡️ Proposed guard for bare repos
repo = _get_repo(workspace)
# Validate file paths exist and are within repo
- repo_root = Path(repo.working_tree_dir).resolve()
+ if repo.working_tree_dir is None:
+ raise ValueError("Cannot commit in a bare repository")
+ repo_root = Path(repo.working_tree_dir).resolve()
valid_files: list[str] = []🤖 Prompt for AI Agents
In `@codeframe/core/git.py` around lines 202 - 219, The code assumes
repo.working_tree_dir is set when creating repo_root; guard against bare
repositories by checking if repo.working_tree_dir is None before calling
Path(...).resolve() (the block around repo_root =
Path(repo.working_tree_dir).resolve()); if it is None raise a clear ValueError
(or return an appropriate error) indicating the repo is bare so the subsequent
path validation (loop building candidate and using
candidate.relative_to(repo_root)) won't throw a TypeError.
def save_session_state(
    workspace: Workspace,
    summary: str,
    completed_tasks: Optional[list[str]] = None,
    next_actions: Optional[list[str]] = None,
    current_plan: Optional[str] = None,
    active_blockers: Optional[list[dict]] = None,
    progress_pct: float = 0.0,
) -> None:
    """Save session state for a workspace.

    Saves session state to .codeframe/session_state.json.

    Args:
        workspace: Target workspace
        summary: Summary of the session
        completed_tasks: List of completed task IDs
        next_actions: List of next action items
        current_plan: Current task/plan
        active_blockers: List of active blocker dicts
        progress_pct: Progress percentage
    """
    session_mgr = SessionManager(str(workspace.repo_path))

    state = {
        "summary": summary,
        "completed_tasks": completed_tasks or [],
        "next_actions": next_actions or [],
        "current_plan": current_plan,
        "active_blockers": active_blockers or [],
        "progress_pct": progress_pct,
    }

    session_mgr.save_session(state)
Align completed_tasks typing with the stored schema.
save_session_state uses Optional[list[str]], while SessionManager documents completed_tasks as List[int]. If task IDs are integers in the DB/schema, this mismatch can leak strings into session_state.json and break downstream consumers. Please confirm the canonical type and align both sides.
🔧 Possible fix if IDs are integers
- completed_tasks: Optional[list[str]] = None,
+    completed_tasks: Optional[list[int]] = None,
Run this to verify the task ID type and completed_tasks usage:
#!/bin/bash
# Inspect task ID schema/usage and completed_tasks consumers
rg -n "completed_tasks" -S
rg -n "CREATE TABLE tasks" -S
rg -n "task_id" -S
🤖 Prompt for AI Agents
In `@codeframe/core/project_status.py` around lines 206 - 239, save_session_state
currently types completed_tasks as Optional[list[str]] while SessionManager and
the stored schema expect task IDs as integers; confirm the canonical ID type and
then align them: if IDs are integers, change the save_session_state signature to
completed_tasks: Optional[list[int]], update the docstring and the state dict
construction accordingly, and update callers to pass ints (or coerce values to
int before building state) so session_state.json and SessionManager remain
consistent; also update SessionManager documentation/comments to reflect the
canonical type.
- Update status badge to Phase 2 In Progress (4285 tests)
- Add API key authentication section with CLI commands
- Add rate limiting configuration documentation
- Document new V2 API endpoints
- Add Security & API section to Key Features
- Update roadmap to show Phase 2 at 90% complete
- Mark #322, #325, #326, #327 as complete
- Update current focus to remaining items (WebSocket, OpenAPI, pagination)
Summary
Completes Phase 2 of the v2 refactor by establishing all server routes as thin adapters that delegate to core modules. This achieves the architecture goal where CLI and Server are siblings, both calling core.
Changes
New v2 Routers (6 added)
- workspace_v2.py - Workspace init/update (4 routes)
- batches_v2.py - Batch execution management (5 routes)
- diagnose_v2.py - Task diagnostics (2 routes)
- pr_v2.py - GitHub PR management (5 routes)
- environment_v2.py - Environment validation (4 routes)
- gates_v2.py - Verification gates (2 routes)
Existing v2 Routers (10 already complete)
blockers_v2.py, prd_v2.py, tasks_v2.py, checkpoints_v2.py
discovery_v2.py, schedule_v2.py, templates_v2.py
projects_v2.py, git_v2.py, review_v2.py
Documentation
- docs/PHASE_2_BUSINESS_LOGIC_AUDIT.md with completion status
- docs/PHASE_2_DEVELOPER_GUIDE.md with thin adapter patterns
Metrics
Acceptance Criteria
core.* modules
Test Plan
- uv run pytest tests/ui/test_v2_routers_integration.py - 50 tests pass
- uv run pytest tests/cli/test_v2_cli_integration.py - 76 tests pass
Closes #322
Summary by CodeRabbit
New Features
Chores