fix: add MCP server config to ExpandChatSession for feature creation #88

sundog75 wants to merge 4 commits into AutoForgeAI:master
Conversation
📝 Walkthrough

ExpandChatSession no longer uses an MCP tool for creating features: it parses `<features_to_create>` JSON blocks from Claude streaming responses, creates features directly in the database in bulk (assigning incremental priorities and flushing to obtain IDs), tracks created feature IDs/counts, and emits `features_created` events. The Feature model gains `regression_count`, `testing_in_progress`, and `last_tested_at` columns; the MCP server gains `feature_get_for_regression` to fetch and increment passing features.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant ExpandService as "ExpandChatSession"
    participant ClaudeClient as "ClaudeSDKClient"
    participant DB as Database
    Client->>ExpandService: request expand session
    ExpandService->>ClaudeClient: stream prompt (feature-creation tools disabled)
    ClaudeClient-->>ExpandService: stream text chunks (may contain <features_to_create> blocks)
    ExpandService->>ExpandService: accumulate full_response, detect/deduplicate feature blocks
    ExpandService->>DB: call _create_features_bulk(features) (determine start priority, insert, flush)
    DB-->>ExpandService: return created feature IDs/metadata
    ExpandService->>Client: emit features_created event and return expanded session
```
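The detect/deduplicate step above can be sketched roughly as follows. The `<features_to_create>` tag name comes from the walkthrough, but the JSON shape, helper name, and dedup key are assumptions, not the PR's actual code:

```python
import json
import re

def extract_feature_blocks(full_response: str) -> list[dict]:
    """Extract <features_to_create> JSON blocks from accumulated stream text.

    Deduplicates by (name, category) so a block repeated across chunks is
    only processed once. Malformed (partially streamed) blocks are skipped.
    """
    pattern = re.compile(r"<features_to_create>(.*?)</features_to_create>", re.DOTALL)
    seen: set[tuple[str, str]] = set()
    features: list[dict] = []
    for block in pattern.findall(full_response):
        try:
            parsed = json.loads(block)
        except json.JSONDecodeError:
            continue  # ignore incomplete or malformed JSON
        for f in parsed if isinstance(parsed, list) else [parsed]:
            key = (f.get("name", ""), f.get("category", ""))
            if key not in seen:
                seen.add(key)
                features.append(f)
    return features
```

The dedup guard matters because the same block can appear more than once as the accumulated `full_response` is re-scanned across streaming chunks.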
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Force-pushed from cff1808 to 32ec92e
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@api/database.py`:
- Around line 61-66: The migration function _migrate_add_testing_columns is
currently a no-op but must add the deprecated columns for DBs created after
their removal; update _migrate_add_testing_columns to check the target table
schema (e.g., via inspector or PRAGMA) for the absence of testing_in_progress
and last_tested_at and, if missing, execute DDL to ALTER TABLE and add
testing_in_progress BOOLEAN NOT NULL DEFAULT FALSE and last_tested_at DATETIME
NULL (or equivalent for SQLite), ensuring the column definitions match the
SQLAlchemy model defaults and nullability and that the operation is safe (wrap
in try/except and commit/rollback as in existing migration patterns). Ensure the
function references the same table name used by the model and handles both
SQLite and other RDBMS dialect nuances when issuing ALTER statements.
```python
# DEPRECATED: Kept for backward compatibility with existing databases.
# These columns are no longer used but prevent NOT NULL violations on insert.
# Removed in commit 486979c but old databases still have them.
testing_in_progress = Column(Boolean, nullable=False, default=False)
last_tested_at = Column(DateTime, nullable=True, default=None)
```
Migration gap: databases created after column removal won't have these columns.
The columns are added to the model, but _migrate_add_testing_columns() (lines 241-253) is a no-op. This creates a problem for databases that were created after the columns were removed in commit 486979c but before this PR—they won't have these columns, and INSERT operations will fail.
Consider updating _migrate_add_testing_columns to actually add the columns if missing:
Proposed fix for _migrate_add_testing_columns:

```diff
 def _migrate_add_testing_columns(engine) -> None:
-    """Legacy migration - no longer adds testing columns.
-
-    The testing_in_progress and last_tested_at columns were removed from the
-    Feature model as part of simplifying the testing agent architecture.
-    Multiple testing agents can now test the same feature concurrently
-    without coordination.
-
-    This function is kept for backwards compatibility but does nothing.
-    Existing databases with these columns will continue to work - the columns
-    are simply ignored.
-    """
-    pass
+    """Add testing columns for backward compatibility.
+
+    These columns are deprecated but required in the model so SQLAlchemy
+    populates them on INSERT (preventing NOT NULL violations in old databases).
+    """
+    with engine.connect() as conn:
+        result = conn.execute(text("PRAGMA table_info(features)"))
+        columns = [row[1] for row in result.fetchall()]
+
+        if "testing_in_progress" not in columns:
+            conn.execute(text(
+                "ALTER TABLE features ADD COLUMN testing_in_progress BOOLEAN DEFAULT 0"
+            ))
+        if "last_tested_at" not in columns:
+            conn.execute(text(
+                "ALTER TABLE features ADD COLUMN last_tested_at DATETIME DEFAULT NULL"
+            ))
+        conn.commit()
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@mcp_server/feature_mcp.py`:
- Around line 304-352: The feature_get_for_regression function currently reads
Feature.regression_count and then updates it in-memory, which allows lost
increments under concurrent requests; change the selection/update to perform
row-level locking or an atomic update: use
session.query(Feature).with_for_update().filter(Feature.passes ==
True).order_by(Feature.regression_count.asc(), Feature.id.asc()).limit(limit) to
lock rows before incrementing regression_count, then increment and commit (or
perform an UPDATE ... RETURNING to atomically increment regression_count and
fetch the rows), ensure session.commit() and session.refresh(...) are used after
the locked/atomic update so returned features reflect the new counts.
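The locked read-then-increment the prompt describes could look like the sketch below. The helper name and session wiring are assumptions; also note SQLite ignores FOR UPDATE (writes are serialized by the database file lock instead), so the row locks only take effect on backends such as PostgreSQL or MySQL:

```python
from sqlalchemy.orm import Session

def get_features_for_regression(session: Session, Feature, limit: int = 3):
    """Select the least-regression-tested passing features under row locks.

    with_for_update() locks the selected rows so two concurrent callers
    cannot both read the same regression_count and lose an increment.
    """
    rows = (
        session.query(Feature)
        .with_for_update()
        .filter(Feature.passes == True)  # noqa: E712 - SQLAlchemy column comparison
        .order_by(Feature.regression_count.asc(), Feature.id.asc())
        .limit(limit)
        .all()
    )
    for feature in rows:
        feature.regression_count += 1
    session.commit()  # releases the locks and persists the increments
    return rows
```

An `UPDATE ... RETURNING` would achieve the same atomically in a single statement on backends that support it.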
♻️ Duplicate comments (1)

api/database.py (1)

61-65: Missing migration for deprecated testing columns can break inserts. Adding testing_in_progress / last_tested_at to the model means SQLAlchemy will include them in INSERTs; databases created after their removal but before this change won't have those columns and will error. _migrate_add_testing_columns() is still a no-op, so this gap persists. Suggested fix: the same PRAGMA-based column check proposed in the earlier comment.
As you referenced my issue, I tested this PR:
…toForgeAI#88)

- Created tests/test_validate_dependency_graph_complex_cycles.py (19 tests)
- Verified complex cycles (A->B->C->A) are detected with full path [1, 2, 3]
- Verified missing dependencies are also detected alongside cycles
- All cycle issues correctly marked as auto_fixable=False (requires user action)
- Updated claude-progress.txt with session notes

Feature AutoForgeAI#88 marked as PASSING.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
SUMMARY:
Fixed TypeScript build error in ProjectSetupRequired.tsx where startAgent
was being called with a boolean instead of an options object.
DETAILS:
- The startAgent API function signature was updated (in previous PR merges)
to accept an options object: { yoloMode?, parallelMode?, maxConcurrency?, testingAgentRatio? }
- ProjectSetupRequired.tsx was still calling it with the old signature:
startAgent(projectName, yoloMode) - passing boolean directly
- Changed to: startAgent(projectName, { yoloMode }) - wrapping in options object
This was the only remaining build error after merging 13+ PRs from upstream:
- PR AutoForgeAI#112: Security vulnerabilities and race conditions
- PR AutoForgeAI#89: Windows subprocess blocking fix
- PR AutoForgeAI#109: Rate limit handling with exponential backoff
- PR AutoForgeAI#88: MCP server config for ExpandChatSession
- PR AutoForgeAI#100: Diagnostic warnings for config loading
- PR AutoForgeAI#110: Quality gates (quality_gates.py)
- PR AutoForgeAI#113: Structured logging (structured_logging.py)
- PR AutoForgeAI#48: Knowledge files support (API, schemas, prompts)
- PR AutoForgeAI#29: Feature editing/deletion (MCP tools)
- PR AutoForgeAI#45: Chat persistence
- PR AutoForgeAI#52: Refactoring feature guidance
- PR AutoForgeAI#4: Project reset functionality
- PR AutoForgeAI#95: UI polish, health checks, cross-platform fixes
Build now passes: npm run build succeeds with all 2245 modules transformed.
Please could you resolve the conflicts.
Please resolve the conflict. Thanks!

Please continue to address the conflict. Thank you.
When adding features to existing projects, the ExpandChatSession was unable to use feature creation MCP tools because it lacked the MCP server configuration that AssistantChatSession has.

This commit adds:
- EXPAND_FEATURE_TOOLS constant for feature_create, feature_create_bulk, and feature_get_stats MCP tools
- MCP server configuration pointing to mcp_server.feature_mcp
- Updated allowed_tools and security settings to include feature tools

The existing XML parsing fallback continues to work as defense-in-depth if MCP fails.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Existing databases have testing_in_progress (NOT NULL) and last_tested_at columns from before commit 486979c. Re-add them to the Feature model with defaults to prevent NOT NULL violations when creating features via MCP.

The columns are no longer used by the application but must exist in the model so SQLAlchemy populates them on INSERT.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implements the regression_count column and feature_get_for_regression MCP tool to ensure even distribution of regression testing across all passing features.

Changes:
- Add regression_count column to Feature model with migration
- Add feature_get_for_regression MCP tool that:
  - Returns passing features ordered by regression_count (ascending)
  - Increments count after selection for round-robin behavior
  - Prevents duplicate testing of same features
- Remove unused RegressionInput class

Based on PR AutoForgeAI#47 by connor-tyndall, cleanly reimplemented to avoid merge conflicts.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Use with_for_update() to acquire locks before reading features in feature_get_for_regression. This prevents concurrent requests from both selecting the same features and losing increment updates.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Force-pushed from 5b1bbb9 to 5afd57f
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
server/services/expand_chat_session.py (1)

320-389: ⚠️ Potential issue | 🟡 Minor

Remove unused MCP tool declarations or implement proper tool handling.

The EXPAND_FEATURE_TOOLS are registered with the Claude client but never processed: the receive_response loop only handles TextBlock from AssistantMessage, not ToolUseBlock or ToolResultBlock. This means that if Claude calls feature_create_bulk via MCP, the tool invocation is silently ignored. Either remove the tool declarations if the fallback <features_to_create> parser is the intended path, or add tool handling (as in spec_chat_session.py and assistant_chat_session.py) to actually use the MCP tools.
🤖 Fix all issues with AI agents
In `@api/database.py`:
- Around line 262-275: The migration _migrate_add_regression_count_column
currently adds regression_count without a NOT NULL constraint; update the ALTER
TABLE statement to add the column as "INTEGER NOT NULL DEFAULT 0" so it matches
the model's nullable=False, and after adding the column (or as a safety step)
run an UPDATE to set any NULL regression_count to 0 before committing to ensure
existing rows conform to the NOT NULL constraint.
In `@server/services/expand_chat_session.py`:
- Around line 391-458: The _create_features_bulk async method currently runs
blocking synchronous SQLAlchemy ORM calls (create_database, session.query,
session.add, session.flush, session.commit) inside the event loop; move the
entire DB work into a synchronous helper executed via asyncio.to_thread to avoid
blocking. Refactor by extracting the DB logic that references Feature,
create_database, SessionLocal, session.query/flush/commit/rollback/close and the
loop that builds db_feature objects into a regular sync function (e.g.,
_create_features_bulk_sync) and call it from _create_features_bulk using await
asyncio.to_thread(...), ensuring exceptions are propagated and IDs are returned
to the async caller. Make sure to keep session lifecycle (commit/rollback/close)
inside the sync helper so no blocking DB calls happen in the async context.
```python
def _migrate_add_regression_count_column(engine) -> None:
    """Add regression_count column to existing databases that don't have it.

    This column tracks how many times each feature has been regression tested,
    enabling least-tested-first selection for even test distribution.
    """
    with engine.connect() as conn:
        # Check if column exists
        result = conn.execute(text("PRAGMA table_info(features)"))
        columns = [row[1] for row in result.fetchall()]

        if "regression_count" not in columns:
            conn.execute(text("ALTER TABLE features ADD COLUMN regression_count INTEGER DEFAULT 0"))
            conn.commit()
```
Align the migration with the model’s NOT NULL constraint.
Line 274 adds the column without NOT NULL, which diverges from nullable=False and allows NULLs in existing DBs. Consider matching the model constraint.
🔧 Suggested migration tweak

```diff
-        conn.execute(text("ALTER TABLE features ADD COLUMN regression_count INTEGER DEFAULT 0"))
+        conn.execute(text("ALTER TABLE features ADD COLUMN regression_count INTEGER NOT NULL DEFAULT 0"))
```
```python
async def _create_features_bulk(self, features: list[dict]) -> list[dict]:
    """
    Create features directly in the database.

    Args:
        features: List of feature dictionaries with category, name, description, steps

    Returns:
        List of created feature dictionaries with IDs

    Note:
        Uses flush() to get IDs immediately without re-querying by priority range,
        which could pick up rows from concurrent writers.
    """
    # Import database classes
    import sys
    root = Path(__file__).parent.parent.parent
    if str(root) not in sys.path:
        sys.path.insert(0, str(root))

    from api.database import Feature, create_database

    # Get database session
    _, SessionLocal = create_database(self.project_dir)
    session = SessionLocal()

    try:
        # Determine starting priority
        max_priority_feature = session.query(Feature).order_by(Feature.priority.desc()).first()
        current_priority = (max_priority_feature.priority + 1) if max_priority_feature else 1

        created_rows: list = []

        for f in features:
            db_feature = Feature(
                priority=current_priority,
                category=f.get("category", "functional"),
                name=f.get("name", "Unnamed feature"),
                description=f.get("description", ""),
                steps=f.get("steps", []),
                passes=False,
                in_progress=False,
            )
            session.add(db_feature)
            created_rows.append(db_feature)
            current_priority += 1

        # Flush to get IDs without relying on priority range query
        session.flush()

        # Build result from the flushed objects (IDs are now populated)
        created_features = [
            {
                "id": db_feature.id,
                "name": db_feature.name,
                "category": db_feature.category,
            }
            for db_feature in created_rows
        ]

        session.commit()
        return created_features

    except Exception:
        session.rollback()
        raise
    finally:
        session.close()
```
🧩 Analysis chain

Verification scripts run against the repository (leonvanzyl/autocoder):

```shell
# Locate the service and examine _create_features_bulk in context
find . -name "expand_chat_session.py" -type f | head -20
head -450 server/services/expand_chat_session.py | tail -100
rg "_create_features_bulk" --context 5

# Inspect the database layer and engine setup
find . -name "database.py" -o -name "db.py" | head -10
rg "def create_database" --context 10
cat -n api/database.py | head -150
sed -n '200,300p' api/database.py

# Check whether async DB patterns are used anywhere in the project
rg "from sqlalchemy" --type py | head -30
rg "asyncio\.to_thread|to_thread" --type py
rg "async def.*:" server/services/expand_chat_session.py | head -20
rg "async def.*\(" server/services/expand_chat_session.py -A 20 | grep -E "session\.|create_database" | head -20
```
Move synchronous database operations out of async context to prevent event loop blocking.
This async method performs blocking I/O (query, flush, commit) with SQLAlchemy's synchronous ORM. Under concurrent load, these operations will stall the event loop. Use asyncio.to_thread() to offload the database work to a thread pool, allowing other tasks to progress.
🛠️ Example: offload sync DB work to a thread

```diff
-    async def _create_features_bulk(self, features: list[dict]) -> list[dict]:
+    async def _create_features_bulk(self, features: list[dict]) -> list[dict]:
+        return await asyncio.to_thread(self._create_features_bulk_sync, features)
+
+    def _create_features_bulk_sync(self, features: list[dict]) -> list[dict]:
         """
         Create features directly in the database.
         """
         # Import database classes
         import sys
         root = Path(__file__).parent.parent.parent
```
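The pattern can be illustrated with a small runnable example; `_create_features_bulk_sync` here is a stand-in that simulates the blocking session work rather than the PR's real helper:

```python
import asyncio
import time

def _create_features_bulk_sync(features: list[dict]) -> list[dict]:
    """Stand-in for the synchronous DB helper: blocking work happens here.

    time.sleep simulates the blocking query/flush/commit calls; the real
    helper would own the full session lifecycle (commit/rollback/close).
    """
    time.sleep(0.05)
    return [{"id": i + 1, "name": f.get("name", "Unnamed")} for i, f in enumerate(features)]

async def create_features_bulk(features: list[dict]) -> list[dict]:
    # Offload the blocking helper so the event loop keeps serving other tasks.
    return await asyncio.to_thread(_create_features_bulk_sync, features)

if __name__ == "__main__":
    created = asyncio.run(create_features_bulk([{"name": "Login"}, {"name": "Search"}]))
    print(created)
```

Exceptions raised inside the helper propagate through `asyncio.to_thread` to the awaiting caller, so the async method's error handling stays unchanged.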
Problem

When adding features to existing projects via the "Add Features" / expand project flow, Claude reports it "could not access the MCP tool required to add features" and asks users to add them manually.

Root cause: The ExpandChatSession class does NOT configure an MCP server, while AssistantChatSession does. This means Claude cannot use feature_create_bulk or feature_create MCP tools during project expansion.

Solution

Added MCP server configuration to ExpandChatSession, matching the pattern used in AssistantChatSession:

- EXPAND_FEATURE_TOOLS constant for feature_create, feature_create_bulk, and feature_get_stats
- MCP server configuration pointing to mcp_server.feature_mcp
- Updated allowed_tools and security settings to include feature tools

Testing
Notes

- The existing XML parsing fallback (<features_to_create> tags) continues to work as defense-in-depth if MCP fails

Summary by CodeRabbit
New Features
Chores
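The configuration described in the Solution section might look roughly like this; the `mcp__features__` tool-name prefix, the `features` server key, the env var, and the option names are all hypothetical, since the actual Claude Agent SDK option shape isn't shown in this thread:

```python
# Hypothetical shape of the MCP wiring the PR describes; exact option
# names in the Claude SDK may differ.
EXPAND_FEATURE_TOOLS = [
    "mcp__features__feature_create",
    "mcp__features__feature_create_bulk",
    "mcp__features__feature_get_stats",
]

def build_expand_options(project_dir: str) -> dict:
    """Assemble MCP server + allowed-tool settings for ExpandChatSession."""
    return {
        "mcp_servers": {
            "features": {
                "command": "python",
                "args": ["-m", "mcp_server.feature_mcp"],
                "env": {"PROJECT_DIR": project_dir},  # assumed env var
            }
        },
        "allowed_tools": EXPAND_FEATURE_TOOLS,
    }
```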