
fix: add MCP server config to ExpandChatSession for feature creation #88

Closed
sundog75 wants to merge 4 commits into AutoForgeAI:master from sundog75:fix/expand-feature-mcp-access

Conversation

@sundog75 sundog75 commented Jan 24, 2026

Problem

When adding features to existing projects via the "Add Features" / expand project flow, Claude reports it "could not access the MCP tool required to add features" and asks users to add them manually.

Root cause: The ExpandChatSession class does NOT configure an MCP server, while AssistantChatSession does. This means Claude cannot use feature_create_bulk or feature_create MCP tools during project expansion.

Solution

Added MCP server configuration to ExpandChatSession, matching the pattern used in AssistantChatSession:

  • Added EXPAND_FEATURE_TOOLS constant for feature_create, feature_create_bulk, and feature_get_stats
  • Added MCP server configuration pointing to mcp_server.feature_mcp
  • Updated allowed_tools and security settings to include feature tools
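
For orientation, a minimal sketch of the kind of configuration this adds, following the AssistantChatSession pattern: the tool names come from this PR, while the server-entry keys and the mcp__<server>__<tool> prefix for allowed_tools are assumptions about the Claude SDK wiring, not the exact code in ExpandChatSession.

import sys

# Tool names introduced by this PR (EXPAND_FEATURE_TOOLS).
EXPAND_FEATURE_TOOLS = ["feature_create", "feature_create_bulk", "feature_get_stats"]

# Hypothetical stdio MCP server entry pointing at mcp_server.feature_mcp;
# the real option structure depends on the SDK version the project uses.
feature_mcp_server = {
    "command": sys.executable,
    "args": ["-m", "mcp_server.feature_mcp"],
}

# allowed_tools would then include the MCP-prefixed variants, e.g.:
allowed_tools = [f"mcp__features__{name}" for name in EXPAND_FEATURE_TOOLS]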

Testing

  1. Start the UI
  2. Select an existing project with features
  3. Click "Add Features" or use the expand feature flow
  4. Ask Claude to add a new feature
  5. Verify feature appears in the Kanban board (no manual addition required)

Notes

Summary by CodeRabbit

  • New Features

    • Chat sessions now detect and import feature-definition blocks from assistant responses, create features in bulk, and emit a features_created event with created IDs.
    • Added an endpoint/tool to fetch passing features for regression workflows with automatic regression-count updates.
  • Chores

    • Added regression tracking fields (regression_count, testing_in_progress, last_tested_at) and a migration to backfill databases.



coderabbitai bot commented Jan 24, 2026

📝 Walkthrough

ExpandChatSession no longer uses an MCP tool for creating features: it parses <features_to_create> JSON blocks from Claude streaming responses, creates features directly in the database in bulk (assigning incremental priorities and flushing to obtain IDs), tracks created feature IDs/count, and emits features_created events. Feature model gains regression_count, testing_in_progress, and last_tested_at columns; MCP gains feature_get_for_regression to fetch/increment passing features.
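
As a rough illustration of the parsing step described above (a sketch under stated assumptions: the helper name and regex are illustrative, the real logic lives in server/services/expand_chat_session.py):

import json
import re

# Hypothetical helper mirroring the described behaviour: find <features_to_create>
# blocks in the accumulated response, deduplicate them, and tolerate malformed JSON.
FEATURE_BLOCK_RE = re.compile(r"<features_to_create>\s*(.*?)\s*</features_to_create>", re.DOTALL)

def extract_feature_blocks(full_response: str) -> list[dict]:
    seen: set[str] = set()
    features: list[dict] = []
    for block in FEATURE_BLOCK_RE.findall(full_response):
        if block in seen:  # the same block may reappear as chunks accumulate
            continue
        seen.add(block)
        try:
            parsed = json.loads(block)
        except json.JSONDecodeError:
            continue  # robust JSON error handling: skip malformed blocks
        features.extend(parsed if isinstance(parsed, list) else [parsed])
    return features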

Changes

  • Expand chat session service (server/services/expand_chat_session.py): Reworked session to parse <features_to_create> JSON blocks from Claude streamed text, accumulate full_response, deduplicate detected feature blocks, add _create_features_bulk(features) to insert features directly into the DB (determine start priority, session.flush() to get IDs), track created feature IDs/count, emit a features_created event, and add robust JSON error handling. Added re import and adjusted session lifecycle/error paths.
  • Database — Feature model & migration (api/database.py): Added Feature columns regression_count (Integer, default=0, indexed), testing_in_progress (Boolean, default=False), and last_tested_at (DateTime, nullable). Updated to_dict() to include regression_count (default 0) and added a migration helper to add regression_count to existing DBs, invoked during DB init.
  • MCP server — regression tool (mcp_server/feature_mcp.py): Added public tool feature_get_for_regression(limit: Annotated[int, Field(...)] = 3) -> str that selects passing features ordered by regression_count, acquires row locks, increments regression_count, commits/refreshes, and returns a JSON payload; removed the RegressionInput model.
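
A minimal sketch of the selection-and-increment behaviour described for feature_get_for_regression, assuming the Feature model and create_database from api/database.py; the actual MCP tool wraps this logic and returns a JSON string, and with_for_update() only takes effect on backends that support row locks:

import json

from api.database import Feature, create_database

# Sketch only -- not the actual tool body in mcp_server/feature_mcp.py.
def pick_features_for_regression(project_dir, limit: int = 3) -> str:
    _, SessionLocal = create_database(project_dir)
    session = SessionLocal()
    try:
        rows = (
            session.query(Feature)
            .filter(Feature.passes == True)  # passing features only
            .order_by(Feature.regression_count.asc(), Feature.id.asc())
            .with_for_update()  # lock rows so concurrent callers don't lose increments
            .limit(limit)
            .all()
        )
        for row in rows:
            row.regression_count += 1  # least-tested-first rotation
        session.commit()
        for row in rows:
            session.refresh(row)
        return json.dumps(
            [{"id": r.id, "name": r.name, "regression_count": r.regression_count} for r in rows]
        )
    finally:
        session.close()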

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant ExpandService as "ExpandChatSession"
    participant ClaudeClient as "ClaudeSDKClient"
    participant DB as Database

    Client->>ExpandService: request expand session
    ExpandService->>ClaudeClient: stream prompt (allow tools disabled for feature creation)
    ClaudeClient-->>ExpandService: stream text chunks (may contain <features_to_create> blocks)
    ExpandService->>ExpandService: accumulate full_response, detect/deduplicate feature blocks
    ExpandService->>DB: call _create_features_bulk(features) (determine start priority, insert, flush)
    DB-->>ExpandService: return created feature IDs/metadata
    ExpandService->>Client: emit features_created event and return expanded session
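
For reference, the features_created event in the diagram carries the created IDs and count. A hypothetical payload shape (field names follow the walkthrough's wording, not the actual implementation):

# Hypothetical event payload; the actual emission mechanism lives in ExpandChatSession.
event = {
    "type": "features_created",
    "feature_ids": [41, 42, 43],  # IDs obtained after session.flush()
    "count": 3,
}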

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes


Poem

🐰 I parsed the stream with careful paws,

found JSON blooms and ignored their flaws,
I planted features in rows so neat,
flushed their IDs and gave a little beat,
a rabbit cheers for every created feat.

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Title check (⚠️ Warning): The PR title claims to 'add MCP server config', but the raw summary shows the main changes involve reworking feature creation to parse JSON blocks instead of relying on MCP tools, plus database schema changes for regression tracking. Resolution: revise the title to reflect the primary changes, such as 'refactor: use JSON block parsing for feature creation instead of MCP tools', or mention the database regression-tracking additions.

✅ Passed checks (2 passed)

  • Description Check (✅ Passed): Check skipped; CodeRabbit’s high-level summary is enabled.
  • Docstring Coverage (✅ Passed): Docstring coverage is 100.00%, which meets the required threshold of 80.00%.


@sundog75 sundog75 force-pushed the fix/expand-feature-mcp-access branch from cff1808 to 32ec92e on January 24, 2026 at 01:52

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@api/database.py`:
- Around line 61-66: The migration function _migrate_add_testing_columns is
currently a no-op but must add the deprecated columns for DBs created after
their removal; update _migrate_add_testing_columns to check the target table
schema (e.g., via inspector or PRAGMA) for the absence of testing_in_progress
and last_tested_at and, if missing, execute DDL to ALTER TABLE and add
testing_in_progress BOOLEAN NOT NULL DEFAULT FALSE and last_tested_at DATETIME
NULL (or equivalent for SQLite), ensuring the column definitions match the
SQLAlchemy model defaults and nullability and that the operation is safe (wrap
in try/except and commit/rollback as in existing migration patterns). Ensure the
function references the same table name used by the model and handles both
SQLite and other RDBMS dialect nuances when issuing ALTER statements.

Comment on lines +61 to +66
# DEPRECATED: Kept for backward compatibility with existing databases.
# These columns are no longer used but prevent NOT NULL violations on insert.
# Removed in commit 486979c but old databases still have them.
testing_in_progress = Column(Boolean, nullable=False, default=False)
last_tested_at = Column(DateTime, nullable=True, default=None)


⚠️ Potential issue | 🟠 Major

Migration gap: databases created after column removal won't have these columns.

The columns are added to the model, but _migrate_add_testing_columns() (lines 241-253) is a no-op. This creates a problem for databases that were created after the columns were removed in commit 486979c but before this PR—they won't have these columns, and INSERT operations will fail.

Consider updating _migrate_add_testing_columns to actually add the columns if missing:

Proposed fix for _migrate_add_testing_columns
 def _migrate_add_testing_columns(engine) -> None:
-    """Legacy migration - no longer adds testing columns.
-
-    The testing_in_progress and last_tested_at columns were removed from the
-    Feature model as part of simplifying the testing agent architecture.
-    Multiple testing agents can now test the same feature concurrently
-    without coordination.
-
-    This function is kept for backwards compatibility but does nothing.
-    Existing databases with these columns will continue to work - the columns
-    are simply ignored.
-    """
-    pass
+    """Add testing columns for backward compatibility.
+
+    These columns are deprecated but required in the model so SQLAlchemy
+    populates them on INSERT (preventing NOT NULL violations in old databases).
+    """
+    with engine.connect() as conn:
+        result = conn.execute(text("PRAGMA table_info(features)"))
+        columns = [row[1] for row in result.fetchall()]
+
+        if "testing_in_progress" not in columns:
+            conn.execute(text(
+                "ALTER TABLE features ADD COLUMN testing_in_progress BOOLEAN DEFAULT 0"
+            ))
+        if "last_tested_at" not in columns:
+            conn.execute(text(
+                "ALTER TABLE features ADD COLUMN last_tested_at DATETIME DEFAULT NULL"
+            ))
+        conn.commit()

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@mcp_server/feature_mcp.py`:
- Around line 304-352: The feature_get_for_regression function currently reads
Feature.regression_count and then updates it in-memory, which allows lost
increments under concurrent requests; change the selection/update to perform
row-level locking or an atomic update: use
session.query(Feature).with_for_update().filter(Feature.passes ==
True).order_by(Feature.regression_count.asc(), Feature.id.asc()).limit(limit) to
lock rows before incrementing regression_count, then increment and commit (or
perform an UPDATE ... RETURNING to atomically increment regression_count and
fetch the rows), ensure session.commit() and session.refresh(...) are used after
the locked/atomic update so returned features reflect the new counts.
♻️ Duplicate comments (1)
api/database.py (1)

61-65: Missing migration for deprecated testing columns can break inserts.

Adding testing_in_progress / last_tested_at to the model means SQLAlchemy will include them in INSERTs; databases created after their removal but before this change won’t have those columns and will error. _migrate_add_testing_columns() is still a no‑op, so this gap persists.

🛠️ Suggested fix (in _migrate_add_testing_columns)
 def _migrate_add_testing_columns(engine) -> None:
-    """Legacy migration - no longer adds testing columns.
-
-    The testing_in_progress and last_tested_at columns were removed from the
-    Feature model as part of simplifying the testing agent architecture.
-    Multiple testing agents can now test the same feature concurrently
-    without coordination.
-
-    This function is kept for backwards compatibility but does nothing.
-    Existing databases with these columns will continue to work - the columns
-    are simply ignored.
-    """
-    pass
+    """Add testing columns if missing (backward compatibility)."""
+    with engine.connect() as conn:
+        result = conn.execute(text("PRAGMA table_info(features)"))
+        columns = [row[1] for row in result.fetchall()]
+        if "testing_in_progress" not in columns:
+            conn.execute(text(
+                "ALTER TABLE features ADD COLUMN testing_in_progress BOOLEAN DEFAULT 0"
+            ))
+        if "last_tested_at" not in columns:
+            conn.execute(text(
+                "ALTER TABLE features ADD COLUMN last_tested_at DATETIME DEFAULT NULL"
+            ))
+        conn.commit()


pandel commented Jan 24, 2026

As you referenced my issue, I tested this PR:

  • adding features via the AI chat dialog now works
  • permissions to use WebFetch or WebSearch are still not available

rudiheydra added a commit to rudiheydra/AutoBuildr that referenced this pull request Jan 27, 2026
…toForgeAI#88)

- Created tests/test_validate_dependency_graph_complex_cycles.py (19 tests)
- Verified complex cycles (A->B->C->A) are detected with full path [1, 2, 3]
- Verified missing dependencies are also detected alongside cycles
- All cycle issues correctly marked as auto_fixable=False (requires user action)
- Updated claude-progress.txt with session notes

Feature AutoForgeAI#88 marked as PASSING.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
getworken pushed a commit to getworken/autocoder that referenced this pull request Jan 27, 2026
SUMMARY:
Fixed TypeScript build error in ProjectSetupRequired.tsx where startAgent
was being called with a boolean instead of an options object.

DETAILS:
- The startAgent API function signature was updated (in previous PR merges)
  to accept an options object: { yoloMode?, parallelMode?, maxConcurrency?, testingAgentRatio? }
- ProjectSetupRequired.tsx was still calling it with the old signature:
  startAgent(projectName, yoloMode) - passing boolean directly
- Changed to: startAgent(projectName, { yoloMode }) - wrapping in options object

This was the only remaining build error after merging 13+ PRs from upstream:
- PR AutoForgeAI#112: Security vulnerabilities and race conditions
- PR AutoForgeAI#89: Windows subprocess blocking fix
- PR AutoForgeAI#109: Rate limit handling with exponential backoff
- PR AutoForgeAI#88: MCP server config for ExpandChatSession
- PR AutoForgeAI#100: Diagnostic warnings for config loading
- PR AutoForgeAI#110: Quality gates (quality_gates.py)
- PR AutoForgeAI#113: Structured logging (structured_logging.py)
- PR AutoForgeAI#48: Knowledge files support (API, schemas, prompts)
- PR AutoForgeAI#29: Feature editing/deletion (MCP tools)
- PR AutoForgeAI#45: Chat persistence
- PR AutoForgeAI#52: Refactoring feature guidance
- PR AutoForgeAI#4: Project reset functionality
- PR AutoForgeAI#95: UI polish, health checks, cross-platform fixes

Build now passes: npm run build succeeds with all 2245 modules transformed.
@leonvanzyl (Collaborator)

Please could you resolve the conflicts

getworken pushed a commit to getworken/autocoder that referenced this pull request Jan 29, 2026
@leonvanzyl (Collaborator)

Please resolve the conflict. Thanks!

@leonvanzyl (Collaborator)

Please continue to address the conflict. Thank you.

sundog75 and others added 4 commits January 30, 2026 10:03
When adding features to existing projects, the ExpandChatSession was
unable to use feature creation MCP tools because it lacked the MCP
server configuration that AssistantChatSession has.

This commit adds:
- EXPAND_FEATURE_TOOLS constant for feature_create, feature_create_bulk,
  and feature_get_stats MCP tools
- MCP server configuration pointing to mcp_server.feature_mcp
- Updated allowed_tools and security settings to include feature tools

The existing XML parsing fallback continues to work as defense-in-depth
if MCP fails.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Existing databases have testing_in_progress (NOT NULL) and last_tested_at
columns from before commit 486979c. Re-add them to the Feature model with
defaults to prevent NOT NULL violations when creating features via MCP.

The columns are no longer used by the application but must exist in the
model so SQLAlchemy populates them on INSERT.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implements the regression_count column and feature_get_for_regression MCP tool
to ensure even distribution of regression testing across all passing features.

Changes:
- Add regression_count column to Feature model with migration
- Add feature_get_for_regression MCP tool that:
  - Returns passing features ordered by regression_count (ascending)
  - Increments count after selection for round-robin behavior
  - Prevents duplicate testing of same features
- Remove unused RegressionInput class

Based on PR AutoForgeAI#47 by connor-tyndall, cleanly reimplemented to avoid merge conflicts.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Use with_for_update() to acquire locks before reading features in
feature_get_for_regression. This prevents concurrent requests from
both selecting the same features and losing increment updates.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@sundog75 sundog75 force-pushed the fix/expand-feature-mcp-access branch from 5b1bbb9 to 5afd57f on January 30, 2026 at 10:06
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
server/services/expand_chat_session.py (1)

320-389: ⚠️ Potential issue | 🟡 Minor

Remove unused MCP tool declarations or implement proper tool handling.

The EXPAND_FEATURE_TOOLS are registered with the Claude client but never processed—the receive_response loop only handles TextBlock from AssistantMessage, not ToolUseBlock or ToolResultBlock. This means if Claude calls feature_create_bulk via MCP, the tool invocation is silently ignored. Either remove the tool declarations if the fallback <features_to_create> parser is the intended path, or add tool handling (like in spec_chat_session.py and assistant_chat_session.py) to actually use the MCP tools.
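
A rough sketch of the kind of handling this comment asks for, assuming the block types it names (AssistantMessage, TextBlock, ToolUseBlock) are importable from the Claude SDK as in the other session classes; illustrative only, not the project's code:

from claude_agent_sdk import AssistantMessage, TextBlock, ToolUseBlock  # assumed import path

async def stream_response(client, on_text, on_tool_use):
    # Accumulate assistant text, but surface tool calls instead of silently dropping them.
    full_response = ""
    async for message in client.receive_response():
        if not isinstance(message, AssistantMessage):
            continue
        for block in message.content:
            if isinstance(block, TextBlock):
                full_response += block.text
                await on_text(block.text)
            elif isinstance(block, ToolUseBlock):
                await on_tool_use(block.name, block.input)  # e.g. feature_create_bulk
    return full_response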

🤖 Fix all issues with AI agents
In `@api/database.py`:
- Around line 262-275: The migration _migrate_add_regression_count_column
currently adds regression_count without a NOT NULL constraint; update the ALTER
TABLE statement to add the column as "INTEGER NOT NULL DEFAULT 0" so it matches
the model's nullable=False, and after adding the column (or as a safety step)
run an UPDATE to set any NULL regression_count to 0 before committing to ensure
existing rows conform to the NOT NULL constraint.

In `@server/services/expand_chat_session.py`:
- Around line 391-458: The _create_features_bulk async method currently runs
blocking synchronous SQLAlchemy ORM calls (create_database, session.query,
session.add, session.flush, session.commit) inside the event loop; move the
entire DB work into a synchronous helper executed via asyncio.to_thread to avoid
blocking. Refactor by extracting the DB logic that references Feature,
create_database, SessionLocal, session.query/flush/commit/rollback/close and the
loop that builds db_feature objects into a regular sync function (e.g.,
_create_features_bulk_sync) and call it from _create_features_bulk using await
asyncio.to_thread(...), ensuring exceptions are propagated and IDs are returned
to the async caller. Make sure to keep session lifecycle (commit/rollback/close)
inside the sync helper so no blocking DB calls happen in the async context.

Comment on lines +262 to +275
def _migrate_add_regression_count_column(engine) -> None:
    """Add regression_count column to existing databases that don't have it.

    This column tracks how many times each feature has been regression tested,
    enabling least-tested-first selection for even test distribution.
    """
    with engine.connect() as conn:
        # Check if column exists
        result = conn.execute(text("PRAGMA table_info(features)"))
        columns = [row[1] for row in result.fetchall()]

        if "regression_count" not in columns:
            conn.execute(text("ALTER TABLE features ADD COLUMN regression_count INTEGER DEFAULT 0"))
            conn.commit()

⚠️ Potential issue | 🟡 Minor

Align the migration with the model’s NOT NULL constraint.

Line 274 adds the column without NOT NULL, which diverges from nullable=False and allows NULLs in existing DBs. Consider matching the model constraint.

🔧 Suggested migration tweak
-            conn.execute(text("ALTER TABLE features ADD COLUMN regression_count INTEGER DEFAULT 0"))
+            conn.execute(text("ALTER TABLE features ADD COLUMN regression_count INTEGER NOT NULL DEFAULT 0"))

Comment on lines +391 to +458
async def _create_features_bulk(self, features: list[dict]) -> list[dict]:
    """
    Create features directly in the database.

    Args:
        features: List of feature dictionaries with category, name, description, steps

    Returns:
        List of created feature dictionaries with IDs

    Note:
        Uses flush() to get IDs immediately without re-querying by priority range,
        which could pick up rows from concurrent writers.
    """
    # Import database classes
    import sys
    root = Path(__file__).parent.parent.parent
    if str(root) not in sys.path:
        sys.path.insert(0, str(root))

    from api.database import Feature, create_database

    # Get database session
    _, SessionLocal = create_database(self.project_dir)
    session = SessionLocal()

    try:
        # Determine starting priority
        max_priority_feature = session.query(Feature).order_by(Feature.priority.desc()).first()
        current_priority = (max_priority_feature.priority + 1) if max_priority_feature else 1

        created_rows: list = []

        for f in features:
            db_feature = Feature(
                priority=current_priority,
                category=f.get("category", "functional"),
                name=f.get("name", "Unnamed feature"),
                description=f.get("description", ""),
                steps=f.get("steps", []),
                passes=False,
                in_progress=False,
            )
            session.add(db_feature)
            created_rows.append(db_feature)
            current_priority += 1

        # Flush to get IDs without relying on priority range query
        session.flush()

        # Build result from the flushed objects (IDs are now populated)
        created_features = [
            {
                "id": db_feature.id,
                "name": db_feature.name,
                "category": db_feature.category,
            }
            for db_feature in created_rows
        ]

        session.commit()
        return created_features

    except Exception:
        session.rollback()
        raise
    finally:
        session.close()

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, let's understand the file structure and examine the method in context
cd /repo
find . -name "expand_chat_session.py" -type f | head -20

Repository: leonvanzyl/autocoder

Length of output: 177


🏁 Script executed:

# Look at the file around the specified lines to understand the context better
head -450 server/services/expand_chat_session.py | tail -100

Repository: leonvanzyl/autocoder

Length of output: 3922


🏁 Script executed:

# Check the full method signature and what calls it
rg "_create_features_bulk" --context 5

Repository: leonvanzyl/autocoder

Length of output: 1744


🏁 Script executed:

# Examine the database setup to understand if async is available
find . -name "database.py" -o -name "db.py" | head -10

Repository: leonvanzyl/autocoder

Length of output: 81


🏁 Script executed:

# Check what create_database does
rg "def create_database" --context 10

Repository: leonvanzyl/autocoder

Length of output: 1047


🏁 Script executed:

# Examine the database.py file to understand engine setup
cat -n api/database.py | head -150

Repository: leonvanzyl/autocoder

Length of output: 6849


🏁 Script executed:

# Search for SQLAlchemy imports to see if async is used anywhere
rg "from sqlalchemy" --type py | head -30

Repository: leonvanzyl/autocoder

Length of output: 1079


🏁 Script executed:

# Check if asyncio.to_thread is used anywhere in the project
rg "asyncio\.to_thread|to_thread" --type py

Repository: leonvanzyl/autocoder

Length of output: 46


🏁 Script executed:

# Look for other async methods that do DB operations
rg "async def.*:" server/services/expand_chat_session.py | head -20

Repository: leonvanzyl/autocoder

Length of output: 529


🏁 Script executed:

# Check if there are other similar async methods doing sync DB operations
rg "async def.*\(" server/services/expand_chat_session.py -A 20 | grep -E "session\.|create_database" | head -20

Repository: leonvanzyl/autocoder

Length of output: 364


🏁 Script executed:

# See the full create_database implementation to understand the engine
sed -n '200,300p' api/database.py

Repository: leonvanzyl/autocoder

Length of output: 3908


Move synchronous database operations out of async context to prevent event loop blocking.

This async method performs blocking I/O (query, flush, commit) with SQLAlchemy's synchronous ORM. Under concurrent load, these operations will stall the event loop. Use asyncio.to_thread() to offload the database work to a thread pool, allowing other tasks to progress.

🛠️ Example: offload sync DB work to a thread
-    async def _create_features_bulk(self, features: list[dict]) -> list[dict]:
+    async def _create_features_bulk(self, features: list[dict]) -> list[dict]:
+        return await asyncio.to_thread(self._create_features_bulk_sync, features)
+
+    def _create_features_bulk_sync(self, features: list[dict]) -> list[dict]:
         """
         Create features directly in the database.
         """
         # Import database classes
         import sys
         root = Path(__file__).parent.parent.parent

@sundog75 sundog75 closed this Jan 30, 2026
@sundog75 sundog75 deleted the fix/expand-feature-mcp-access branch January 30, 2026 10:11