
Implement Codex 5.3 suggestions #9

@aviyashchin

Description


DeepWiki Q&A with Code Context for Repository: Subconscious-ai/ghostshell

Q1

Thoughts on this repository and how we can improve it? What tasks would you give a Codex 5.3 model to improve this repo for clarity, ease of engineering development, documentation, testing, and coding best practices? Act like you're Linus Torvalds.

Answer

Alright, let's cut through the bullshit and talk about what's actually wrong with this codebase. It's not terrible, but it's got that "startup MVP that grew too fast" smell all over it.

The Good Parts (Yes, There Are Some)

The core architecture is actually decent. The separation between local stdio mode and remote SSE mode with shared handlers is smart. ghostshell:1-30 The retry logic with exponential backoff is properly implemented. ghostshell:23-66 And you've got a custom exception hierarchy that actually makes sense. ghostshell:4-48

The Problems (Oh Boy, There Are Many)

1. YOUR BUILD IS BROKEN

Your CI/CD workflow references requirements.txt THREE TIMES, but the file doesn't exist. ghostshell:24-24 ghostshell:45-45 ghostshell:67-67 Your README also tells people to install from a non-existent file. ghostshell:64-64 You're using pyproject.toml for dependencies ghostshell:21-27 but your tooling doesn't know it. This is unacceptable. The build should never reference files that don't exist.

2. MASSIVE CODE DUPLICATION

You have the SAME EXACT tool implementations in two places. Look at this garbage: the entire api/index.py file from lines 125-462 is basically duplicating what's in server/tools/_core/handlers.py. ghostshell:125-303 The logic for create_experiment, check_causality, etc. is repeated. DRY is not just a suggestion, it's a requirement. This makes bugs twice as likely and fixes twice as hard.

3. YOUR README IS LYING

The clone command references the wrong repository name: git clone https://github.com/Subconscious-ai/subconscious-ai-mcp.git but the actual repo is ghostshell. ghostshell:56-57 Either fix the README or rename the repo. Pick one and stick with it.

4. LICENSE CONFUSION

Your README badge says "Proprietary" ghostshell:3-3 but your pyproject.toml says "MIT License". ghostshell:7-7 Which is it? Legal ambiguity is not cute.

5. TESTING IS A JOKE

Your test suite is pathetic. ghostshell:20-51 You're just testing that tool definitions exist. Where are the actual handler tests? Where are the integration tests? Where are the tests that verify error handling actually works? The CI runs pytest but there's barely anything to test.

6. CONFIGURATION DUPLICATION

You have configuration logic duplicated between server/config.py and api/index.py. ghostshell:20-82 ghostshell:46-71 The CORS configuration is copy-pasted. The API_BASE_URL is set in both places. This is maintenance hell waiting to happen.

7. WEAK TYPE SAFETY

Your type hints are inconsistent. Some places use proper types, others use dict or Dict[str, Any] everywhere. ghostshell:109-149 The return type of call_tool is list[TextContent] but you're constructing them inline without proper validation. Python's type system isn't perfect, but you should at least pretend to use it consistently.

8. ERROR HANDLING IS HALF-ASSED

The local server's call_tool function has this beautiful try/except that just returns a text string. ghostshell:139-149 No structured error logging, no error codes, just a generic "Error executing tool". Meanwhile, your handlers do proper error handling. Pick a pattern and stick with it throughout the stack.

Tasks for Your Codex 5.3 Model (Or Any Competent Engineer)

PRIORITY 1: INFRASTRUCTURE & BUILD

Task 1.1: Generate a proper requirements.txt from pyproject.toml dependencies. Update CI/CD to use pip install -e . or pip install -e ".[dev]" instead of referencing a non-existent file.

Task 1.2: Fix the repository name inconsistency. Either rename the repo to subconscious-ai-mcp or fix all references in README to use ghostshell. Update all clone commands, documentation, and config examples.

Task 1.3: Resolve the license ambiguity. Choose MIT or Proprietary and make it consistent everywhere: README badge, pyproject.toml, LICENSE file, and any other references.

PRIORITY 2: CODE QUALITY & DEDUPLICATION

Task 2.1: Refactor api/index.py to use the shared handlers from server/tools/_core/handlers.py instead of reimplementing everything. The Vercel API should be a thin wrapper (see the sketch after this list) that:

  • Extracts token from request
  • Creates RequestTokenProvider(token)
  • Calls the shared handler
  • Returns the formatted response
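
A minimal sketch of that wrapper shape. The handler signature and the TokenProvider interface are assumptions here; adapt them to the real definitions in server/tools/_core:

```python
# Hypothetical thin wrapper for api/index.py. Handler and TokenProvider
# signatures are assumptions, not the repo's actual API.
from starlette.requests import Request
from starlette.responses import JSONResponse

from server.tools._core import handlers
from server.tools._core.base import TokenProvider


class RequestTokenProvider(TokenProvider):
    """Serves the bearer token extracted from the incoming request."""

    def __init__(self, token: str) -> None:
        self._token = token

    async def get_token(self) -> str:  # assumed TokenProvider interface
        return self._token


async def check_causality_endpoint(request: Request) -> JSONResponse:
    token = extract_token(request)  # existing helper in api/index.py
    if not token:
        return JSONResponse({"error": "Token required"}, status_code=401)
    args = await request.json()
    # Delegate to the single shared implementation; no duplicated logic.
    result = await handlers.handle_check_causality(
        args, token_provider=RequestTokenProvider(token)
    )
    return JSONResponse(result)
```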

Task 2.2: Consolidate configuration. Create a single source of truth for CORS settings, API URLs, and other config. Make api/index.py import from server/config.py or extract a shared config module.
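
A minimal sketch, assuming api/index.py can import the existing server.config module:

```python
# api/index.py: one source of truth instead of copy-pasted constants.
from server.config import config

API_BASE_URL = config.api_base_url
CORS_ALLOWED_ORIGINS = config.cors_allowed_origins
CORS_ORIGIN_REGEX = config.cors_origin_regex
CORS_ALLOW_CREDENTIALS = config.cors_allow_credentials
```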

Task 2.3: Add comprehensive type hints throughout. Define Pydantic models or TypedDicts for all API request/response schemas. No more naked Dict[str, Any] everywhere.
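
For example, one request/response pair could look like this, with field names taken from the check_causality payload cited below (an illustrative sketch, not a prescribed schema):

```python
from pydantic import BaseModel, Field


class CheckCausalityRequest(BaseModel):
    why_prompt: str = Field(min_length=1)
    llm_model: str = "databricks-claude-sonnet-4"


class ToolResponse(BaseModel):
    success: bool
    data: dict | None = None
    error: str | None = None
```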

Task 2.4: Standardize error handling across the entire application. The MCP server's call_tool should return structured errors that match the ToolResult format, not just text strings.
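
One possible shape for the catch-all branch, mirroring the success/error dict the handlers already return, as a JSON payload instead of a bare string (a sketch; the exact ToolResult fields are an assumption):

```python
import json
import logging

from mcp.types import TextContent

logger = logging.getLogger("subconscious-ai")


def _error_content(name: str, exc: Exception) -> list[TextContent]:
    """Structured error payload for call_tool's catch-all branch."""
    logger.exception("Tool %s failed", name)
    payload = {
        "success": False,
        "tool": name,
        "error_type": type(exc).__name__,
        "error": str(exc),
    }
    return [TextContent(type="text", text=json.dumps(payload))]
```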

PRIORITY 3: TESTING

Task 3.1: Write comprehensive unit tests for all 15 handler functions in server/tools/_core/handlers.py. Test success cases, error cases, and edge cases. Mock the HTTP client properly.
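
A sketch of one such test, assuming respx and pytest-asyncio as dev dependencies and the single-argument handler signature used in server/main.py:

```python
import httpx
import pytest
import respx

from server.tools._core import handlers


@pytest.mark.asyncio
@respx.mock
async def test_check_causality_success():
    # Stub the backend endpoint this handler is expected to call.
    route = respx.post("https://api.subconscious.ai/api/v2/copilot/causality").mock(
        return_value=httpx.Response(200, json={"is_causal": True})
    )
    result = await handlers.handle_check_causality({"why_prompt": "Does price drive churn?"})
    assert route.called
    assert result["success"] is True
```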

Task 3.2: Add integration tests that spin up a test server and verify the MCP protocol works end-to-end. Test the SSE endpoint, test tool calls, test authentication failure paths.
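
A smoke-level sketch using Starlette's TestClient; the app export and the /health route path are both assumptions to adjust against the real entry point:

```python
from starlette.testclient import TestClient

from api.index import app  # assumed export


def test_health_endpoint_reports_tools():
    with TestClient(app) as client:
        response = client.get("/health")  # route path is an assumption
        assert response.status_code == 200
        assert response.json()["tools"] > 0
```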

Task 3.3: Add property-based tests using hypothesis for input validation. Your tools take complex nested dictionaries - verify they handle malformed inputs gracefully.
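
A sketch using hypothesis against the api/index.py functions directly. Note that create_experiment builds its payload (including args["why_prompt"]) before its try block, so a missing key escapes as a raw KeyError; a test like this would fail today and flag exactly that:

```python
import asyncio

from hypothesis import given, strategies as st

from api.index import create_experiment


@given(st.dictionaries(st.text(min_size=1), st.one_of(st.none(), st.text(), st.integers())))
def test_create_experiment_handles_malformed_input(args):
    # Backend calls should be stubbed (e.g., with respx) when this runs for real.
    result = asyncio.run(create_experiment("fake-token", args))
    assert isinstance(result, dict)
    assert "success" in result
```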

Task 3.4: Add test coverage reporting. Aim for >80% coverage. Add it to CI/CD so PRs show coverage changes.

PRIORITY 4: DOCUMENTATION

Task 4.1: Add docstrings to EVERY public function. Not just the handlers, but also the helper functions. Explain what each parameter does, what it returns, what exceptions it raises. Use proper Google or NumPy docstring format.
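
For instance, in Google style (an illustrative docstring, not the current one):

```python
def handle_get_amce_data(arguments: dict) -> dict:
    """Fetch processed AMCE results for a completed run.

    Args:
        arguments: Tool arguments; must contain ``run_id``.

    Returns:
        A dict with ``success`` and either ``data`` or ``error``.

    Raises:
        NotFoundError: If the run does not exist (HTTP 404).
        AuthenticationError: If the token is invalid or expired (HTTP 401).
    """
```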

Task 4.2: Create an ARCHITECTURE.md that explains the codebase structure better than your current CLAUDE.md. Include diagrams (ASCII art is fine, mermaid is better). Explain the data flow from MCP client → server → API backend.

Task 4.3: Add a CONTRIBUTING.md with clear guidelines: how to set up dev environment, how to run tests, how to add new tools, code style requirements, PR process.

Task 4.4: Document the API versioning strategy. You're calling /api/v1/, /api/v2/, and /api/v3/ endpoints. What happens when v1 is deprecated? How do you handle breaking changes?

PRIORITY 5: CODE ORGANIZATION

Task 5.1: The server/main.py file has unnecessary path manipulation at the top. ghostshell:17-18 Fix the Python path setup with a proper package structure or correct relative imports.

Task 5.2: The APIClient class in server/utils/api_client.py is not used anywhere. ghostshell:10-82 Either use it or delete it. Dead code is worse than no code.

Task 5.3: Create proper request/response models using Pydantic for all tool inputs and outputs. This will give you automatic validation, better error messages, and documentation that doesn't lie.

Task 5.4: Add logging consistently. Your handlers log some things but not others. Add structured logging (JSON logs in production) with proper log levels, correlation IDs, and request tracing.
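
A minimal stdlib-only sketch: one JSON object per line, with a correlation id threaded through extra=:

```python
import json
import logging
import uuid


class JsonFormatter(logging.Formatter):
    """Emit each record as a single JSON line."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "correlation_id": getattr(record, "correlation_id", None),
        })


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("subconscious-ai")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("tool call", extra={"correlation_id": str(uuid.uuid4())})
```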

PRIORITY 6: RELIABILITY & OBSERVABILITY

Task 6.1: Add metrics/observability. Track: request count, error rate, latency percentiles, retry counts, authentication failures. Use something like Prometheus or at minimum, log structured metrics.
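
A minimal sketch with prometheus_client (which would need to be added as a dependency; metric names here are illustrative):

```python
from prometheus_client import Counter, Histogram

TOOL_CALLS = Counter("tool_calls_total", "Tool invocations", ["tool", "outcome"])
TOOL_LATENCY = Histogram("tool_latency_seconds", "Tool call latency", ["tool"])

# Inside call_tool, roughly:
#   with TOOL_LATENCY.labels(tool=name).time():
#       result = await handler(arguments)
#   outcome = "ok" if result.get("success") else "error"
#   TOOL_CALLS.labels(tool=name, outcome=outcome).inc()
```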

Task 6.2: Improve the retry logic. The current implementation is decent but lacks jitter. ghostshell:52-56 Add random jitter to avoid thundering herd problems when multiple clients retry at the same time.
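
A sketch of the delay computation with full jitter; the rest of retry.py stays as-is:

```python
import random


def backoff_delay(attempt: int, base_delay: float, exponential: bool = True) -> float:
    """Exponential backoff cap, with the actual delay drawn uniformly ("full jitter")."""
    cap = base_delay * (2**attempt if exponential else 1)
    return random.uniform(0, cap)
```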

Task 6.3: Add request timeouts at the application level. Your HTTP client has timeouts but there's no overall request deadline. Long-running experiments should have progress callbacks or webhook notifications, not just polling.
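
A sketch of an overall deadline wrapper, independent of the HTTP client's per-request timeout (the 600s default is illustrative):

```python
import asyncio


async def call_with_deadline(handler, arguments: dict, deadline_s: float = 600.0) -> dict:
    """Enforce an end-to-end deadline on a single tool call."""
    try:
        return await asyncio.wait_for(handler(arguments), timeout=deadline_s)
    except asyncio.TimeoutError:
        return {"success": False, "error": f"Deadline of {deadline_s:.0f}s exceeded"}
```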

Task 6.4: Add health checks that actually check things. The current health endpoint just returns a static JSON. ghostshell:591-597 Check if you can reach the backend API, check if auth is working, check memory usage.
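
A sketch of a health check that probes the backend. The backend /health path is an assumption; API_BASE_URL, SERVER_NAME, SERVER_VERSION, and TOOLS are the existing module-level constants in api/index.py:

```python
import httpx
from starlette.requests import Request
from starlette.responses import JSONResponse


async def health_check(request: Request) -> JSONResponse:
    backend_ok = True
    try:
        async with httpx.AsyncClient(timeout=5.0) as client:
            resp = await client.get(f"{API_BASE_URL}/health")  # assumed backend route
            backend_ok = resp.status_code < 500
    except httpx.HTTPError:
        backend_ok = False
    return JSONResponse(
        {
            "status": "healthy" if backend_ok else "degraded",
            "server": SERVER_NAME,
            "version": SERVER_VERSION,
            "tools": len(TOOLS),
            "backend_reachable": backend_ok,
        },
        status_code=200 if backend_ok else 503,
    )
```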

PRIORITY 7: SECURITY

Task 7.1: The token is passed as a query parameter in the SSE URL. ghostshell:520-522 This gets logged everywhere. Move to Authorization header only. Deprecate query param tokens properly with warnings.
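
A sketch of the extraction order, preferring the header and logging the deprecated path so clients get a nudge:

```python
import logging

from starlette.requests import Request

logger = logging.getLogger("subconscious-ai")


def extract_token(request: Request) -> str | None:
    auth = request.headers.get("authorization", "")
    if auth.lower().startswith("bearer "):
        return auth[7:].strip()
    token = request.query_params.get("token")
    if token:
        logger.warning("Deprecated: token passed as ?token= query parameter; use the Authorization header")
    return token
```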

Task 7.2: Add rate limiting per token/user. Right now someone can spam your API. Add sensible limits and return proper 429 responses with Retry-After headers.
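
A toy sliding-window sketch keyed by token; production would want Redis or the platform's limiter, and the 120/minute figure is arbitrary:

```python
import time
from collections import defaultdict

WINDOW_S, MAX_REQUESTS = 60.0, 120  # illustrative limits
_hits: dict[str, list[float]] = defaultdict(list)


def check_rate_limit(token: str) -> float | None:
    """Return None if allowed, else seconds to wait (use for the Retry-After header)."""
    now = time.monotonic()
    # Drop timestamps that have aged out of the window.
    hits = _hits[token] = [t for t in _hits[token] if now - t < WINDOW_S]
    if len(hits) >= MAX_REQUESTS:
        return WINDOW_S - (now - hits[0])
    hits.append(now)
    return None
```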

Task 7.3: Validate all inputs. You're accepting arbitrary dictionaries and passing them to the backend. Add input validation, sanitization, and size limits to prevent abuse.

Task 7.4: Add security headers: HSTS, CSP, X-Content-Type-Options, etc. Your Starlette app doesn't set any security headers.
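
A sketch of a small Starlette middleware stamping a conservative baseline; tune the CSP to what the endpoints actually serve:

```python
from starlette.middleware.base import BaseHTTPMiddleware


class SecurityHeadersMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        response = await call_next(request)
        response.headers["Strict-Transport-Security"] = "max-age=63072000; includeSubDomains"
        response.headers["X-Content-Type-Options"] = "nosniff"
        response.headers["Content-Security-Policy"] = "default-src 'none'"
        response.headers["Referrer-Policy"] = "no-referrer"
        return response


# app.add_middleware(SecurityHeadersMiddleware)
```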

Summary

This codebase is functional but sloppy. It works for a prototype but it's not ready for production load. The biggest sins are the code duplication and the broken build system. Fix those first, then improve testing, then tackle the rest.

And for the love of god, stop copy-pasting code. If you find yourself implementing the same logic twice, that's a sign you need to refactor, not duplicate.

Notes:

The repository shows signs of rapid development without sufficient refactoring. The core abstractions (TokenProvider pattern, shared handlers, retry logic) are solid, but they're undermined by inconsistent application, duplication, and weak testing. Most of these issues are straightforward to fix with dedicated effort.

The fact that CLAUDE.md has better architecture documentation than the actual codebase comments is telling. Documentation should describe the code, not replace it.

Citations

File: server/tools/_core/handlers.py (L1-30)

"""Unified tool handlers shared between local and remote modes."""

import logging
import os
from datetime import datetime
from typing import Any, Dict, Optional, cast
from urllib.parse import urlencode

import httpx

from .base import TokenProvider, ToolResult
from .exceptions import (
    AuthenticationError,
    AuthorizationError,
    NetworkError,
    NotFoundError,
    RateLimitError,
    ServerError,
    ValidationError,
)
from .retry import with_retry

logger = logging.getLogger("subconscious-ai")

# Configuration
API_BASE_URL = os.getenv("API_BASE_URL", "https://api.subconscious.ai")
REQUEST_TIMEOUT = 300
MAX_RETRIES = 3
RETRY_DELAY = 1.0

File: server/tools/_core/retry.py (L23-66)

def with_retry(
    max_retries: int = 3,
    base_delay: float = 1.0,
    exponential: bool = True,
) -> Callable[[Callable[P, Awaitable[T]]], Callable[P, Awaitable[T]]]:
    """
    Retry decorator with exponential backoff.

    Args:
        max_retries: Maximum number of retry attempts
        base_delay: Base delay between retries in seconds
        exponential: Whether to use exponential backoff (2^attempt)

    Returns:
        Decorated async function with retry logic
    """

    def decorator(func: Callable[P, Awaitable[T]]) -> Callable[P, Awaitable[T]]:
        @wraps(func)
        async def wrapper(*args: Any, **kwargs: Any) -> T:
            last_exception: Exception | None = None

            for attempt in range(max_retries + 1):
                try:
                    return await func(*args, **kwargs)
                except RETRYABLE_ERRORS as e:
                    last_exception = e

                    if attempt < max_retries:
                        delay = base_delay * (2**attempt if exponential else 1)
                        logger.warning(
                            f"Retry {attempt + 1}/{max_retries} after {delay:.1f}s: {e}"
                        )
                        await asyncio.sleep(delay)
                    else:
                        logger.error(
                            f"Max retries ({max_retries}) exceeded: {e}"
                        )

            # If we get here, all retries failed
            if last_exception is not None:
                raise last_exception
            raise RuntimeError("Unexpected retry state")

File: server/tools/_core/exceptions.py (L4-48)

class SubconsciousError(Exception):
    """Base exception for all Subconscious AI errors."""

    pass


class AuthenticationError(SubconsciousError):
    """Token invalid or expired (HTTP 401)."""

    pass


class AuthorizationError(SubconsciousError):
    """Access denied to resource (HTTP 403)."""

    pass


class NotFoundError(SubconsciousError):
    """Resource not found (HTTP 404)."""

    pass


class ValidationError(SubconsciousError):
    """Invalid request parameters (HTTP 400, 422)."""

    pass


class RateLimitError(SubconsciousError):
    """Rate limit exceeded (HTTP 429)."""

    pass


class ServerError(SubconsciousError):
    """Backend server error (HTTP 5xx)."""

    pass


class NetworkError(SubconsciousError):
    """Network connectivity or timeout issue."""

File: .github/workflows/ci.yml (L24-24)

          pip install -r requirements.txt

File: .github/workflows/ci.yml (L45-45)

          pip install -r requirements.txt

File: .github/workflows/ci.yml (L67-67)

          pip install -r requirements.txt

File: README.md (L3-3)

[![License: Proprietary](https://img.shields.io/badge/License-Proprietary-red.svg)](LICENSE)

File: README.md (L56-57)

git clone https://github.com/Subconscious-ai/subconscious-ai-mcp.git
cd subconscious-ai-mcp

File: README.md (L64-64)

pip install -r requirements.txt

File: pyproject.toml (L7-7)

license = {text = "MIT"}

File: pyproject.toml (L21-27)

dependencies = [
    "mcp>=0.1.0",
    "httpx>=0.25.0",
    "pydantic>=2.0.0",
    "pydantic-settings>=2.0.0",
    "python-dotenv>=1.0.0",
]

File: api/index.py (L46-71)

API_BASE_URL = os.getenv("API_BASE_URL", "https://api.subconscious.ai")
SERVER_NAME = "subconscious-ai"
SERVER_VERSION = "1.0.0"

# CORS Configuration
# Note: Starlette CORSMiddleware doesn't support wildcards in origins list.
# Use allow_origin_regex for pattern matching.
CORS_ORIGINS_ENV = os.getenv("CORS_ALLOWED_ORIGINS", "")
CORS_ALLOW_ALL = os.getenv("CORS_ALLOW_ALL", "").lower() in ("true", "1", "yes")

if CORS_ORIGINS_ENV:
    CORS_ALLOWED_ORIGINS: List[str] = [o.strip() for o in CORS_ORIGINS_ENV.split(",") if o.strip()]
    CORS_ORIGIN_REGEX: Optional[str] = None
elif CORS_ALLOW_ALL:
    CORS_ALLOWED_ORIGINS = ["*"]
    CORS_ORIGIN_REGEX = None
else:
    # Default: explicit production origins + regex for dev/preview
    CORS_ALLOWED_ORIGINS = [
        "https://app.subconscious.ai",
        "https://holodeck.subconscious.ai",
        "https://ghostshell-runi.vercel.app",
    ]
    # Regex to match Vercel preview deployments and localhost
    CORS_ORIGIN_REGEX = r"https://.*\.vercel\.app|http://localhost:\d+|http://127\.0\.0\.1:\d+"

File: api/index.py (L125-303)

async def check_causality(token: str, args: dict) -> dict:
    try:
        response = await api_request("POST", "/api/v2/copilot/causality", token, {
            "why_prompt": args["why_prompt"],
            "llm_model": args.get("llm_model", "databricks-claude-sonnet-4")
        })
        return {"success": True, "data": response}
    except Exception as e:
        return {"success": False, "error": str(e)}


async def generate_attributes_levels(token: str, args: dict) -> dict:
    model_map = {"sonnet": "databricks-claude-sonnet-4", "gpt4": "azure-openai-gpt4"}
    llm_model = model_map.get(args.get("llm_model", "sonnet"), "databricks-claude-sonnet-4")
    try:
        response = await api_request("POST", "/api/v1/attributes-levels-claude", token, {
            "why_prompt": args["why_prompt"],
            "country": args.get("country", "United States"),
            "year": args.get("year", "2024"),
            "attribute_count": args.get("attribute_count", 5),
            "level_count": args.get("level_count", 4),
            "llm_model": llm_model
        })
        attrs = response if isinstance(response, list) else response.get("attributes_levels", [])
        return {"success": True, "data": {"attributes_levels": attrs}}
    except Exception as e:
        return {"success": False, "error": str(e)}


async def create_experiment(token: str, args: dict) -> dict:
    model_map = {"sonnet": "databricks-claude-sonnet-4", "gpt4": "azure-openai-gpt4"}
    llm_model = model_map.get(args.get("expr_llm_model", "sonnet"), "databricks-claude-sonnet-4")
    country = args.get("country", "United States")
    if country == "United States":
        country = "United States of America (USA)"

    payload = {
        "why_prompt": args["why_prompt"],
        "country": country,
        "attribute_count": args.get("attribute_count", 5),
        "level_count": args.get("level_count", 4),
        "is_private": args.get("is_private", False),
        "expr_llm_model": llm_model,
        "experiment_type": "conjoint",
        "confidence_level": args.get("confidence_level", "Low"),
        "year": str(datetime.now().year),
        "target_population": {
            "age": [18, 75],
            "gender": ["Male", "Female"],
            "racial_group": ["White", "African American", "Asian or Pacific Islander", "Mixed race", "Other race"],
            "education_level": ["High School Diploma", "Some College", "Bachelors", "Masters", "PhD"],
            "household_income": [0, 300000],
            "number_of_children": ["0", "1", "2", "3", "4+"]
        },
        "latent_variables": True,
        "add_neither_option": True,
        "binary_choice": False,
        "match_population_distribution": False
    }

    if args.get("pre_cooked_attributes_and_levels_lookup"):
        raw_attrs = args["pre_cooked_attributes_and_levels_lookup"]
        formatted = []
        for item in raw_attrs:
            if isinstance(item, dict):
                formatted.append([item["attribute"], item["levels"]])
            elif isinstance(item, list) and len(item) >= 2:
                formatted.append(item if isinstance(item[1], list) else [item[0], item[1:]])
        payload["pre_cooked_attributes_and_levels_lookup"] = formatted

    try:
        response = await api_request("POST", "/api/v1/experiments", token, payload)
        return {"success": True, "data": response}
    except Exception as e:
        return {"success": False, "error": str(e)}


async def get_experiment_status(token: str, args: dict) -> dict:
    try:
        response = await api_request("GET", f"/api/v1/runs/{args['run_id']}", token)
        return {"success": True, "data": response}
    except Exception as e:
        return {"success": False, "error": str(e)}


async def list_experiments(token: str, args: dict) -> dict:
    try:
        response = await api_request("GET", "/api/v1/runs/all", token)
        runs = response if isinstance(response, list) else response.get("runs", [])
        runs = runs[:args.get("limit", 20)]
        return {"success": True, "data": {"runs": runs, "count": len(runs)}}
    except Exception as e:
        return {"success": False, "error": str(e)}


async def get_experiment_results(token: str, args: dict) -> dict:
    try:
        response = await api_request("GET", f"/api/v1/runs/{args['run_id']}", token)
        return {"success": True, "data": response}
    except Exception as e:
        return {"success": False, "error": str(e)}


async def get_amce_data(token: str, args: dict) -> dict:
    try:
        response = await api_request("GET", f"/api/v3/runs/{args['run_id']}/processed/amce", token)
        return {"success": True, "data": response}
    except Exception as e:
        return {"success": False, "error": str(e)}


async def get_causal_insights(token: str, args: dict) -> dict:
    try:
        response = await api_request("POST", f"/api/v3/runs/{args['run_id']}/generate/causal-sentences", token, {})
        sentences = [item.get("sentence", str(item)) if isinstance(item, dict) else str(item) for item in response] if isinstance(response, list) else []
        return {"success": True, "data": {"causal_statements": sentences}}
    except Exception as e:
        return {"success": False, "error": str(e)}


async def validate_population(token: str, args: dict) -> dict:
    try:
        response = await api_request("POST", "/api/v1/population/validate", token, {
            "country": args.get("country", "United States of America (USA)"),
            "target_population": args.get("target_population", {})
        })
        return {"success": True, "data": response}
    except Exception as e:
        return {"success": False, "error": str(e)}


async def get_population_stats(token: str, args: dict) -> dict:
    try:
        country = args.get("country", "United States of America (USA)")
        response = await api_request("GET", f"/api/v1/population/stats?country={country}", token)
        return {"success": True, "data": response}
    except Exception as e:
        return {"success": False, "error": str(e)}


async def get_run_details(token: str, args: dict) -> dict:
    try:
        response = await api_request("GET", f"/api/v1/runs/{args['run_id']}", token)
        return {"success": True, "data": response}
    except Exception as e:
        return {"success": False, "error": str(e)}


async def get_run_artifacts(token: str, args: dict) -> dict:
    try:
        response = await api_request("GET", f"/api/v3/runs/{args['run_id']}/artifacts", token)
        return {"success": True, "data": response}
    except Exception as e:
        return {"success": False, "error": str(e)}


async def update_run_config(token: str, args: dict) -> dict:
    try:
        response = await api_request("POST", f"/api/v1/runs/{args['run_id']}/config", token, args.get("config", {}))
        return {"success": True, "data": response}
    except Exception as e:
        return {"success": False, "error": str(e)}


async def generate_personas(token: str, args: dict) -> dict:
    try:
        response = await api_request("POST", f"/api/v3/runs/{args['run_id']}/generate/personas", token, {"count": args.get("count", 5)})
        return {"success": True, "data": response}
    except Exception as e:
        return {"success": False, "error": str(e)}


async def get_experiment_personas(token: str, args: dict) -> dict:
    try:
        response = await api_request("GET", f"/api/v3/runs/{args['run_id']}/personas", token)
        return {"success": True, "data": response}
    except Exception as e:
        return {"success": False, "error": str(e)}

File: api/index.py (L520-522)

    token = extract_token(request)
    if not token:
        return JSONResponse({"error": "Token required. Add ?token=YOUR_TOKEN to URL"}, status_code=401)

File: api/index.py (L591-597)

async def health_check(request: Request) -> JSONResponse:
    return JSONResponse({
        "status": "healthy",
        "server": SERVER_NAME,
        "version": SERVER_VERSION,
        "tools": len(TOOLS)
    })

File: tests/test_tools.py (L20-51)

class TestToolDefinitions:
    """Test that tools are properly defined."""

    def test_check_causality_tool_definition(self):
        tool = check_causality_tool()
        assert tool.name == "check_causality"
        assert "why_prompt" in tool.inputSchema["properties"]
        assert "why_prompt" in tool.inputSchema["required"]

    def test_generate_attributes_levels_tool_definition(self):
        tool = generate_attributes_levels_tool()
        assert tool.name == "generate_attributes_levels"
        assert "why_prompt" in tool.inputSchema["required"]
        assert "attribute_count" in tool.inputSchema["properties"]
        assert "level_count" in tool.inputSchema["properties"]

    def test_create_experiment_tool_definition(self):
        tool = create_experiment_tool()
        assert tool.name == "create_experiment"
        assert "why_prompt" in tool.inputSchema["required"]
        assert "country" in tool.inputSchema["properties"]

    def test_get_experiment_status_tool_definition(self):
        tool = get_experiment_status_tool()
        assert tool.name == "get_experiment_status"
        assert "run_id" in tool.inputSchema["required"]

    def test_list_experiments_tool_definition(self):
        tool = list_experiments_tool()
        assert tool.name == "list_experiments"
        assert "limit" in tool.inputSchema["properties"]

File: server/config.py (L20-82)

class MCPConfig:
    """MCP server configuration."""

    auth0_jwt_token: str | None

    def __init__(self):
        # Auth0 Configuration
        self.auth0_domain: str = os.getenv("AUTH0_DOMAIN", "")
        self.auth0_audience: str = os.getenv("AUTH0_AUDIENCE", "")
        # Try M2M client credentials first, then fall back to regular
        self.auth0_client_id: str = (
            os.getenv("SUBCONSCIOUSAI_M2M_CLIENT_ID") or os.getenv("AUTH0_CLIENT_ID", "")
        )
        self.auth0_client_secret: str = (
            os.getenv("SUBCONSCIOUSAI_M2M_CLIENT_SECRET")
            or os.getenv("AUTH0_CLIENT_SECRET", "")
        )
        # Direct JWT token (optional)
        self.auth0_jwt_token = os.getenv("AUTH0_JWT_TOKEN")

        # API Configuration
        self.api_base_url = os.getenv("API_BASE_URL", "https://api.subconscious.ai")

        # Server Configuration
        self.server_name = "subconscious-ai"
        self.server_version = "1.0.0"

        # Timeout Configuration
        self.request_timeout = 300
        self.max_retries = 3
        self.retry_delay = 1.0

        # CORS Configuration
        # Note: Starlette CORSMiddleware doesn't support wildcards in origins list.
        # Use cors_origin_regex for pattern matching.
        cors_origins_env = os.getenv("CORS_ALLOWED_ORIGINS", "")
        cors_allow_all = os.getenv("CORS_ALLOW_ALL", "").lower() in ("true", "1", "yes")

        if cors_origins_env:
            self.cors_allowed_origins: List[str] = [
                origin.strip() for origin in cors_origins_env.split(",") if origin.strip()
            ]
            self.cors_origin_regex: str | None = None
        elif cors_allow_all:
            self.cors_allowed_origins = ["*"]
            self.cors_origin_regex = None
        else:
            # Default: explicit production origins + regex for dev/preview
            self.cors_allowed_origins = [
                "https://app.subconscious.ai",
                "https://holodeck.subconscious.ai",
                "https://ghostshell-runi.vercel.app",
            ]
            # Regex to match Vercel preview deployments and localhost
            self.cors_origin_regex = r"https://.*\.vercel\.app|http://localhost:\d+|http://127\.0\.0\.1:\d+"

        # Note: allow_credentials=True cannot be used with allow_origins=["*"] per CORS spec
        self.cors_allow_credentials = "*" not in self.cors_allowed_origins


# Global configuration instance
config = MCPConfig()

File: server/main.py (L17-18)

# Add parent directory to path for imports
sys.path.insert(0, str(Path(__file__).parent.parent))

File: server/main.py (L109-149)

async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    """Handle tool execution."""
    handlers = {
        # Ideation
        "check_causality": handle_check_causality,
        "generate_attributes_levels": handle_generate_attributes_levels,
        # Population
        "validate_population": handle_validate_population,
        "get_population_stats": handle_get_population_stats,
        # Experiments
        "create_experiment": handle_create_experiment,
        "get_experiment_status": handle_get_experiment_status,
        "get_experiment_results": handle_get_experiment_results,
        "list_experiments": handle_list_experiments,
        # Runs
        "get_run_details": handle_get_run_details,
        "get_run_artifacts": handle_get_run_artifacts,
        "update_run_config": handle_update_run_config,
        # Personas
        "generate_personas": handle_generate_personas,
        "get_experiment_personas": handle_get_experiment_personas,
        # Analytics
        "get_amce_data": handle_get_amce_data,
        "get_causal_insights": handle_get_causal_insights,
    }

    handler = handlers.get(name)
    if not handler:
        return [TextContent(type="text", text=f"Unknown tool: {name}")]

    try:
        result = await handler(arguments)

        if result.get("success"):
            text = f"{result.get('message', 'Success')}\n\n{_format_result(result.get('data', {}))}"
            return [TextContent(type="text", text=text)]
        else:
            text = f"{result.get('message', 'Error')}: {result.get('error', 'Unknown error')}"
            return [TextContent(type="text", text=text)]
    except Exception as e:
        return [TextContent(type="text", text=f"Error executing tool {name}: {str(e)}")]

File: server/utils/api_client.py (L10-82)

class APIClient:
    """HTTP client for interacting with Subconscious AI API."""

    def __init__(self, base_url: Optional[str] = None):
        """Initialize API client.

        Args:
            base_url: Optional API base URL override
        """
        self.base_url = (base_url or config.api_base_url).rstrip("/")
        self.timeout = httpx.Timeout(config.request_timeout)
        self._token: Optional[str] = None

    def _get_headers(self) -> Dict[str, str]:
        """Get request headers with authentication."""
        if not self._token:
            self._token = get_auth_token()

        return {
            "Authorization": f"Bearer {self._token}",
            "Content-Type": "application/json",
        }

    async def _request(
        self,
        method: str,
        endpoint: str,
        **kwargs
    ) -> Dict[str, Any]:
        """Make HTTP request to API.

        Args:
            method: HTTP method (GET, POST, etc.)
            endpoint: API endpoint path
            **kwargs: Additional arguments for httpx request

        Returns:
            JSON response data

        Raises:
            httpx.HTTPError: If request fails
        """
        url = f"{self.base_url}{endpoint}"

        # Update headers with auth token
        headers = self._get_headers()
        if "headers" in kwargs:
            headers.update(kwargs.pop("headers"))

        async with httpx.AsyncClient(timeout=self.timeout) as client:
            response = await client.request(
                method=method,
                url=url,
                headers=headers,
                **kwargs
            )
            response.raise_for_status()
            return cast(Dict[str, Any], response.json())

    async def get(self, endpoint: str, **kwargs) -> Dict[str, Any]:
        """Make GET request."""
        return await self._request("GET", endpoint, **kwargs)

    async def post(self, endpoint: str, json: Optional[Dict[str, Any]] = None, **kwargs) -> Dict[str, Any]:
        """Make POST request."""
        return await self._request("POST", endpoint, json=json, **kwargs)

    async def put(self, endpoint: str, json: Optional[Dict[str, Any]] = None, **kwargs) -> Dict[str, Any]:
        """Make PUT request."""
        return await self._request("PUT", endpoint, json=json, **kwargs)

    async def delete(self, endpoint: str, **kwargs) -> Dict[str, Any]:
        """Make DELETE request."""
