CTO Copilot - MCP Server

A minimal application demonstrating how to build an OpenAI Apps SDK-compatible MCP server with widget rendering in ChatGPT.

Overview

This project shows how to integrate an application with the ChatGPT Apps SDK using the Model Context Protocol (MCP). It includes a working MCP server that exposes tools and resources to ChatGPT, with tool responses rendered natively as widgets.

Key Components

1. MCP Server Route (app/mcp/route.ts)

The core MCP server implementation that exposes tools and resources to ChatGPT.

Key features:

  • Tool registration with OpenAI-specific metadata
  • Resource registration that serves HTML content for iframe rendering
  • Cross-linking between tools and resources via templateUri

OpenAI-specific metadata:

{
  "openai/outputTemplate": widget.templateUri,      // Links to resource
  "openai/toolInvocation/invoking": "Loading...",   // Loading state text
  "openai/toolInvocation/invoked": "Loaded",        // Completion state text
  "openai/widgetAccessible": false,                 // Widget visibility
  "openai/resultCanProduceWidget": true            // Enable widget rendering
}

Full configuration options: OpenAI Apps SDK MCP Documentation
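The metadata above can be typed and built at registration time. A minimal sketch; the type name, template URI, and field values are placeholders, and the object would be passed as `_meta` when registering the tool with the MCP TypeScript SDK:

```typescript
// Illustrative shape of the OpenAI-specific tool metadata.
type OpenAIToolMeta = {
  "openai/outputTemplate": string;          // links tool → resource
  "openai/toolInvocation/invoking": string; // loading state text
  "openai/toolInvocation/invoked": string;  // completion state text
  "openai/widgetAccessible": boolean;
  "openai/resultCanProduceWidget": boolean;
};

// "ui://widget/report.html" is a placeholder template URI.
const widgetMeta: OpenAIToolMeta = {
  "openai/outputTemplate": "ui://widget/report.html",
  "openai/toolInvocation/invoking": "Loading...",
  "openai/toolInvocation/invoked": "Loaded",
  "openai/widgetAccessible": false,
  "openai/resultCanProduceWidget": true,
};
```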

2. PDF Generation (app/mcp/helpers/pdfConverter.ts)

A utility to convert Markdown content into PDF documents entirely in Node.js without headless browsers.

Features:

  • Pure JS generation: Uses pdf-lib and marked for lightweight serverless deployment.
  • Rich formatting: Supports headers, code blocks, lists, blockquotes, and inline styling.
  • Custom layout engine: Properly handles text wrapping and pagination.
  • Multiple storage backends: Supports Vercel Blob, AWS S3, n8n webhooks, and local storage.
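The core job of the layout engine, wrapping text to a line width and splitting lines into pages, can be sketched with two pure helpers. This is a simplified character-count approximation; the real converter measures glyph widths with pdf-lib fonts:

```typescript
// Greedy word wrap: pack words onto a line until maxChars is exceeded.
function wrapText(text: string, maxChars: number): string[] {
  const lines: string[] = [];
  let current = "";
  for (const word of text.split(/\s+/).filter(Boolean)) {
    const candidate = current ? `${current} ${word}` : word;
    if (candidate.length <= maxChars) {
      current = candidate;
    } else {
      if (current) lines.push(current);
      current = word;
    }
  }
  if (current) lines.push(current);
  return lines;
}

// Pagination: group wrapped lines into pages of fixed capacity.
function paginate(lines: string[], linesPerPage: number): string[][] {
  const pages: string[][] = [];
  for (let i = 0; i < lines.length; i += linesPerPage) {
    pages.push(lines.slice(i, i + linesPerPage));
  }
  return pages;
}
```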

3. Asset Configuration

Critical: Set assetPrefix to ensure static assets are fetched from the correct origin when running inside an iframe.
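A minimal next.config.ts sketch; the NEXT_PUBLIC_BASE_URL variable name is an assumption, substitute whatever holds your deployment origin:

```typescript
// next.config.ts (illustrative sketch)
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // Absolute origin so /_next/* assets resolve correctly when the app
  // is served inside the ChatGPT iframe rather than from its own URL.
  assetPrefix: process.env.NEXT_PUBLIC_BASE_URL ?? undefined,
};

export default nextConfig;
```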

4. Storage Configuration

The application supports multiple storage backends for PDF files, controlled via environment variables:

Environment Variables:

  • UPLOAD_VIA_VERCEL=true - Enable Vercel Blob storage (requires BLOB_READ_WRITE_TOKEN)
  • UPLOAD_VIA_S3=true - Enable AWS S3 storage (requires AWS credentials and bucket configuration)
  • UPLOAD_VIA_N8N_GDRIVE=true - Enable n8n webhook upload
  • N8N_WEBHOOK_URL - Custom n8n webhook URL (defaults to the provided secondspring URL)

Storage Behavior:

  • Development Mode: In development (NODE_ENV=development), the system always uses local disk storage regardless of upload flags
  • Production Mode: When upload flags are set, the system uses composite storage that tries multiple methods in order:
    1. Vercel Blob (if UPLOAD_VIA_VERCEL=true)
    2. n8n Webhook (if UPLOAD_VIA_N8N_GDRIVE=true)
    3. AWS S3 (if UPLOAD_VIA_S3=true)
  • The first successful upload method is used; if all methods fail, an error is thrown
  • If no upload flags are set in production, the system falls back to Vercel Blob (if token exists) or local storage
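The fallback order above can be sketched as a composite uploader. The `StorageBackend` interface and backend names here are illustrative, not the project's actual types:

```typescript
interface StorageBackend {
  name: string;
  upload(filename: string, data: Uint8Array): Promise<string>; // returns public URL
}

// Try each backend in order; the first successful upload wins.
// If every backend fails, surface all errors in one thrown Error.
async function uploadWithFallback(
  backends: StorageBackend[],
  filename: string,
  data: Uint8Array,
): Promise<string> {
  const errors: string[] = [];
  for (const backend of backends) {
    try {
      return await backend.upload(filename, data);
    } catch (err) {
      errors.push(`${backend.name}: ${(err as Error).message}`);
    }
  }
  throw new Error(`All upload backends failed: ${errors.join("; ")}`);
}
```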

Upload via AWS S3 (Using Access Keys)

The application supports uploading files to AWS S3 using access keys. This is useful for production deployments where you want to store generated PDFs in your own S3 bucket.

Step 1: Create AWS S3 Bucket

  1. Log in to the AWS Console
  2. Navigate to S3 and create a new bucket
  3. Note your bucket name and region (e.g., my-pdf-bucket, us-east-1)

Step 2: Create IAM User with S3 Access

  1. Go to IAM → Users → Create User
  2. Create a user (e.g., pdf-uploader)
  3. Attach a policy that grants S3 permissions:

     {
       "Version": "2012-10-17",
       "Statement": [
         {
           "Effect": "Allow",
           "Action": ["s3:PutObject", "s3:GetObject", "s3:GetBucketLocation"],
           "Resource": [
             "arn:aws:s3:::your-bucket-name/*",
             "arn:aws:s3:::your-bucket-name"
           ]
         }
       ]
     }
  4. Create access keys for the user:
     - Go to the user → Security credentials tab
     - Click "Create access key"
     - Save the Access Key ID and Secret Access Key (you won't be able to see the secret again)

Step 3: Configure Environment Variables

Set the following environment variables in your deployment (Vercel, local .env, etc.):

Option A: Standard AWS SDK Variables (Recommended)

UPLOAD_VIA_S3=true
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_S3_BUCKET_NAME=your-bucket-name
AWS_REGION=us-east-1

Option B: Custom Variable Names

If you prefer to use custom environment variable names:

UPLOAD_VIA_S3=true
AWS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_S3_BUCKET_NAME=your-bucket-name
AWS_REGION=us-east-1

Step 4: Configure Bucket for Public Access (Optional)

If you want the generated URLs to be publicly accessible:

  1. Go to your S3 bucket → Permissions tab
  2. Edit "Block public access" settings (if needed)
  3. Add a bucket policy to allow public read access:

     {
       "Version": "2012-10-17",
       "Statement": [
         {
           "Sid": "PublicReadGetObject",
           "Effect": "Allow",
           "Principal": "*",
           "Action": "s3:GetObject",
           "Resource": "arn:aws:s3:::your-bucket-name/*"
         }
       ]
     }

Note: If you don't configure public access, the URLs will be generated but may return 403 Forbidden errors when accessed directly. You can still use the files programmatically with the same credentials.

Step 5: Verify Configuration

The S3 storage implementation includes several automatic features:

  • Automatic Region Detection: If the bucket region differs from your configured AWS_REGION, the system automatically detects and uses the correct region
  • Endpoint Error Retry: If an endpoint error occurs (wrong region), the system automatically retries with the correct region
  • Default Region: If AWS_REGION is not set, defaults to us-east-1
  • Region Parsing: Supports comments in region variable (e.g., AWS_REGION=us-east-1 # my region)
  • URL Generation: Generates public URLs in the format:
    • https://bucket-name.s3.region.amazonaws.com/filename.pdf (for most regions)
    • https://bucket-name.s3.amazonaws.com/filename.pdf (for us-east-1)
  • Content Type: Files are uploaded with Content-Type: application/pdf
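The region-parsing and URL-generation rules above can be sketched as follows (function names are illustrative):

```typescript
// Strip an inline comment such as "us-east-1 # my region"; default to us-east-1.
function parseRegion(raw: string | undefined): string {
  const value = (raw ?? "").split("#")[0].trim();
  return value || "us-east-1";
}

// Build the public virtual-hosted-style URL. us-east-1 historically
// omits the region segment in the hostname.
function publicS3Url(bucket: string, region: string, filename: string): string {
  return region === "us-east-1"
    ? `https://${bucket}.s3.amazonaws.com/${filename}`
    : `https://${bucket}.s3.${region}.amazonaws.com/${filename}`;
}
```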

Example .env file:

# Enable S3 upload
UPLOAD_VIA_S3=true

# AWS Credentials
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# S3 Configuration
AWS_S3_BUCKET_NAME=my-pdf-bucket
AWS_REGION=us-east-1

Security Best Practices:

  • Never commit access keys to version control
  • Use environment variables or secrets management (Vercel Secrets, AWS Secrets Manager, etc.)
  • Rotate access keys regularly
  • Use IAM policies with least privilege (only grant necessary S3 permissions)
  • Consider using IAM roles instead of access keys when running on AWS infrastructure

Troubleshooting:

  • 403 Forbidden errors: Check that your IAM user has s3:PutObject and s3:GetBucketLocation permissions
  • Region mismatch errors: The system should auto-detect and correct this, but ensure AWS_REGION matches your bucket region if issues persist
  • Endpoint errors: Usually indicates a region mismatch - the system will automatically retry with the correct region
  • Missing credentials: Ensure both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set (or AWS_KEY_ID if using custom names)
  • Bucket not found: Verify AWS_S3_BUCKET_NAME matches your actual bucket name exactly
  • Development mode: Remember that in development, S3 uploads are disabled and local storage is used instead

5. SDK Bootstrap (app/layout.tsx)

The <NextChatSDKBootstrap> component patches browser APIs to work correctly within the ChatGPT iframe:

What it patches:

  • history.pushState / history.replaceState - Prevents full-origin URLs in history
  • window.fetch - Rewrites same-origin requests to use the correct base URL
  • <html> attribute observer - Prevents ChatGPT from modifying the root element
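The fetch patch boils down to re-basing relative URLs onto the app's real origin, since inside the ChatGPT iframe they would otherwise resolve against the sandbox origin. A simplified sketch (APP_ORIGIN is a placeholder; the actual bootstrap wraps window.fetch itself):

```typescript
const APP_ORIGIN = "https://your-app.com"; // placeholder deployment origin

// Leave absolute URLs to other origins untouched; resolve everything
// else against the app's real origin instead of the iframe sandbox.
function rebaseRequestUrl(input: string, appOrigin: string): string {
  if (/^https?:\/\//.test(input)) return input;
  return new URL(input, appOrigin).toString();
}
```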

Getting Started

Installation

npm install
# or
pnpm install

Database Setup

Initialize Database:

make setup
# or
npx tsx scripts/init-db.ts
pnpm db:push

Reset Database (Drops All Tables):

make db-reset
# Interactive confirmation required

# For non-interactive use (CI/CD):
make db-reset-force

Generate Migrations:

make db-generate
# or
pnpm db:generate

Apply Migrations:

make db-migrate
# or
pnpm db:migrate

Setup Row Level Security (RLS):

make setup-rls
# or
npx tsx scripts/setup-rls.ts

Note: RLS is automatically applied when running make setup.

Row Level Security (RLS) Policies:

  • SELECT: Users can view their own files (where user_id matches auth.uid()) or public files (where user_id is NULL)
  • INSERT: Users can create files with their own user_id or public files (NULL user_id)
  • UPDATE: Users can only update files they own
  • DELETE: Users can only delete files they own
  • Service Role: Has full access (typically used by server-side applications)

Important:

  • When using Supabase's service role connection string (server-side), RLS is bypassed by default
  • RLS policies apply when using authenticated user connections (client-side)
  • The user_id field should match Supabase's auth.uid() for RLS to work correctly
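The policies above might look like the following in SQL. This is an illustrative sketch only, assuming a files table with a nullable user_id column; the table and policy names are not taken from the project:

```sql
-- Assumes a "files" table with a nullable user_id matching auth.uid().
alter table files enable row level security;

-- SELECT: own files, or public files (NULL user_id)
create policy "files_select_own_or_public" on files
  for select using (user_id = auth.uid() or user_id is null);

-- INSERT: own user_id, or public
create policy "files_insert_own_or_public" on files
  for insert with check (user_id = auth.uid() or user_id is null);

-- UPDATE / DELETE: owner only
create policy "files_update_own" on files
  for update using (user_id = auth.uid());

create policy "files_delete_own" on files
  for delete using (user_id = auth.uid());
```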

Development

npm run dev
# or
pnpm dev

Open http://localhost:3000 to see the app.

Testing the MCP Server

The MCP server is available at:

http://localhost:3000/mcp

Using the MCP Inspector

The easiest way to test your MCP server is using the MCP Inspector:

Option 1: Streamable HTTP (Recommended)

This connects directly to your running Next.js server - no separate script needed!

# Terminal 1: Start the dev server
make dev
# or: pnpm dev

# Terminal 2: Start the inspector UI
make inspector
# or: pnpm inspector

Then in the Inspector UI:

  1. Select "Streamable HTTP" as the transport type
  2. Enter the URL: http://localhost:3000/mcp
  3. Click Connect

Option 2: STDIO Transport (Legacy)

For direct STDIO-based testing without the HTTP server:

make inspector-stdio
# or: pnpm inspector:stdio

Note: The Streamable HTTP method is preferred because it tests the exact same code path that ChatGPT will use in production.

Connecting from ChatGPT

  1. Deploy your app.
  2. In ChatGPT, navigate to Settings → Connectors → Create and add your MCP server URL with the /mcp path (e.g., https://your-app.com/mcp)

Note: Connecting MCP servers to ChatGPT requires developer mode access. See the connection guide for setup instructions.

Project Structure

app/
├── mcp/
│   └── route.ts          # MCP server with tool/resource registration
├── layout.tsx            # Root layout with SDK bootstrap
├── page.tsx              # Homepage content
└── globals.css           # Global styles
middleware.ts             # CORS handling

How It Works

  1. Tool Invocation: ChatGPT calls a tool registered in app/mcp/route.ts
  2. Resource Reference: Tool response includes templateUri pointing to a registered resource
  3. Widget Rendering: ChatGPT fetches the resource HTML and renders it in an iframe
  4. Client Hydration: The app hydrates inside the iframe with patched APIs
  5. Navigation: Client-side navigation uses patched fetch
