Real-time SEO intelligence for AI coding tools (Cursor, Claude Code).
Bring Google Search Console data, SEO insights, and AI-powered recommendations directly into your editor. No context switching, no delays.
- 🚀 Real-time SEO intelligence in your editor (Cursor, Claude Code)
- 🔍 Google Search Console integration - See clicks, impressions, rankings
- 🤖 AI-powered recommendations - Fix issues with one command
- 📊 Pre-deployment checks - Catch SEO issues before they go live
- 🎯 Zero context switching - Stay in your workflow
- Node.js 18 or higher
- Rampify account (free to start)
```bash
npm install -g @rampify/mcp-server
```

The global installation makes the `rampify-mcp` command available system-wide.
Before configuring the MCP server, get your API key:
- Sign up for Rampify (free to start)
- Go to your Rampify dashboard
- Navigate to Settings → API Keys
- Click "Generate New Key"
- Copy the key (starts with `sk_live_...`)
- Use it in the configuration below
Recommended: Configure the MCP server per project so each project knows its domain:

```bash
cd /path/to/your/project
claude mcp add --scope local rampify "npx" \
  "-y" "@rampify/mcp-server" \
  -e BACKEND_API_URL=https://www.rampify.dev \
  -e API_KEY=sk_live_your_api_key_here \
  -e SEO_CLIENT_DOMAIN=your-domain.com

# Reload your IDE window
```

Now you can use MCP tools without specifying a domain:
- `get_page_seo` - Automatically uses your project's domain
- `get_issues` - Automatically uses your project's domain
- `crawl_site` - Automatically uses your project's domain
For global access across all projects (must specify domain in each request):
```bash
claude mcp add --scope user rampify "npx" \
  "-y" "@rampify/mcp-server" \
  -e BACKEND_API_URL=https://www.rampify.dev \
  -e API_KEY=sk_live_your_api_key_here

# Reload your IDE window
```

Add this to Cursor's MCP settings UI or to `~/.cursor/config.json`:
```json
{
  "mcpServers": {
    "rampify": {
      "command": "npx",
      "args": ["-y", "@rampify/mcp-server"],
      "env": {
        "BACKEND_API_URL": "https://www.rampify.dev",
        "API_KEY": "sk_live_your_api_key_here",
        "SEO_CLIENT_DOMAIN": "your-domain.com"
      }
    }
  }
}
```

Add the same configuration to your Claude Code MCP settings:
```json
{
  "mcpServers": {
    "rampify": {
      "command": "npx",
      "args": ["-y", "@rampify/mcp-server"],
      "env": {
        "BACKEND_API_URL": "https://www.rampify.dev",
        "API_KEY": "sk_live_your_api_key_here",
        "SEO_CLIENT_DOMAIN": "your-domain.com"
      }
    }
  }
}
```

Environment variables:

- `BACKEND_API_URL` (required): Rampify API endpoint - always use `https://www.rampify.dev`
- `API_KEY` (required): Your API key from the Rampify dashboard (starts with `sk_live_...`)
- `SEO_CLIENT_DOMAIN` (optional): Default domain for this project (e.g., `yoursite.com`)
- `CACHE_TTL` (optional): Cache duration in seconds (default: 3600)
- `LOG_LEVEL` (optional): `debug`, `info`, `warn`, or `error` (default: `info`)
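For example, to shorten the cache and turn on verbose logging, the optional variables go in the same `env` block (the values below are illustrative):

```json
"env": {
  "BACKEND_API_URL": "https://www.rampify.dev",
  "API_KEY": "sk_live_your_api_key_here",
  "SEO_CLIENT_DOMAIN": "your-domain.com",
  "CACHE_TTL": "900",
  "LOG_LEVEL": "debug"
}
```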
Ask Claude directly:
"What SEO tools are available?"
"What can you do for SEO?"
"List all SEO intelligence tools"
Claude will show you all available tools with descriptions.
Recommended: Use natural language (Claude will pick the right tool)
"What SEO issues does my site have?" → Calls get_issues
"Check this page's SEO" → Calls get_page_seo
"Crawl my site" → Calls crawl_site
Alternative: Call tools directly (if you know the exact name)
get_issues({ domain: "example.com" })
get_page_seo({ domain: "example.com", url_path: "/blog/post" })
crawl_site({ domain: "example.com" })After Deployment:
1. "Crawl my site" (refresh data)
2. "Show me the issues" (review problems)
3. "Check this page's SEO" (verify specific pages)
Before Deployment:
1. "Check SEO of localhost:3000/new-page" (test locally)
2. Fix issues in editor
3. "Re-check SEO" (verify fixes)
4. Deploy when clean!
Regular Monitoring:
1. "What's my site's health score?"
2. "Show critical issues only"
3. Fix high-priority items
4. "Crawl my site" (refresh)
| Tool | Purpose | When to Use |
|---|---|---|
| `get_page_seo` | Get SEO data for a specific page | Analyzing individual pages, checking performance |
| `get_issues` | Get all SEO issues with health score | Site-wide audits, finding problems |
| `crawl_site` | Trigger a fresh crawl | After deployments, to refresh data |
| `generate_schema` | Auto-generate structured data | Adding schema.org JSON-LD to pages |
| `generate_meta` | Generate optimized meta tags | Fixing title/description issues, improving CTR |
Get comprehensive SEO data and insights for a specific page. Works with both production sites AND local dev servers!
Parameters:
- `domain` (optional): Site domain (e.g., "example.com" or "localhost:3000"). Uses the `SEO_CLIENT_DOMAIN` env var if not provided.
- `url_path` (optional): Page URL path (e.g., "/blog/post")
- `file_path` (optional): Local file path (will be resolved to a URL)
- `content` (optional): Current file content
Examples:
Production Site:
Ask Claude: "What's the SEO status of this page?" (while editing a file)
# Uses SEO_CLIENT_DOMAIN from env var
Or explicitly:
get_page_seo({ domain: "example.com", url_path: "/blog/post" })
Local Development Server:
Ask Claude: "Audit the local version of this page"
get_page_seo({ domain: "localhost:3000", url_path: "/blog/new-post" })
# Or set default to local:
SEO_CLIENT_DOMAIN=localhost:3000
# Now all queries default to local dev server
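The `file_path` and `content` parameters can also be passed directly; a sketch of such a call (the file path here is hypothetical) might look like:

```
get_page_seo({ domain: "example.com", file_path: "app/blog/post/page.tsx" })
```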
Response includes:
- Source indicator: `production_database`, `local_dev_server`, or `direct_content`
- Fetched from: the exact URL that was analyzed
- Performance metrics (clicks, impressions, position, CTR) - only for production
- Top keywords ranking for this page - only for production
- Detected SEO issues with fixes - works for both local and production
- Quick win opportunities
- AI summary and recommendations
Test pages BEFORE deployment:
1. Start your dev server:
   ```bash
   npm run dev  # Usually runs on localhost:3000
   ```
2. Query local pages:
   Ask Claude: "Check SEO of localhost:3000/blog/draft-post"
3. Fix issues in your editor, then re-check:
   Ask Claude: "Re-check SEO for this page on localhost"
4. Deploy when clean!
What gets analyzed locally:
- ✅ Title tags
- ✅ Meta descriptions
- ✅ Heading structure (H1, H2, H3)
- ✅ Images and alt text
- ✅ Schema.org structured data
- ✅ Internal/external links
- ❌ Search performance (GSC data not available for local)
Response format:
```json
{
  "source": "local_dev_server",
  "fetched_from": "http://localhost:3000/blog/new-post",
  "url": "http://localhost:3000/blog/new-post",
  "issues": [...],
  "ai_summary": "**Local Development Analysis**..."
}
```

Get SEO issues for the entire site with a health score. Returns a comprehensive report of all detected problems.
Parameters:
- `domain` (optional): Site domain (uses `SEO_CLIENT_DOMAIN` if not provided)
- `filters` (optional):
  - `severity`: Array of severity levels to include (`['critical', 'warning', 'info']`)
  - `issue_types`: Array of specific issue types
  - `limit`: Max issues to return (1-100, default: 50)
Examples:
Ask Claude: "What SEO issues does my site have?"
# Uses SEO_CLIENT_DOMAIN from env var
Ask Claude: "Show me only critical SEO issues"
# AI will filter by severity: critical
Ask Claude: "Check SEO issues for example.com"
get_issues({ domain: "example.com" })
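A direct call using the documented `filters` fields might look like this (a sketch; the values are examples):

```
get_issues({
  domain: "example.com",
  filters: { severity: ["critical"], limit: 20 }
})
```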
Response includes:
- Health score (0-100) and grade (A-F)
- Issue summary by severity (critical, warning, info)
- Detailed list of issues with fix recommendations
- Recommended actions prioritized by impact
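For orientation, a response shaped on the fields above might look roughly like this (field names and values are illustrative, not the literal API schema):

```json
{
  "health_score": 82,
  "grade": "B",
  "summary": { "critical": 1, "warning": 4, "info": 7 },
  "issues": [
    {
      "severity": "critical",
      "type": "missing_title",
      "url": "/blog/post",
      "fix": "Add a descriptive <title> tag"
    }
  ],
  "recommended_actions": ["Fix the missing title on /blog/post first"]
}
```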
Use cases:
- Site-wide SEO audits
- Finding all problems at once
- Tracking improvements over time
- Prioritizing fixes by severity
Trigger a fresh site crawl and analysis. This is an active operation that fetches and analyzes all pages.
Parameters:
- `domain` (optional): Site domain (uses `SEO_CLIENT_DOMAIN` if not provided)
Examples:
Ask Claude: "Crawl my site after deploying changes"
# Uses SEO_CLIENT_DOMAIN from env var
Ask Claude: "Analyze example.com"
crawl_site({ domain: "example.com" })
What it does:
- Discovers all URLs (via sitemap or navigation crawl)
- Checks each URL (status, speed, SEO elements)
- Detects issues (missing tags, errors, broken links)
- Updates database with current state
- Automatically clears the cache so the next `get_issues` or `get_page_seo` call shows fresh data
Response includes:
- Total URLs found
- URLs checked
- Issues detected
- Crawl duration
- Crawl method (sitemap vs navigation)
When to use:
- After deploying code changes
- After fixing SEO issues
- Before running `get_issues`, to ensure fresh data
- Weekly/monthly for monitoring
Note: This is the only tool that actively crawls your site. `get_issues` and `get_page_seo` just fetch existing data.
Auto-generate structured data (schema.org JSON-LD) for any page. Detects page type and generates appropriate schema with validation.
Parameters:
- `domain` (optional): Site domain (uses `SEO_CLIENT_DOMAIN` if not provided)
- `url_path` (required): Page URL path (e.g., "/blog/post")
- `schema_type` (optional): Specific schema type, or "auto" to detect (default: "auto")
Supported schema types:
- `Article`/`BlogPosting` - Blog posts, articles, news
- `Product` - Product pages, e-commerce
- `Organization` - About pages, company info
- `FAQPage` - FAQ pages with Q&A
- `BreadcrumbList` - Auto-added for navigation
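For reference, a minimal `Article` JSON-LD block of the kind this tool emits might look like the following (every value is a placeholder):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Your Post Title",
  "description": "One-sentence summary of the post.",
  "author": { "@type": "Person", "name": "Author Name" },
  "datePublished": "2024-01-01",
  "image": "https://example.com/cover.jpg"
}
```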
Examples:
Ask Claude: "Generate schema for /blog/indexnow-faster-indexing"
# Auto-detects Article schema
Ask Claude: "Generate Product schema for /products/widget"
generate_schema({ url_path: "/products/widget", schema_type: "Product" })
Ask Claude: "Add structured data to this page"
# If editing a file, Claude will detect the URL and generate schema
What it does:
- Fetches page HTML (local or production)
- Analyzes content (title, description, author, date, images)
- Detects page type from URL patterns and content
- Generates appropriate JSON-LD schema
- Validates schema and warns about placeholders
- Returns ready-to-use code snippets
Response includes:
- Detected page type
- List of recommended schemas
- Generated JSON-LD for each schema
- Validation results with warnings
- Code snippets (Next.js or HTML)
- Implementation instructions
Use cases:
- Fixing "missing schema" warnings from
get_issues - Adding rich snippets for better search visibility
- Enabling Google Discover eligibility (requires Article schema)
- Improving CTR with enhanced search results
Example output:
```json
{
  "detected_page_type": "Article",
  "recommended_schemas": ["Article", "BreadcrumbList"],
  "schemas": [
    {
      "type": "Article",
      "json_ld": { ... },
      "validation": {
        "valid": false,
        "warnings": ["Replace placeholder values with actual data"]
      }
    }
  ],
  "implementation": {
    "where_to_add": "In your page component's metadata",
    "code_snippet": "// Next.js code here",
    "instructions": "1. Add code to page.tsx..."
  }
}
```

Pro tip: After generating schema, test it with the Google Rich Results Test.
Generate optimized meta tags (title, description, Open Graph tags) for a page. Now uses your client profile to generate highly personalized, business-aware meta tags that align with your target audience, brand voice, and competitive positioning.
Parameters:
- `domain` (optional): Site domain (uses `SEO_CLIENT_DOMAIN` if not provided)
- `url_path` (required): Page URL path (e.g., "/blog" or "/blog/post")
- `include_og_tags` (optional): Include Open Graph tags for social sharing (default: true)
- `framework` (optional): Framework format for the code snippet - `nextjs`, `html`, `astro`, or `remix` (default: `nextjs`)
✨ NEW: Client Profile Integration
The tool automatically fetches your client profile and uses context like:
- Target keywords → Ensures they appear in title/description
- Target audience → Adjusts tone and technical depth
- Brand voice → Matches your preferred tone (conversational, technical, formal)
- Differentiators → Highlights unique selling points for better CTR
- Primary CTA → Ends description with appropriate call-to-action
Examples:
Ask Claude: "Generate better meta tags for /blog"
# Auto-analyzes content and generates optimized title/description
Ask Claude: "Fix the title too short issue on /blog/post"
generate_meta({ url_path: "/blog/post" })
Ask Claude: "Create meta tags without OG tags for /about"
generate_meta({ url_path: "/about", include_og_tags: false })
Ask Claude: "Generate HTML meta tags for /products/widget"
generate_meta({ url_path: "/products/widget", framework: "html" })
What it does:
- Fetches page HTML (local or production)
- Analyzes current meta tags (title, description)
- Extracts content structure (headings, topics, word count)
- Detects page type (homepage, blog_post, blog_index, product, about)
- Identifies key topics from content
- Returns analysis for AI to generate optimized meta tags
- Provides framework-specific code snippets
Response includes:
- Page analysis:
- Current title and description
- Main heading and all headings
- Word count and content preview
- Detected page type
- Key topics extracted from content
- Images for OG tags
- Current issues:
- Title too short/long
- Meta description too short/long
- Missing meta tags
- AI-generated meta tags:
- Optimized title (50-60 characters)
- Compelling meta description (150-160 characters)
- Open Graph tags (if requested)
- Twitter Card tags (if requested)
- Ready-to-use code for your framework
Use cases:
- Fixing "title too short" or "description too short" warnings
- Improving click-through rate (CTR) from search results
- Optimizing social media sharing (OG tags)
- Aligning meta tags with actual page content
- A/B testing different meta descriptions
Real-World Impact: Before vs. After
Without Profile Context (Generic):
Title: Project Management Software | Company
Description: Manage your projects efficiently with our powerful collaboration platform. Streamline workflows and boost productivity.
With Profile Context (Target audience: developers, Differentiators: "real-time collaboration, 50% faster"):
Title: Real-Time Dev Collaboration | 50% Faster | Company
Description: Built for developers: API-first project management with real-time sync. Ship 50% faster than competitors. Try free for 30 days →
Profile Warnings System:
If your profile is incomplete, you'll get helpful warnings:
```json
{
  "profile_warnings": [
    "⚠️ Target audience not set - recommendations will be generic. Add this in your business profile for better results.",
    "⚠️ No target keywords set - can't optimize for ranking goals. Add keywords in your business profile.",
    "💡 Add your differentiators in the business profile to make meta descriptions more compelling.",
    "💡 Set your brand voice in the business profile to ensure consistent tone."
  ]
}
```

Or if no profile exists at all:
```json
{
  "profile_warnings": [
    "📝 No client profile found. Fill out your profile at /clients/{id}/profile for personalized recommendations."
  ]
}
```

Example workflow:
1. User: "What SEO issues does my site have?"
→ get_issues shows "Title too short on /blog"
2. User: "Fix the title issue on /blog"
→ generate_meta analyzes /blog page
→ Fetches client profile for context
→ Shows warnings if profile incomplete
→ Claude generates optimized, personalized title
→ Returns Next.js code snippet to add to page
3. User copies code to app/blog/page.tsx
4. User: "Re-check SEO for /blog"
→ get_page_seo confirms title is now optimal
Setting Up Your Profile:
To get the most value from `generate_meta`:

- Visit `/clients/{your-client-id}/profile` in the dashboard
- Fill out key fields:
- Target Audience (e.g., "developers and technical founders")
- Target Keywords (e.g., "real-time collaboration, dev tools")
- Brand Voice (e.g., "technical but approachable")
- Your Differentiators (e.g., "50% faster than competitors")
- Primary CTA (e.g., "try_free" or "request_demo")
- Use the tool - Profile context is automatically applied
- See better results - Meta tags now match your business context
SEO Best Practices (Built-in):
- Title length: 50-60 characters (includes brand name if space allows)
- Description length: 150-160 characters (compelling call-to-action)
- Keyword placement: Primary keywords near the start
- Uniqueness: Each page gets unique meta tags based on its content
- Accuracy: Meta tags reflect actual page content (no clickbait)
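As a quick illustration of the length rules above, a hypothetical local helper (not part of the MCP server) could check them like this:

```typescript
// Hypothetical validator for the length guidelines above; not part of the tool.
function checkMetaLengths(title: string, description: string): string[] {
  const warnings: string[] = [];
  if (title.length < 50 || title.length > 60) {
    warnings.push(`Title is ${title.length} chars; aim for 50-60.`);
  }
  if (description.length < 150 || description.length > 160) {
    warnings.push(`Description is ${description.length} chars; aim for 150-160.`);
  }
  return warnings;
}

// checkMetaLengths("Too short", "Also too short.")
// → ["Title is 9 chars; aim for 50-60.", "Description is 15 chars; aim for 150-160."]
```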
Framework-specific output:
Next.js (App Router):
```typescript
export const metadata = {
  title: "Your Optimized Title | Brand",
  description: "Your compelling meta description...",
  openGraph: {
    title: "Your Optimized Title",
    description: "Your compelling meta description...",
    images: [{ url: "/path/to/image.jpg" }],
  },
};
```

HTML:

```html
<title>Your Optimized Title | Brand</title>
<meta name="description" content="Your compelling meta description...">
<meta property="og:title" content="Your Optimized Title">
<meta property="og:description" content="Your compelling meta description...">
```

Pro tips:
- Run after fixing content to ensure meta tags match
- Test social sharing with Facebook Sharing Debugger
- Monitor CTR improvements in Google Search Console
- Update meta tags when page content significantly changes
```bash
npm run watch
```

This recompiles TypeScript on every change.
```bash
# In one terminal, start your backend
cd /path/to/rampify
npm run dev

# In another terminal, build and run the MCP server
cd packages/mcp-server
npm run dev
```

The MCP server will connect to your local backend at `http://localhost:3000`.

Set `LOG_LEVEL=debug` in your `.env` file to see detailed logs:

```bash
LOG_LEVEL=debug npm run dev
```

```
MCP Server (packages/mcp-server)
├── src/
│   ├── index.ts            # MCP server entry point
│   ├── config.ts           # Configuration loader
│   ├── tools/              # MCP tool implementations
│   │   ├── get-seo-context.ts
│   │   ├── scan-site.ts
│   │   └── index.ts        # Tool registry
│   ├── services/           # Business logic
│   │   ├── api-client.ts   # Backend API client
│   │   ├── cache.ts        # Caching layer
│   │   └── url-resolver.ts # File path → URL mapping
│   ├── utils/
│   │   └── logger.ts       # Logging utility
│   └── types/              # TypeScript types
│       ├── seo.ts
│       └── api.ts
└── build/                  # Compiled JavaScript (generated)
```
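As a rough illustration of what `url-resolver.ts` does (file path → URL mapping), a simplified sketch of the idea follows; the real implementation and its conventions may differ:

```typescript
// Hypothetical file-path-to-URL resolution for a Next.js-style app.
// The actual url-resolver.ts may handle more frameworks and edge cases.
function resolveUrlPath(filePath: string): string {
  return filePath
    .replace(/^app/, "")               // strip the app directory prefix
    .replace(/\/page\.(t|j)sx?$/, "")  // drop the page file itself
    || "/";                            // the root page maps to "/"
}

// resolveUrlPath("app/blog/post/page.tsx") → "/blog/post"
// resolveUrlPath("app/page.tsx")           → "/"
```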
The MCP server caches responses for 1 hour (configurable via `CACHE_TTL`) to improve performance.
Cache is cleared automatically when:
- Entries expire (TTL reached)
- Server restarts
- You manually clear (not yet implemented)
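Conceptually, the cache is a simple TTL map; a minimal sketch of the idea (not the actual `cache.ts` implementation) looks like this:

```typescript
// Minimal TTL cache sketch; the real cache.ts may differ.
class TtlCache<T> {
  private store = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlSeconds: number) {}

  get(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) { // entry expired: drop it
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlSeconds * 1000 });
  }
}

// const cache = new TtlCache<unknown>(3600); // matches the CACHE_TTL default
```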
Solution: Add the site to your dashboard first at http://localhost:3000
Checklist:
- Is the backend running? (`npm run dev` in the root directory)
- Is `BACKEND_API_URL` correct in `.env`?
- Check logs with `LOG_LEVEL=debug`
Checklist:
- Did you build the server? (`npm run build`)
- Is the path absolute in the Cursor config?
- Restart Cursor after changing config
- Check Cursor logs (Help → Toggle Developer Tools → Console)
Common causes:
- Site not analyzed yet (run analysis in dashboard first)
- GSC not connected (connect in dashboard settings)
- No URLs in database (trigger site analysis)
Solution:
- Make sure your dev server is running (`npm run dev`)
- Verify the port (the default is 3000, but yours might be different)
- Use the full domain with port: `localhost:3000` (not just `localhost`)
- Check dev server logs for CORS or other errors
Example error:
```
Could not connect to local dev server at http://localhost:3000/blog/post.
Make sure your dev server is running (e.g., npm run dev).
```
How to tell which source you're using:
Every response includes explicit `source` and `fetched_from` fields:

```json
{
  "source": "local_dev_server",   // or "production_database"
  "fetched_from": "http://localhost:3000/page",
  ...
}
```

Pro tip: Set `SEO_CLIENT_DOMAIN` per project to avoid specifying the domain every time:

- For local dev: `SEO_CLIENT_DOMAIN=localhost:3000`
- For production: `SEO_CLIENT_DOMAIN=yoursite.com`
- ✅ `get_page_seo` - Get SEO data for a specific page
- ✅ `get_issues` - Get all site issues with health score
- ✅ `crawl_site` - Trigger a fresh site crawl
- ✅ `generate_schema` - Auto-generate structured data (Article, Product, etc.)
- ✅ `generate_meta` - AI-powered title and meta description generation
- 📋 `suggest_internal_links` - Internal linking recommendations
- 📋 `check_before_deploy` - Pre-deployment SEO validation
- 📋 `optimize_blog_post` - Deep optimization for blog content
- 📋 `optimize_landing_page` - Conversion-focused SEO
- Bulk operations across multiple pages
- Historical trend analysis
- Competitive monitoring
- Advanced AI insights and recommendations
Need help?
- Documentation - Complete guides and tutorials
- GitHub Issues - Report bugs or request features
- Rampify Settings - Manage your sites and API keys
- What is Rampify? - Product overview
- MCP Server Guide - Detailed documentation
- Blog - SEO tips and product updates
MIT