Feature Proposal: MCP Server Integration for AI Agents #73
devroopsaha744 started this conversation in Ideas
Hey everyone,
I want to propose adding Model Context Protocol (MCP) server support to this project. Before diving into the details, I'd love to get community feedback on whether this aligns with the project's goals and if others see value in this direction.
What I Want to Add
An MCP server that exposes LeetCode functionality as native tools for AI agents and LLM clients like Claude. The server would be modular, organized into three main areas: user data (profiles, badges, submissions), problems (daily challenges, lists, filters), and discussions (trending topics, threads, comments).
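To make the modular layout concrete, here is a minimal sketch in plain Python (no MCP SDK dependency; every tool and module name below is illustrative, not a committed API) of how tools could be grouped into the three areas:

```python
# Hypothetical sketch: group LeetCode tools into independent modules so a
# server can load only the areas it needs. All names are illustrative.

def get_user_profile(username: str) -> dict:
    """Placeholder for a tool that fetches a user's public profile."""
    return {"username": username, "badges": [], "submissions": []}

def get_daily_challenge() -> dict:
    """Placeholder for a tool that fetches today's daily challenge."""
    return {"title": "", "difficulty": ""}

def get_trending_discussions(limit: int = 10) -> list:
    """Placeholder for a tool that lists trending discussion threads."""
    return []

# Each module maps MCP-style tool names (underscores, alphanumeric only)
# to callables.
MODULES = {
    "user": {"get_user_profile": get_user_profile},
    "problems": {"get_daily_challenge": get_daily_challenge},
    "discussions": {"get_trending_discussions": get_trending_discussions},
}

def load_tools(module_names):
    """Combine the selected modules into one flat tool registry."""
    tools = {}
    for name in module_names:
        tools.update(MODULES[name])
    return tools

# Load only the user module, e.g. for testing user tools in isolation.
user_only = load_tools(["user"])
```

In a real implementation each module would register its tools with the MCP server at startup; the point here is just that the three areas stay independent.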
Why I Want This
I was building an AI interviewer myself, an agent that asks coding questions conversationally (the project: https://github.com/devroopsaha744/julius_ai). When I discovered this repo, I realized it provided the functionality I needed, so I decided to contribute this feature here as well.
The main problem I kept running into: LLM knowledge has a cutoff date, but LeetCode adds new problems weekly. It would be incredibly useful if the LLM had access to the latest problems without being limited by its training data. With MCP, the agent discovers LeetCode tools automatically. You can have a conversation with Claude about your coding progress, and it naturally fetches your submission history. You can ask about trending discussions, and it just works. No custom integration, no API documentation to read, no response-parsing logic.
Why MCP Over Direct API Calls?
Now you might ask: why not just wrap the API calls in functions and expose them through the usual tool-calling mechanism? Here's why MCP is fundamentally different:
Standardized Interface - Tool calling requires each developer to define a custom schema or adapter. MCP enforces a formal protocol with structured metadata, parameters, schemas, and IO streams that are recognized by compliant clients.
Native Discoverability - MCP tools self-advertise their capabilities. Clients like Claude Desktop, the LangGraph MCP runner, and the OpenAI Responses API automatically list and introspect available tools without any configuration.
Persistent Connection Model - MCP runs over a persistent transport (stdio, or HTTP with server-sent events) that maintains a live context. Tool calling executes isolated RPC-style calls. MCP allows continuous sessions, state retention, and background streaming.
Uniform Schema Validation - Tool calling depends on ad-hoc JSON contracts. MCP standardizes parameter types and validation so clients can auto-generate UI controls, forms, or structured prompts.
Security Isolation - The LLM never touches API keys or network credentials. The MCP server owns them; the client interacts through schema-validated commands only.
Multi-tool Orchestration - A single MCP server can expose dozens of tools across domains (user, problems, discussions) under one runtime context. Tool calling would require separate definitions per endpoint.
Agent Runtime Integration - Agent runtimes such as Anthropic's clients and the LangChain MCP adapters can invoke MCP tools directly without embedding API logic or wrappers.
Stream and Event Support - MCP supports streaming outputs, logs, or progress events over persistent channels, which are unavailable in simple tool-call RPCs.
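To illustrate the standardized-interface, discoverability, and schema-validation points above, here is a rough sketch (the descriptor shape mirrors what an MCP server returns when listing tools; the tool itself and the validator are hypothetical) of a self-describing tool plus the kind of parameter check a client can perform before dispatch:

```python
# Hypothetical MCP-style tool descriptor: a name, a description, and a
# JSON Schema for parameters, as advertised by a server's tool listing.
TOOL = {
    "name": "get_user_profile",
    "description": "Fetch a LeetCode user's public profile.",
    "inputSchema": {
        "type": "object",
        "properties": {"username": {"type": "string"}},
        "required": ["username"],
    },
}

def validate_args(tool: dict, args: dict) -> bool:
    """Tiny stand-in for full JSON Schema validation: check required keys
    and string typing so malformed calls are rejected before dispatch."""
    schema = tool["inputSchema"]
    for key in schema.get("required", []):
        if key not in args:
            return False
    for key, spec in schema.get("properties", {}).items():
        if key in args and spec.get("type") == "string":
            if not isinstance(args[key], str):
                return False
    return True
```

Because every compliant client understands this descriptor format, it can auto-generate forms or prompts from the schema without any per-tool glue code.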
Technical Direction
The server would query leetcode.com/graphql directly rather than going through the existing REST API. This keeps the two systems independent and ensures REST endpoints remain unchanged. Tool names would follow MCP conventions (underscores, alphanumeric only). The implementation would include an Inspector launcher for local testing before deployment.
The modular approach means developers can run only what they need. Testing user tools? Run just that module. Deploying a discussion-focused agent? Load only discussion tools. It keeps things lightweight and flexible.
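Selective loading could be as simple as a launcher flag; here is a sketch (the flag name and module names are placeholders, not a decided CLI):

```python
import argparse

# Hypothetical launcher: --modules picks which tool groups the server
# loads, so a discussion-focused agent can skip user and problem tools.
AVAILABLE = ("user", "problems", "discussions")

def parse_modules(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--modules", nargs="+", choices=AVAILABLE,
                        default=list(AVAILABLE))
    return parser.parse_args(argv).modules

# e.g. run only the discussion tools:
selected = parse_modules(["--modules", "discussions"])
```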
What This Enables
AI agents that understand your coding journey. Assistants that help you discover problems based on patterns in your submissions. Tools that monitor discussions and surface relevant threads. Automated progress tracking. Conversational interfaces for exploring LeetCode without opening a browser. Most importantly, AI agents with access to the latest LeetCode problems, not just what was in their training data.
Questions for You
Does this direction make sense for the project? Are there concerns about maintaining an MCP server alongside the REST API? What LeetCode features would be most valuable as MCP tools? Is the modular approach overkill, or does it add genuine flexibility?
Also, why would you personally use (or not use) something like this? What's missing from this proposal? What am I not thinking about?
Would love to hear everyone's thoughts before I invest time building this out.