This project demonstrates how to extend GitHub Copilot capabilities using Python, Langchain, and the Model Context Protocol (MCP), allowing for more deterministic and powerful interactions with the LLM.
As a GitHub Copilot trainer, I've observed its impressive evolution. However, I've always wanted to access GitHub Copilot's internals to build more complex transformation chains. This project shows how to achieve that goal by leveraging MCP (Model Context Protocol) and Langchain.
While GitHub Copilot's repository custom instructions feature improved customization options, the introduction of the MCP protocol opened new possibilities for extending Copilot's functionality through custom MCP servers.
Instruction files are not always deterministic: they need to be fine-tuned whenever a new LLM version is released to reduce hallucinations. LLMs often struggle with precise text manipulation, sometimes creatively reinterpreting tasks and adding unwanted artifacts. What we need is a way to inject deterministic logic into our instructions.
This project demonstrates a solution through:
- Custom Python scripts installed on the local machine
- MCP server configuration through `.vscode/mcp.json`
- Custom tools defined in the `mcp_server/tools` directory
With this setup, GitHub Copilot gains access to new, well-documented tools that it can see as part of your project. When you ask Copilot to "create a tool that does X", it can generate a solution very close to what you need. You simply accept its changes and restart MCP to get a new deterministic tool for your specific logic.
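The "deterministic tool" idea is easiest to see in plain Python. The function below is a hypothetical sketch (not the project's actual `lng_count_words` implementation): it performs an exact, repeatable text operation of the kind an LLM would otherwise only approximate.

```python
import re

def count_words(input_text: str) -> dict:
    """Deterministically count words in a text.

    Unlike an LLM, this always returns the same result for the same
    input -- exactly the kind of logic worth moving into an MCP tool.
    """
    words = re.findall(r"\b\w+\b", input_text)
    return {
        "word_count": len(words),
        "unique_words": len({w.lower() for w in words}),
    }

result = count_words("Hello world, hello MCP!")
```

Once such a function is wrapped as an MCP tool, Copilot can call it instead of trying to count words itself.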
This document outlines potential data leak scenarios and provides guidance on risk mitigation when working with this tool.
Run this command in the root folder of your project and follow the instructions:
**Windows**

With git:

```powershell
git clone https://github.com/codenjoyme/mcpyrex-python.git mcp_server; cd mcp_server\build; .\install-windows.ps1
```

Without git:

```powershell
$work = "mcp_server"; $url = "https://github.com/codenjoyme/mcpyrex-python/archive/refs/heads/main.zip"; New-Item -ItemType Directory -Force -Path $work; (New-Object System.Net.WebClient).DownloadFile($url, "$work\project.zip"); Expand-Archive -Path "$work\project.zip" -DestinationPath "$work\tmp"; Remove-Item "$work\project.zip"; Move-Item "$work\tmp\mcpyrex-python-main\*" "$work"; Move-Item "$work\tmp\mcpyrex-python-main\.*" "$work" -Force; Remove-Item "$work\tmp" -Recurse; Set-Location "$work\build"; .\install-windows.ps1
```

Update an existing installation:

```powershell
cd .\mcp_server\build; .\install-windows.ps1
```

Security policy issues: if you get this error:

```
.\install-windows.ps1 : File C:\workspace\mcp_server\build\install-windows.ps1 cannot be loaded because running scripts is disabled on this system. For more information, see about_Execution_Policies at https://go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:510
+ ... work\tmp" -Recurse; Set-Location "$work\build"; .\install-windows.ps1
    + CategoryInfo          : SecurityError: (:) [], PSSecurityException
    + FullyQualifiedErrorId : UnauthorizedAccess
```

run this before the script:

```powershell
Get-ExecutionPolicy
Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned
```

**macOS**

With git:

```bash
git clone https://github.com/codenjoyme/mcpyrex-python.git mcp_server && cd mcp_server/build && ./install-macos.sh
```

Without git:

```bash
work="mcp_server"; url="https://github.com/codenjoyme/mcpyrex-python/archive/refs/heads/main.zip"; mkdir -p "$work"; curl -L -o "$work/project.zip" "$url"; cd "$work"; unzip -q project.zip; mv mcpyrex-python-main/* .; mv mcpyrex-python-main/.* . 2>/dev/null || true; rmdir mcpyrex-python-main; rm project.zip; cd build; chmod +x ./install-macos.sh; ./install-macos.sh
```

Update an existing installation:

```bash
cd ./mcp_server/build && ./install-macos.sh
```

**Linux**

With git:

```bash
git clone https://github.com/codenjoyme/mcpyrex-python.git mcp_server && cd mcp_server/build && ./install-linux.sh
```

Without git:

```bash
work="mcp_server"; url="https://github.com/codenjoyme/mcpyrex-python/archive/refs/heads/main.zip"; mkdir -p "$work"; wget -O "$work/project.zip" "$url"; cd "$work"; unzip -q project.zip; mv mcpyrex-python-main/* .; mv mcpyrex-python-main/.* . 2>/dev/null || true; rmdir mcpyrex-python-main; rm project.zip; cd build; ./install-linux.sh
```

Update an existing installation:

```bash
cd ./mcp_server/build && ./install-linux.sh
```
This script will:
- Download the latest version from GitHub to the `./.mcp-python` folder
- Extract all files
- Navigate to the build directory
- Run the interactive installation script automatically
The installation script will automatically:
- Detect or install Python 3.11+
- Create a virtual environment
- Ask you to choose between Cursor or VSCode
- Copy the appropriate configuration files
- Install Python dependencies via pip
- Set up your MCP server integration
After installation:
- Your `.vscode/mcp.json` or `.cursor/mcp.json` will be configured
- Your workspace settings will be updated
- GitHub Copilot instructions will be in place
- Start using enhanced GitHub Copilot capabilities!
- Simple Examples (`mcp_server/simple/`): demonstrates integration with different LLM providers and key LangChain concepts
  - `query_openai.py` - OpenAI integration
  - `query_azure.py` - Azure OpenAI integration
  - `prompt_template.py` - Prompt template management and usage
  - `rag.py` - Retrieval-Augmented Generation (RAG) with vector search
  - `agent.py` - AI agent with specialized tools (character counting, MD5 hashing, regex matching)
  - `structured_output.py` - Structured data extraction in multiple formats (JSON, YAML, Markdown)
  - `chain_of_thought.py` - Step-by-step problem solving with transparent reasoning
  - `color.py` - Helper module for colored terminal output
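The template idea behind `prompt_template.py` can be shown with the standard library alone. This is a simplified analogy, not the script's actual code, which uses LangChain's template classes:

```python
from string import Template

# A reusable prompt with named placeholders, similar in spirit
# to LangChain's PromptTemplate.
prompt = Template("You are a $role. Answer the question: $question")

rendered = prompt.substitute(
    role="Python expert",
    question="What is a virtual environment?",
)
```

The benefit is the same in both cases: the prompt's structure is fixed and versioned, and only the variable parts change per call.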
- MCP Implementation:
  - `mcp_server/test/server.py` - Client implementation for testing
  - `mcp_server/` - Server with custom tools
- Install file `mcp_server/build/install.ps1`:
  - Set up the virtual environment
  - Set up libraries
  - Run smoke tests
```json
{
  "servers": {
    "mcpyrex": {
      "type": "stdio",
      "command": "${workspaceFolder}\\.mcp-python\\langchain_env\\Scripts\\python.exe",
      "args": ["${workspaceFolder}\\.mcp-python\\mcp_server\\server.py"]
    }
  }
}
```

- `lng_batch_run` - advanced pipeline execution with conditionals, loops, and parallel processing
- `lng_count_words` - word counting; demonstrates Python function calling
- `lng_get_tools_info` - tools information retrieval; collects all the information about the tools in one place, which helps GitHub Copilot
- `lng_llm_rag_add_data` and `lng_rag_search` - demonstrate RAG (Retrieval-Augmented Generation) functionality
- `lng_llm_prompt_template` - unified prompt template management with file storage (save, use, list)
- `lng_llm_run_chain` - demonstrates chain execution
- `lng_llm_agent_demo` - demonstrates agent functionality
- `lng_llm_structured_output` - demonstrates structured output
- `lng_llm_chain_of_thought` - demonstrates the Chain-of-Thought reasoning approach with memory usage
- And more in `mcp_server/tools/`
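The pipeline idea behind a tool like `lng_batch_run` can be sketched in a few lines of plain Python. Everything below is a hypothetical illustration; the real tool's pipeline format and features are defined in `mcp_server/tools`, not here:

```python
def count_words(params):
    return {"count": len(params["input_text"].split())}

def calculator(params):
    # eval() is used only for this illustration; a real tool should
    # use a safe expression parser instead.
    return {"result": eval(params["expression"], {"__builtins__": {}})}

TOOLS = {"count_words": count_words, "calculator": calculator}

def run_pipeline(steps):
    """Run steps in order; a step may carry a condition on prior results."""
    results = []
    for step in steps:
        condition = step.get("when")
        if condition and not condition(results):
            continue  # skip the step when its condition is not met
        results.append(TOOLS[step["tool"]](step["params"]))
    return results

out = run_pipeline([
    {"tool": "count_words", "params": {"input_text": "Hello MCP world"}},
    {"tool": "calculator", "params": {"expression": "2 + 3 * 4"},
     "when": lambda r: r[0]["count"] > 2},
])
```

The point of the sketch is the control flow: each step is deterministic, and conditionals let later tools depend on earlier results without involving the LLM.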
You can run tools directly in the terminal using the `mcp_server/run.py` script. This allows you to quickly test and execute any tool without the overhead of the MCP server.
```powershell
python -m mcp_server.run
python -m mcp_server.run list
python -m mcp_server.run schema lng_count_words
python -m mcp_server.run run lng_count_words '{\"input_text\":\"Hello world\"}'
python -m mcp_server.run run lng_math_calculator '{\"expression\":\"2+3*4\"}'
python -m mcp_server.run batch lng_count_words '{\"input_text\":\"Hello\"}' lng_math_calculator '{\"expression\":\"2+3\"}'
```

To debug the MCP protocol, you can run the server in a terminal:
```powershell
.\langchain_env\Scripts\python.exe .\mcp_server\server.py
```

then update `mcp.json` from `"args": ["${workspaceFolder}\\mcp_server\\server.py"]` to `"args": ["${workspaceFolder}\\mcp_server\\server_fake.py"]` and run the fake server.
After that, copy a request from `mcp_server/logs/mcp_out.log`:

```
2025-07-16 20:54:30,339 - mcp_fake_logger - INFO - [<] {"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{"roots":{"listChanged":true}},"clientInfo":{"name":"Visual Studio Code - Insiders","version":"1.100.0-insider"}}}
```

to the server console. Then copy the output to `mcp_server/logs/mcp_in.log` and save. Then repeat.
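The logged request is a standard JSON-RPC 2.0 message, so the replay workflow is easy to script. The sketch below assumes only the message shape visible in the log; the `result` payload of the response is left empty as a placeholder:

```python
import json

# The initialize request as it appears in mcp_out.log (payload only).
raw = ('{"jsonrpc":"2.0","id":1,"method":"initialize","params":'
       '{"protocolVersion":"2025-03-26","capabilities":'
       '{"roots":{"listChanged":true}},"clientInfo":'
       '{"name":"Visual Studio Code - Insiders","version":"1.100.0-insider"}}}')

message = json.loads(raw)

# A matching JSON-RPC response must echo the same "id" back to the client.
response = {"jsonrpc": "2.0", "id": message["id"], "result": {}}
wire = json.dumps(response)
```

Pairing requests and responses by `id` this way is what lets you replay captured traffic through the fake server.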
While LLMs can handle simple tasks, more complex operations like processing Excel files or extracting specific data benefit greatly from custom MCP tools. The LLM can still help generate the code for these tools, providing the best of both worlds.
To control whether MCP (Model Context Protocol) is enabled or disabled, you need to modify the `${workspaceFolder}/.vscode/settings.json` file.
Enable MCP:

```json
{
  "chat.mcp.enabled": true,
  "github.copilot.chat.codeGeneration.useInstructionFiles": false
}
```

Disable MCP:

```json
{
  "chat.mcp.enabled": false,
  "github.copilot.chat.codeGeneration.useInstructionFiles": true
}
```

API keys and other credentials should be stored in a `.env` file (not included in the repository).
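Flipping the two settings by hand is error-prone because they must stay opposite. A small helper can toggle them together; this sketch writes to a throwaway temporary file so it runs anywhere, but you would point `path` at your real `.vscode/settings.json`:

```python
import json
import tempfile
from pathlib import Path

def toggle_mcp(path: Path, enable: bool) -> dict:
    """Enable/disable MCP and flip instruction files to the opposite state."""
    settings = json.loads(path.read_text()) if path.exists() else {}
    settings["chat.mcp.enabled"] = enable
    settings["github.copilot.chat.codeGeneration.useInstructionFiles"] = not enable
    path.write_text(json.dumps(settings, indent=2))
    return settings

# Demonstrate on a throwaway file instead of the real settings.json.
demo = Path(tempfile.mkdtemp()) / "settings.json"
enabled = toggle_mcp(demo, enable=True)
disabled = toggle_mcp(demo, enable=False)
```

Note that real VS Code settings files may contain comments (JSONC), which `json.loads` rejects; this sketch assumes a plain-JSON file.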
This project is based on concepts from the blog post "How to Extend GitHub Copilot with Python and Langchain via MCP" (originally published in Russian).
The original repository is available at: https://github.com/codenjoyme/mcpyrex-python
If you're experiencing issues with MCP connectivity or want to test the system without full MCP integration, this project provides an alternative HTTP-based architecture that allows you to quickly test and interact with LangChain tools.
The alternative setup consists of three components:
- Component 1: `mcp_server/server.py` - the original MCP server with LangChain tools (will run automatically with Component 2)
- Component 2: `mcp_server/proxy.py` - a full HTTP proxy server with a complete MCP protocol implementation
- Component 3: `mcp_server/execute.py` - a client for quick requests
Open a terminal and run:

```powershell
python mcp_server/proxy.py
```

The server will start on http://127.0.0.1:8080 and provide:
- `GET /health` - Server health check with MCP connection status
- `GET /tools` - List all available LangChain tools
- `POST /execute` - Execute tools using the full MCP protocol
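A minimal Python client for the proxy can be sketched with the standard library. The request-body shape (`tool`/`params` keys) is an assumption inferred from the CLI examples below, not a documented contract; verify it against `proxy.py` before relying on it:

```python
import json
import urllib.request

BASE = "http://127.0.0.1:8080"

def build_payload(tool: str, params: dict) -> bytes:
    # Hypothetical body shape for POST /execute; check proxy.py for the
    # actual fields it expects.
    return json.dumps({"tool": tool, "params": params}).encode("utf-8")

def execute(tool: str, params: dict) -> dict:
    """Send one tool invocation to the proxy (requires it to be running)."""
    req = urllib.request.Request(
        f"{BASE}/execute",
        data=build_payload(tool, params),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload("lng_count_words", {"input_text": "Hello world"})
```

Calling `execute(...)` only works while `proxy.py` is running; `build_payload` can be tested offline.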
Open another terminal and test the system.

Check server health:

```powershell
python mcp_server/execute.py health
```

List available tools:

```powershell
python mcp_server/execute.py list
```

Execute tools:

```powershell
# Count words in text
python mcp_server/execute.py exec lng_count_words --params '{\"input_text\": \"Hello world this is a test\"}'

# Math calculations
python mcp_server/execute.py exec lng_math_calculator --params '{\"expression\": \"2 + 3 * 4\"}'

# Chain of thought reasoning
python mcp_server/execute.py exec lng_chain_of_thought --params '{\"question\": \"What is 15 * 24?\"}'

# RAG functionality
python mcp_server/execute.py exec lng_rag_add_data --params '{\"input_text\": \"Your document content\"}'
python mcp_server/execute.py exec lng_rag_search --params '{\"query\": \"search term\"}'
```

Show usage examples:

```powershell
python mcp_server/execute.py examples
```

If you encounter issues:
- Server not starting: Check if port 8080 is available
- Connection refused: Ensure the test server is running
- JSON parsing errors: Use single quotes for PowerShell parameters
- Tool errors: Check the server console for detailed error messages
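The JSON-quoting pitfalls above are easiest to avoid by letting Python build the `--params` string instead of hand-writing it. This sketch uses only the standard library; `shlex.quote` covers bash/zsh, while PowerShell prefers the whole JSON string wrapped in single quotes:

```python
import json
import shlex

params = {"input_text": 'He said "hello"'}

# json.dumps always emits valid JSON, even with embedded quotes,
# so it is safer than hand-writing the --params argument.
arg = json.dumps(params)

# shlex.quote wraps the JSON safely for bash/zsh command lines.
cli_arg = shlex.quote(arg)
command = f"python mcp_server/execute.py exec lng_count_words --params {cli_arg}"
```

Printing `command` gives a shell line you can paste directly into a bash terminal.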
You can also test the system directly in your browser:
- Health check: http://127.0.0.1:8080/health
- Use tools like Postman or curl for POST requests to `/execute`
This alternative setup provides a reliable fallback when MCP is not available while maintaining full access to your LangChain tools.