Monacopilot is a powerful and customizable AI auto-completion plugin for the Monaco Editor, inspired by GitHub Copilot.
- 🎯 Multiple AI Provider Support (Anthropic, OpenAI, Groq, Google)
- 🔄 Real-time Code Completions
- ⚡️ Efficient Caching System
- 🎨 Context-Aware Suggestions
- 🛠️ Customizable Completion Behavior
- 📦 Framework Agnostic
- 🔌 Custom Model Support
- 🎮 Manual Trigger Support
- Examples
- Demo
- Installation
- Usage
- Register Completion Options
- Copilot Options
- Completion Request Options
- Cross-Language API Handler Implementation
- Contributing
Here are some examples of how to integrate Monacopilot into your project:
inline-completions-demo.mp4
In the demo, we are using the `onTyping` trigger mode with the Groq model, which is why the completions appear so quickly; Groq provides very fast response times.
To install Monacopilot, run:
npm install monacopilot
Set up an API handler to manage auto-completion requests. An example using Express.js:
import express from 'express';
import {Copilot} from 'monacopilot';

const app = express();
const port = process.env.PORT || 3000;

const copilot = new Copilot(process.env.ANTHROPIC_API_KEY!, {
  provider: 'anthropic',
  model: 'claude-3-5-haiku',
});

app.use(express.json());

app.post('/complete', async (req, res) => {
  const {completion, error, raw} = await copilot.complete({
    body: req.body,
  });

  // Process the raw LLM response if needed.
  // `raw` is undefined when an error occurred (i.e. when `error` is present).
  if (raw) {
    calculateCost(raw.usage.input_tokens);
  }

  // Handle errors if present. Return early so a second response isn't sent.
  if (error) {
    console.error('Completion error:', error);
    return res.status(500).json({completion: null, error});
  }

  res.status(200).json({completion});
});

app.listen(port);
The handler should return a JSON response with the following structure:
{
  "completion": "Generated completion text"
}
Or in case of an error:
{
  "completion": null,
  "error": "Error message"
}
If your backend is not written in JavaScript, refer to the Cross-Language API Handler Implementation section for guidance on implementing the handler in your preferred language.
Now, Monacopilot is set up to send completion requests to the `/complete` endpoint and receive completions in response.

The `copilot.complete` method processes the request body sent by Monacopilot and returns the corresponding completion.
Now, let's integrate AI auto-completion into your Monaco editor. Here's how you can do it:
import * as monaco from 'monaco-editor';
import {registerCompletion} from 'monacopilot';

const editor = monaco.editor.create(document.getElementById('container'), {
  language: 'javascript',
});

registerCompletion(monaco, editor, {
  // Examples:
  // - '/api/complete' if you're using Next.js API routes or a similar framework.
  // - 'https://api.example.com/complete' for a separate API server.
  // Ensure this endpoint is reachable from the browser.
  endpoint: 'https://api.example.com/complete',
  // The language of the editor.
  language: 'javascript',
});
Note: The `registerCompletion` function returns a `completion` object with a `deregister` method. Use this method to clean up the completion functionality when it's no longer needed. For example, in a React component, you can call `completion.deregister()` within the `useEffect` cleanup function to ensure proper disposal when the component unmounts.
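A minimal sketch of that React pattern (the endpoint, container element, and option values here are illustrative):

```jsx
import {useEffect, useRef} from 'react';
import * as monaco from 'monaco-editor';
import {registerCompletion} from 'monacopilot';

function CodeEditor() {
  const containerRef = useRef(null);

  useEffect(() => {
    // Create the editor and register AI completions on mount.
    const editor = monaco.editor.create(containerRef.current, {
      language: 'javascript',
    });
    const completion = registerCompletion(monaco, editor, {
      endpoint: '/api/complete', // illustrative endpoint
      language: 'javascript',
    });

    // Clean up the completion and dispose of the editor on unmount.
    return () => {
      completion.deregister();
      editor.dispose();
    };
  }, []);

  return <div ref={containerRef} style={{height: '400px'}} />;
}
```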
🎉 Congratulations! The AI auto-completion is now connected to the Monaco Editor. Start typing and see completions in the editor.
The `trigger` option determines when the completion service provides code completions. You can choose between receiving completions in real time as you type or after a brief pause in typing.
registerCompletion(monaco, editor, {
  trigger: 'onTyping',
});
Trigger | Description | Notes |
---|---|---|
`'onIdle'` (default) | Provides completions after a brief pause in typing. | This approach is less resource-intensive, as it only initiates a request when the editor is idle. |
`'onTyping'` | Provides completions in real-time as you type. | Best suited for models with low response latency, such as Groq models or Claude 3.5 Haiku. This trigger mode initiates additional background requests to deliver real-time suggestions, a method known as predictive caching. |
`'onDemand'` | Does not provide completions automatically. | Completions are triggered manually using the `trigger` function from the `registerCompletion` return. This allows for precise control over when completions are provided. |
on-typing-demo.mp4
Note: If you prefer real-time completions, you can set the `trigger` option to `'onTyping'`. This may increase the number of requests made to the provider and the cost. However, this should not be too costly, since most small models are very inexpensive.
If you prefer not to trigger completions automatically (e.g., on typing or on idle), you can trigger completions manually. This is useful in scenarios where you want to control when completions are provided, such as through a button click or a keyboard shortcut.
const completion = registerCompletion(monaco, editor, {
  trigger: 'onDemand',
});

completion.trigger();
To set up manual triggering, configure the `trigger` option to `'onDemand'`. This disables automatic completions, allowing you to call the `completion.trigger()` method explicitly when needed.
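For instance, you could wire the manual trigger to a button click (the button element here is illustrative):

```js
// Assumes `completion` comes from registerCompletion(..., {trigger: 'onDemand'}) as above.
document.querySelector('#complete-button')?.addEventListener('click', () => {
  completion.trigger();
});
```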
You can set up completions to trigger when the `Ctrl+Shift+Space` keyboard shortcut is pressed.
const completion = registerCompletion(monaco, editor, {
  trigger: 'onDemand',
});

// Bind the keyboard shortcut on the editor instance.
editor.addCommand(
  monaco.KeyMod.CtrlCmd | monaco.KeyMod.Shift | monaco.KeyCode.Space,
  () => {
    completion.trigger();
  },
);
You can add a custom editor action to trigger completions manually.
const completion = registerCompletion(monaco, editor, {
  trigger: 'onDemand',
});

monaco.editor.addEditorAction({
  id: 'monacopilot.triggerCompletion',
  label: 'Complete Code',
  contextMenuGroupId: 'navigation',
  keybindings: [
    monaco.KeyMod.CtrlCmd | monaco.KeyMod.Shift | monaco.KeyCode.Space,
  ],
  run: () => {
    completion.trigger();
  },
});
Improve the quality and relevance of Copilot's suggestions by providing additional code context from other files in your project. This feature allows Copilot to understand the broader scope of your codebase, resulting in more accurate and contextually appropriate completions.
registerCompletion(monaco, editor, {
  relatedFiles: [
    {
      path: './utils.js',
      content:
        'export const reverse = (str) => str.split("").reverse().join("")',
    },
  ],
});
For instance, if you begin typing `const isPalindrome =` in your current file, Copilot will recognize the `reverse` function from the `utils.js` file you provided earlier. It will then suggest a completion that utilizes this function.
Specify the name of the file being edited to receive more contextually relevant completions.
registerCompletion(monaco, editor, {
  filename: 'utils.js', // e.g., "index.js", "utils/objects.js"
});
Now, the completions will be more relevant to the file's context.
Enable completions tailored to specific technologies by using the `technologies` option.
registerCompletion(monaco, editor, {
  technologies: ['react', 'next.js', 'tailwindcss'],
});
This configuration will provide completions relevant to React, Next.js, and Tailwind CSS.
To manage potentially lengthy code in your editor, you can limit the number of lines included in the completion request using the `maxContextLines` option.

For example, if the code in your editor may grow to 500+ lines, you don't need to send all of it to the model; doing so would increase costs due to the large number of input tokens. Instead, you can set `maxContextLines` to, say, `80` or `100`, depending on how accurate you want the completions to be and how much you're willing to pay for the model.
registerCompletion(monaco, editor, {
  maxContextLines: 80,
});
Note: If you're using `Groq` as your provider, it's recommended to set `maxContextLines` to `60` or less due to its low rate limits and lack of pay-as-you-go pricing. However, Groq is expected to offer pay-as-you-go pricing in the near future.
Monacopilot caches completions by default (`enableCaching: true`). It uses a FIFO (First In, First Out) strategy, reusing cached completions when the context and cursor position match while editing. To disable caching:
registerCompletion(monaco, editor, {
  enableCaching: false,
});
You can handle errors that occur during completion requests by providing an `onError` function when calling `registerCompletion`. This allows you to customize error handling and logging based on your application's needs. Note that providing `onError` disables Monacopilot's default error handling and logging behavior.
registerCompletion(monaco, editor, {
  onError: error => {
    console.error(error);
  },
});
The `requestHandler` option in the `registerCompletion` function allows you to handle requests sent to the specified endpoint, offering a high degree of customization for both requests and responses. By leveraging this functionality, you can manipulate and customize the request or response to meet your specific requirements.
registerCompletion(monaco, editor, {
  endpoint: 'https://api.example.com/complete',
  // ... other options
  requestHandler: async ({endpoint, body}) => {
    const response = await fetch(endpoint, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    });

    const data = await response.json();

    return {
      completion: data.completion,
    };
  },
});
The `requestHandler` function takes an object with `endpoint` and `body` as parameters.
Property | Type | Description |
---|---|---|
`endpoint` | `string` | The endpoint to which the request is sent. This is the same as the `endpoint` in `registerCompletion`. |
`body` | `object` | The body of the request processed by Monacopilot. |
Note: The `body` object contains properties generated by Monacopilot. If you need to include additional properties in the request body, you can create a new object that combines the existing `body` with your custom properties. For example: `const customBody = { ...body, myCustomProperty: 'value' };`
The `requestHandler` should return an object with the following property:
Property | Type | Description |
---|---|---|
`completion` | `string` or `null` | The completion text to be inserted into the editor. Return `null` if no completion is available. |
The example below demonstrates how to use the `requestHandler` function for more customized handling:
registerCompletion(monaco, editor, {
  endpoint: 'https://api.example.com/complete',
  // ... other options
  requestHandler: async ({endpoint, body}) => {
    try {
      const response = await fetch(endpoint, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'X-Request-ID': generateUniqueId(),
        },
        body: JSON.stringify({
          ...body,
          additionalProperty: 'value',
        }),
      });

      if (!response.ok) {
        throw new Error(`HTTP error! status: ${response.status}`);
      }

      const data = await response.json();

      if (data.error) {
        console.error('API Error:', data.error);
        return {completion: null};
      }

      return {completion: data.completion.trim()};
    } catch (error) {
      console.error('Fetch error:', error);
      return {completion: null};
    }
  },
});
You can specify a different provider and model by setting the `provider` and `model` parameters in the `Copilot` instance.
const copilot = new Copilot(process.env.OPENAI_API_KEY, {
  provider: 'openai',
  model: 'gpt-4o',
});
The default provider is `anthropic`, and the default model is `claude-3-5-haiku`.
Tip: Even though the default provider and model are `anthropic` and `claude-3-5-haiku`, it's always recommended to specify a provider and model explicitly when using Monacopilot. This ensures your code remains consistent even if the default settings change in future updates.
There are other providers and models available. Here is a list:
Provider | Models |
---|---|
Groq | `llama-3-70b` |
OpenAI | `gpt-4o`, `gpt-4o-mini`, `o1-mini` (beta model) |
Anthropic | `claude-3-5-sonnet`, `claude-3-haiku`, `claude-3-5-haiku` |
Google | `gemini-1.5-pro`, `gemini-1.5-flash`, `gemini-1.5-flash-8b` |
You can use a custom LLM that isn't built into Monacopilot by setting up a `model` when you create a new Copilot. This feature lets you connect to LLMs from other services or your own custom-built models.
Please ensure you are using a high-quality model, especially for coding tasks, to get the best and most accurate completions. Also, use a model with very low response latency (preferably under 1.5 seconds) to enjoy a great experience and utilize the full power of Monacopilot.
const copilot = new Copilot(process.env.HUGGINGFACE_API_KEY, {
  // You don't need to set the provider if you are using a custom model.
  // provider: 'huggingface',
  model: {
    config: (apiKey, prompt) => ({
      endpoint:
        'https://api-inference.huggingface.co/models/openai-community/gpt2',
      headers: {
        Authorization: `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: {
        inputs: prompt.user,
        parameters: {
          max_length: 100,
          num_return_sequences: 1,
          temperature: 0.7,
        },
      },
    }),
    transformResponse: response => ({text: response[0].generated_text}),
  },
});
The `model` option accepts an object with two functions:
Function | Description | Type |
---|---|---|
`config` | A function that receives the API key and prompt data, and returns the configuration for the custom model API request. | `(apiKey: string, prompt: { system: string; user: string }) => { endpoint: string; body?: object; headers?: object }` |
`transformResponse` | A function that takes the raw/parsed response from the custom model API and returns an object with the `text` property. | `(response: unknown) => { text: string \| null }` |
The `config` function must return an object with the following properties:
Property | Type | Description |
---|---|---|
`endpoint` | `string` | The URL of the custom model API endpoint. |
`body` | `object` or `undefined` | The body of the custom model API request. |
`headers` | `object` or `undefined` | The headers of the custom model API request. |
The `transformResponse` function must return an object with a `text` property. This `text` property should contain the text generated by the custom model. If no valid text can be extracted, the function should return `null` for the `text` property.
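If your custom endpoint can return an empty or unexpected payload, you can guard for that in `transformResponse`. A small sketch (the endpoint and response shape here are hypothetical, not a real provider API):

```js
import {Copilot} from 'monacopilot';

const copilot = new Copilot(process.env.CUSTOM_MODEL_API_KEY, {
  model: {
    config: (apiKey, prompt) => ({
      endpoint: 'https://example.com/v1/generate', // hypothetical endpoint
      headers: {Authorization: `Bearer ${apiKey}`},
      body: {inputs: prompt.user},
    }),
    transformResponse: response => ({
      // Fall back to null when the (hypothetical) payload has no generated text.
      text: response?.[0]?.generated_text ?? null,
    }),
  },
});
```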
You can add custom headers to the provider's completion requests. For example, if you select `OpenAI` as your provider, you can add a custom header to the OpenAI completion requests made by Monacopilot.
copilot.complete({
  options: {
    headers: {
      'X-Custom-Header': 'custom-value',
    },
  },
});
You can customize the prompt used for generating completions by providing a `customPrompt` function in the `options` parameter of the `copilot.complete` method. This allows you to tailor the AI's behavior to your specific needs.
copilot.complete({
  options: {
    customPrompt: metadata => ({
      system: 'Your custom system prompt here',
      user: 'Your custom user prompt here',
    }),
  },
});
The `system` and `user` prompts in the `customPrompt` function are optional. If you omit either one, the default prompt for that field will be used. Example of customizing only the system prompt:
copilot.complete({
  options: {
    customPrompt: metadata => ({
      system:
        'You are an AI assistant specialized in writing React components, focusing on creating clean...',
    }),
  },
});
The `customPrompt` function receives a `completionMetadata` object, which contains information about the current editor state and can be used to tailor the prompt.
Property | Type | Description |
---|---|---|
`language` | `string` | The programming language of the code. |
`cursorPosition` | `{ lineNumber: number; column: number }` | The current cursor position in the editor. |
`filename` | `string` or `undefined` | The name of the file being edited. Only available if you have provided the `filename` option in the `registerCompletion` function. |
`technologies` | `string[]` or `undefined` | An array of technologies used in the project. Only available if you have provided the `technologies` option in the `registerCompletion` function. |
`relatedFiles` | `object[]` or `undefined` | An array of objects containing the `path` and `content` of related files. Only available if you have provided the `relatedFiles` option in the `registerCompletion` function. |
`textAfterCursor` | `string` | The text that appears after the cursor. |
`textBeforeCursor` | `string` | The text that appears before the cursor. |
`editorState` | `object` | An object containing the `completionMode` property. |
The `editorState.completionMode` can be one of the following:
Mode | Description |
---|---|
`insert` | Indicates that there is a character immediately after the cursor. In this mode, the AI will generate content to be inserted at the cursor position. |
`complete` | Indicates that there is a character after the cursor, but not immediately. In this mode, the AI will generate content to complete the text from the cursor position. |
`continue` | Indicates that there is no character after the cursor. In this mode, the AI will generate content to continue the text from the cursor position. |
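As a rough illustration of how these modes might be used, a `customPrompt` could adapt its instruction to the current mode (the prompt wording below is just an example, not Monacopilot's default prompt):

```js
copilot.complete({
  options: {
    customPrompt: metadata => {
      const {completionMode} = metadata.editorState;
      // Pick an instruction that matches how the generated text will be placed.
      const instruction =
        completionMode === 'insert'
          ? 'Insert code at the cursor position without repeating the surrounding text.'
          : completionMode === 'complete'
            ? 'Complete the code between the cursor and the text that follows it.'
            : 'Continue the code naturally from the cursor position.';

      return {
        user: `${instruction}\n\n${metadata.textBeforeCursor}<cursor>${metadata.textAfterCursor}`,
      };
    },
  },
});
```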
For additional `completionMetadata` needs, please open an issue.
The `customPrompt` function should return an object with two properties:
Property | Type | Description |
---|---|---|
`system` | `string` or `undefined` | A string representing the system prompt for the model. |
`user` | `string` or `undefined` | A string representing the user prompt for the model. |
Here's an example of a custom prompt that focuses on generating React component code:
const customPrompt = ({textBeforeCursor, textAfterCursor}) => ({
  system:
    'You are an AI assistant specialized in writing React components. Focus on creating clean, reusable, and well-structured components.',
  user: `Please complete the following React component:
${textBeforeCursor}
// Cursor position
${textAfterCursor}
Use modern React practices and hooks where appropriate. If you're adding new props, make sure to include proper TypeScript types. Please provide only the completed part of the code without additional comments or explanations.`,
});

copilot.complete({
  options: {customPrompt},
});
By using a custom prompt, you can guide the model to generate completions that better fit your coding style, project requirements, or specific technologies you're working with.
While the example in this documentation uses JavaScript/Node.js (which is recommended), you can set up the API handler in any language or framework. For JavaScript, Monacopilot provides a built-in function that handles all the necessary steps, such as generating the prompt, sending it to the model, and processing the response. However, if you're using a different language, you'll need to implement these steps manually. Here's a general approach to implement the handler in your preferred language:
1. Create an endpoint that accepts POST requests (e.g., `/complete`).
2. The endpoint should expect a JSON body containing completion metadata.
3. Use the metadata to construct a prompt for your LLM.
4. Send the prompt to your chosen LLM and get the completion.
5. Return a JSON response with the following structure:

   `{ "completion": "Generated completion text" }`

   Or, in case of an error:

   `{ "completion": null, "error": "Error message" }`
- The prompt should instruct the model to return only the completion text, without any additional formatting or explanations.
- The completion text should be ready for direct insertion into the editor.
Check out the prompt.ts file to see how Monacopilot generates the prompt. This will give you an idea of how to structure the prompt for your LLM to achieve the best completions.
The request body's `completionMetadata` object contains essential information for crafting a prompt for the LLM to generate accurate completions. See the Completion Metadata section for more details.
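For reference, the body your handler receives looks roughly like this (field availability follows the Completion Metadata table above; the values are made up):

```json
{
  "completionMetadata": {
    "language": "javascript",
    "filename": "utils.js",
    "technologies": ["react"],
    "textBeforeCursor": "const isPalindrome = ",
    "textAfterCursor": "",
    "cursorPosition": {"lineNumber": 1, "column": 22},
    "editorState": {"completionMode": "continue"}
  }
}
```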
Here's a basic example using Python and FastAPI:
from fastapi import FastAPI, Request

app = FastAPI()

@app.post('/complete')
async def handle_completion(request: Request):
    try:
        body = await request.json()
        metadata = body['completionMetadata']

        prompt = f"""Please complete the following {metadata['language']} code:
{metadata['textBeforeCursor']}
<cursor>
{metadata['textAfterCursor']}
Use modern {metadata['language']} practices and hooks where appropriate. Please provide only the completed part of the code without additional comments or explanations."""

        # Simulate a response from a model
        response = "Your model's response here"

        return {
            'completion': response,
            'error': None
        }
    except Exception as e:
        return {
            'completion': None,
            'error': str(e)
        }
Now, Monacopilot is set up to send completion requests to the `/complete` endpoint and receive completions in response.
registerCompletion(monaco, editor, {
  endpoint: 'https://my-python-api.com/complete',
  // ... other options
});
For guidelines on contributing, please read the contributing guide.
We welcome contributions from the community to enhance Monacopilot's capabilities and make it even more powerful. ❤️