This is a fork of the CodeGenX open-source plugin, with a simple backend implemented for personal use.
There are no packaged releases; the plugin must be run from source.
To run this plugin experimentally:
- install and run the backend
- open this repository in VSCode, run `npm ci`, and press F5
This project was created in less than 6 hours and has almost no features at the moment.
Current short-term goals:
- fix obvious bugs (context length, failure to retry completion requests)
- configuration options (inference config, multiple completion options, model choice)
- copilot-like tab-completion
- reduce hardware requirements (e.g. lower RAM spikes, lower VRAM usage via LLM.int8, use accelerate and other speedups)
The default model is fine-tuned on Python code. Details on how to change this can be found here.
This project exists because DeepGenX open-sourced their extension.