Bring your own copilot server and customize commands to refactor instead of autofill or tabbed completion.
Updated Aug 3, 2023 - JavaScript
Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
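The GPU memory estimate such a calculator produces follows from a well-known back-of-envelope formula: weight memory is roughly parameter count times bits per parameter divided by 8, before KV-cache and activation overhead. A minimal sketch (illustrative only; the function name and interface are assumptions, not the repository's actual API):

```javascript
// Rough estimate of GPU memory needed for model weights alone.
// Ignores KV cache, activations, and framework overhead, which add more on top.
function estimateWeightMemoryGiB(paramCountBillions, bitsPerParam) {
  const bytes = paramCountBillions * 1e9 * (bitsPerParam / 8);
  return bytes / 1024 ** 3; // convert bytes to GiB
}

// Example: a 7B-parameter model at 4-bit quantization (as in llama.cpp/QLoRA setups)
console.log(estimateWeightMemoryGiB(7, 4).toFixed(2)); // ~3.26 GiB
```

Quantizing from 16-bit to 4-bit cuts this figure by 4x, which is why 4-bit ggml/QLoRA variants of 7B models fit on consumer GPUs.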
JavaScript bindings for the ggml-js library