An Electron application that integrates with llama.cpp to process text prompts with local LLMs. This project demonstrates how to create a Node.js native addon that interfaces with the llama.cpp library.
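The addon is consumed from Node.js like any other native module. As a rough sketch of the idea (the function names `loadModel` and `processPrompt` and the `build/Release` output path are assumptions for illustration, not the project's confirmed API):

```javascript
// Hypothetical usage sketch -- see the actual addon code for the real API.
const addon = require('./build/Release/addon.node'); // node-gyp's default output path

addon.loadModel('/path/to/model.gguf');              // assumed synchronous load call

// assumed: processPrompt runs off the main thread and resolves with the model's output
addon.processPrompt('Explain Electron in one sentence.')
  .then((result) => console.log(result))
  .catch((err) => console.error('Prompt failed:', err));
```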
## Features

- Load LLM models through a user-friendly interface
- Process text prompts asynchronously in a separate thread (see the sketch below)
- Built with Electron for cross-platform compatibility
- Direct integration with llama.cpp via a Node.js addon
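In an Electron app, asynchronous prompt processing typically flows through the main process, which owns the native addon and answers IPC requests from the renderer. A minimal sketch of that wiring, assuming the hypothetical channel names `load-model` and `process-prompt` and the addon API shown above:

```javascript
// main.js sketch (hypothetical wiring, not the project's confirmed code)
const path = require('path');
const { app, BrowserWindow, ipcMain } = require('electron');
const addon = require('./build/Release/addon.node'); // assumed output path

// Route renderer requests to the addon; handle() returns the result as a Promise.
ipcMain.handle('load-model', async (_event, modelPath) => {
  return addon.loadModel(modelPath);                 // assumed addon call
});

ipcMain.handle('process-prompt', async (_event, prompt) => {
  return addon.processPrompt(prompt);                // assumed async addon call
});

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: { preload: path.join(__dirname, 'preload.js') },
  });
  win.loadFile('index.html');
});
```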
## Prerequisites

- Node.js (v16+)
- npm or yarn
- A C++ compiler (GCC, Clang, or MSVC)
- CMake (for building llama.cpp)
- Git
## Installation

1. Clone this repository:

   ```bash
   git clone https://github.com/aruntemme/llama.cpp-electron.git
   cd llama.cpp-electron
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Clone and build llama.cpp (required before building the Node.js addon):

   ```bash
   git clone https://github.com/ggerganov/llama.cpp.git
   cd llama.cpp
   mkdir build
   cd build
   cmake ..
   cmake --build . --config Release
   cd ../..
   ```

4. Build the Node.js addon:

   ```bash
   npm run build
   ```

5. Start the application:

   ```bash
   npm start
   ```
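Before launching, it can help to verify that the compiled addon actually loads. A small check script (assuming node-gyp's default `build/Release/addon.node` output path; adjust if your target name differs):

```javascript
// check-addon.js -- hypothetical helper, not part of the repository
try {
  require('./build/Release/addon.node'); // assumed build output path
  console.log('Addon loaded successfully');
} catch (err) {
  console.error('Addon failed to load:', err.message);
}
```

Run it with `node check-addon.js` from the project root.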
## Usage

1. Launch the application
2. Click "Select Model" to choose a llama.cpp-compatible model file (.bin or .gguf)
3. Enter a prompt in the text area
4. Click "Process Prompt" to run the prompt through the loaded model (sketched below)
5. View the output in the results section
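Under the hood, the button click typically travels from the renderer to the main process over IPC. A sketch of what the renderer side might look like (the element IDs and the `window.llamaAPI` bridge are assumptions for illustration):

```javascript
// renderer.js sketch (hypothetical element IDs and bridge API)
document.getElementById('process-btn').addEventListener('click', async () => {
  const prompt = document.getElementById('prompt-input').value;
  try {
    const result = await window.llamaAPI.processPrompt(prompt); // assumed preload bridge
    document.getElementById('results').textContent = result;
  } catch (err) {
    document.getElementById('results').textContent = `Error: ${err.message}`;
  }
});
```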
## Models

You'll need to download LLM model files separately. Compatible models include:

- GGUF-format models (recommended)
- Other formats supported by llama.cpp

You can download models from Hugging Face or other repositories. Place them in a location accessible by the application.
## Troubleshooting

- **Model loading errors**: Ensure your model file is compatible with llama.cpp
- **Addon build errors**: Make sure llama.cpp is properly built (Installation step 3) before building the addon
- **Performance issues**: Large models require more memory and processing power; a smaller quantized model will respond faster
- **Cannot find llama.h**: Make sure you've built llama.cpp using the steps above and that the addon's build configuration points at the llama.cpp headers (see the `binding.gyp` sketch below)
- **Model fails to load**: Verify the model path is correct and the file is in a supported format
- **Electron startup errors**: Check the terminal output for detailed error messages
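For the `llama.h` case specifically, the addon's `binding.gyp` needs to tell node-gyp where the llama.cpp headers live. A hypothetical fragment (the source path and include directories are assumptions, and header locations vary between llama.cpp versions):

```
{
  "targets": [
    {
      "target_name": "addon",
      "sources": [ "addon/addon.cpp" ],   # assumed source file
      "include_dirs": [
        "llama.cpp",                      # llama.h sits at the repo root in older checkouts
        "llama.cpp/include",              # ...and under include/ in newer ones
        "llama.cpp/ggml/include"          # ggml headers in newer checkouts
      ]
    }
  ]
}
```

You'll also need to link against the library produced in the llama.cpp `build/` directory; the exact `libraries` entries depend on your platform and llama.cpp version.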
## Project Structure

- `src/` - Main application source code
  - `addon/` - C++ Node.js addon code
  - `main.js` - Electron main process
  - `preload.js` - Preload script for IPC
  - `renderer.js` - Frontend logic
  - `index.html` - Main application UI
  - `styles.css` - Application styling
- `llama.cpp/` - Submodule for llama.cpp library
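Since `preload.js` is the IPC bridge between the sandboxed renderer and the main process, it is typically where a small, explicit API is exposed. A sketch (the `llamaAPI` name and channel strings are hypothetical, chosen to match the sketches above):

```javascript
// preload.js sketch (hypothetical bridge, not the project's confirmed code)
const { contextBridge, ipcRenderer } = require('electron');

// Expose only the two calls the UI needs; invoke() returns a Promise.
contextBridge.exposeInMainWorld('llamaAPI', {
  loadModel: (modelPath) => ipcRenderer.invoke('load-model', modelPath),
  processPrompt: (prompt) => ipcRenderer.invoke('process-prompt', prompt),
});
```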
## License

This project is licensed under the ISC License - see the LICENSE file for details.