This is a basic cybernetic framework that uses the llama-cpp-python module with CUDA support to run a spaceship-themed roleplay. The agentic AI crew is a custom framework written in pure Python, with no pre-built agent library. The application offers both a command-line interface (CLI) and a Matrix-themed graphical user interface (GUI) built with Tkinter, and it includes basic functionality to open, close, and prepare GGUF model files.
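For concreteness, here is a minimal, hedged sketch of what a pure-Python crew pattern can look like. The `Agent` and `Crew` class names, the prompt format, and the sampling parameters are illustrative assumptions, not this project's actual code:

```python
# A minimal sketch of a pure-Python agent crew sharing one llama-cpp-python
# model handle. Names and prompt format are illustrative assumptions.
from llama_cpp import Llama

class Agent:
    def __init__(self, name: str, role: str, llm: Llama):
        self.name = name
        self.role = role
        self.llm = llm

    def respond(self, message: str) -> str:
        # Each agent prepends its role so the shared model stays in character.
        prompt = (
            f"You are {self.name}, the ship's {self.role}.\n"
            f"User: {message}\nAssistant:"
        )
        out = self.llm(prompt, max_tokens=256, stop=["User:"])
        return out["choices"][0]["text"].strip()

class Crew:
    def __init__(self, agents: list[Agent]):
        self.agents = {a.name.lower(): a for a in agents}

    def dispatch(self, agent_name: str, message: str) -> str:
        # Route the message to the named crew member.
        return self.agents[agent_name.lower()].respond(message)
```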
The exact contents of your .env file and tool paths depend on your setup, but here is a template for what they should look like.
A correct .env file contains key-value pairs for sensitive information and configurable settings. For this project, that might include:
```env
# Path to your GGUF model file
MODEL_PATH="C:\\path\\to\\your\\model\\file.gguf"

# Or for a non-Windows path
# MODEL_PATH="/path/to/your/model/file.gguf"

# Other potential variables
# LOG_LEVEL="INFO"
# API_KEY="your_api_key_if_needed"
```

The tool paths themselves are defined within your Python scripts (such as tools.py), which reference these .env variables via a library like python-dotenv.
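As a rough illustration, assuming tools.py loads the model path at startup, the python-dotenv usage might look like the sketch below. The `resolve_model_path` helper and its error handling are hypothetical, not the project's actual implementation:

```python
# tools.py -- a minimal sketch of reading .env values with python-dotenv.
import os
from pathlib import Path
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

def resolve_model_path() -> Path:
    """Return the GGUF model path from the environment, validating it exists."""
    raw = os.getenv("MODEL_PATH")
    if not raw:
        raise RuntimeError("MODEL_PATH is not set; check your .env file")
    path = Path(raw)
    if not path.is_file():
        raise FileNotFoundError(f"No GGUF model found at {path}")
    return path
```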
These commands are a good starting point for a Windows setup, but there is a potential compatibility issue (flagged in the note after the steps) and a few small clarity improvements worth making.
Here's a refined version of the installation steps:
- Create and Activate Virtual Environment
```powershell
py -3.12 -m venv .rvn
.rvn\Scripts\Activate.ps1
```
- Upgrade pip and Core Packages
```powershell
python.exe -m pip install --upgrade pip
pip install --upgrade pip setuptools wheel
```
- Install Python Libraries
```powershell
pip install python-dotenv pydantic requests
```
- Install llama-cpp-python with CUDA (a model-loading sketch follows these steps)
```powershell
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu125 --upgrade --force-reinstall --no-cache-dir
```
- Install PyTorch with CUDA
```powershell
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
```
- Run the Application
```powershell
python warp-core.py
```
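To sanity-check the install, here is a minimal, hedged sketch of loading a GGUF model with llama-cpp-python and full GPU offload. `MODEL_PATH` comes from the .env template above; the context size and generation parameters are illustrative values:

```python
# A minimal sketch of loading the GGUF model with GPU offload.
import os
from dotenv import load_dotenv
from llama_cpp import Llama

load_dotenv()

llm = Llama(
    model_path=os.environ["MODEL_PATH"],
    n_gpu_layers=-1,   # offload every layer to CUDA; reduce if VRAM is limited
    n_ctx=4096,        # context window; illustrative value
    verbose=False,
)

# Quick smoke test: one short completion.
out = llm("Captain's log, stardate", max_tokens=32)
print(out["choices"][0]["text"])
```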
Important Note: The CUDA versions for llama-cpp-python (cu125) and PyTorch (cu128) differ. While this mismatch may work, it's generally best practice to align your CUDA dependencies to avoid unexpected issues. Check whether a llama-cpp-python wheel is available for cu128 to maintain consistency.
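As a quick check of which CUDA build PyTorch actually installed, you can run the snippet below. This only verifies the PyTorch side; llama-cpp-python reports its own CUDA details in its startup logs when a model is loaded with `verbose=True`:

```python
# Verify the PyTorch CUDA build and that a GPU is visible.
import torch

print("PyTorch CUDA build:", torch.version.cuda)   # e.g. "12.8"
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```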
- Full API and usage docs are available in docs/index.md.