This project provides a Python-based interactive command-line interface that emulates a traditional shell environment. It supports standard command execution and tab completion, and integrates with a Large Language Model (LLM) to interpret natural language commands. Users can execute typical shell commands or invoke the LLM for advanced assistance by prefixing commands with `!`.
- Interactive Shell Interface: Real-time command input with tab completion.
- Command Execution: Run standard shell commands (`dir`, `cd`, etc.) and display their output.
- LLM Integration: Interpret natural language commands prefixed with `!` and convert them into executable shell commands.
- Confirmation Prompt: Prompt users for confirmation before executing commands suggested by the LLM.
- Cross-Platform Support: Compatible with Windows, macOS, and Linux environments.
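Taken together, the features above could be wired up in a loop along these lines. This is a minimal sketch, not the project's actual code; the completer word list and the helper names are assumptions:

```python
import subprocess

def run_command(command: str) -> str:
    """Run a shell command and return its combined stdout/stderr."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

def main() -> None:
    # Imported here so run_command stays importable without prompt_toolkit.
    from prompt_toolkit import PromptSession
    from prompt_toolkit.completion import WordCompleter

    completer = WordCompleter(["dir", "cd", "exit"])  # assumed completion words
    session = PromptSession(completer=completer)
    while True:
        line = session.prompt("> ")
        if line.strip() == "exit":
            break
        print(run_command(line), end="")
```

Calling `main()` starts the interactive loop; `run_command` is what both the direct path and the LLM-confirmed path would ultimately call.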
- Python: Version 3.8 or higher.
- C Compiler: Required for building certain dependencies.
  - Windows: Visual Studio Build Tools
  - macOS: Xcode Command Line Tools
  - Linux: GCC or Clang
- `prompt_toolkit`: For the interactive shell interface.
- `llama-cpp-python`: Python bindings for the `llama.cpp` library, enabling LLM functionalities.
- `huggingface_hub`: Interface to download models from Hugging Face.
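The dependencies above could be collected in a `requirements.txt` (unpinned here, since the project does not state tested versions):

```text
prompt_toolkit
llama-cpp-python
huggingface_hub
```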
- Install `prompt_toolkit` and `huggingface_hub`:

  ```shell
  pip install prompt_toolkit huggingface_hub
  ```

- Install `llama-cpp-python`:

  Installing `llama-cpp-python` may require additional configuration based on your operating system and hardware. Below are the general steps:

  - macOS with Metal (MPS) Support:

    Ensure you have the Xcode Command Line Tools installed:

    ```shell
    xcode-select --install
    ```

    Then, install `llama-cpp-python` with Metal support:

    ```shell
    CMAKE_ARGS="-DGGML_METAL=on" pip install llama-cpp-python
    ```

  - Windows:

    Install the necessary build tools:

    - Download and install Visual Studio Build Tools.
    - During installation, select "Desktop development with C++".

    Then, install `llama-cpp-python`:

    ```shell
    pip install llama-cpp-python
    ```

  - Linux:

    Install the necessary build tools:

    ```shell
    sudo apt-get update
    sudo apt-get install build-essential
    ```

    Then, install `llama-cpp-python`:

    ```shell
    pip install llama-cpp-python
    ```
For detailed installation instructions and troubleshooting, refer to the `llama-cpp-python` documentation.
- Download a Quantized Model:

  To use the LLM functionalities, download a quantized model (e.g., 6-bit) from Hugging Face:

  ```python
  from huggingface_hub import hf_hub_download

  model_path = hf_hub_download(
      repo_id="TheBloke/LLaMA-2-7B-chat-GGML",
      filename="model.q6_K.bin",
  )
  ```

  This script downloads the specified model file and returns the local path where it is saved.
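Once downloaded, the model could be loaded and queried roughly as follows. The prompt wording and generation parameters are assumptions, and `build_prompt`/`suggest_command` are hypothetical helpers; note also that GGML-format files like the one above predate the current GGUF format, so a recent `llama-cpp-python` release may require a GGUF model instead:

```python
def build_prompt(request: str) -> str:
    """Wrap a natural-language request in an instruction asking the
    LLM to reply with a single shell command (wording is an assumption)."""
    return (
        "Translate the following request into a single shell command. "
        "Reply with the command only.\n"
        f"Request: {request}\nCommand:"
    )

def suggest_command(model_path: str, request: str) -> str:
    """Ask a local llama.cpp model for a shell command suggestion."""
    # Imported lazily so build_prompt stays testable without llama-cpp-python.
    from llama_cpp import Llama

    llm = Llama(model_path=model_path, n_ctx=512)  # assumed context size
    out = llm(build_prompt(request), max_tokens=32, stop=["\n"])
    return out["choices"][0]["text"].strip()
```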
- Running the Shell Interface:

  Save the provided Python script (e.g., `shell_interface.py`) and execute it:

  ```shell
  python shell_interface.py
  ```
- Command Examples:

  - Standard Command Execution:

    ```
    > dir
    ```

    Executes the `dir` command and displays the output.

  - Invoking the LLM:

    ```
    > !list files in the current directory
    ```

    The LLM interprets the natural language command and suggests an equivalent shell command:

    ```
    LLM suggested command: dir
    Do you want to execute this command? (yes/no): yes
    ```

    Upon confirmation, the suggested command is executed.
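The dispatch between the two paths above can be sketched as a small routing function, under the assumption that `!` is the only special prefix (names are illustrative):

```python
def route(line: str):
    """Classify an input line: ('llm', request) for a '!' prefix,
    otherwise ('shell', command)."""
    stripped = line.strip()
    if stripped.startswith("!"):
        return ("llm", stripped[1:].strip())
    return ("shell", stripped)

def confirmed(answer: str) -> bool:
    """Interpret the yes/no confirmation reply; only an explicit yes executes."""
    return answer.strip().lower() in ("y", "yes")
```

For example, `route("!list files")` yields `("llm", "list files")`, and the suggested command runs only when `confirmed(...)` returns true.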
- Exiting the Interface:

  To exit, type:

  ```
  > exit
  ```
```
.
├── shell_interface.py   # Main Python script for the shell interface
└── command_history.txt  # Command history file (automatically generated)
```
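In practice the script could simply hand `FileHistory("command_history.txt")` to `prompt_toolkit`'s `PromptSession` to get this file for free; a stdlib-only sketch of the same bookkeeping (helper names are assumptions) looks like:

```python
from pathlib import Path

def append_history(path: str, command: str) -> None:
    """Append one executed command to the history file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(command + "\n")

def load_history(path: str) -> list:
    """Return previously recorded commands, oldest first."""
    p = Path(path)
    if not p.exists():
        return []
    return p.read_text(encoding="utf-8").splitlines()
```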
- Enhanced LLM Integration: Connect to live LLM services for more dynamic command generation.
- Advanced Shell Features: Support for piping (`|`), redirection (`>`), and custom aliases.
- Dynamic Tab Completion: Offer command suggestions based on the current shell environment and user-defined commands.
- Extended Shell Support: Emulate environments like `bash`, `PowerShell`, or `cmd` more closely.
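Of the roadmap items, custom aliases are straightforward to sketch: a lookup on the first token before the line reaches the executor (the alias table and function name here are illustrative, not planned API):

```python
def expand_aliases(line: str, aliases: dict) -> str:
    """Replace the first token of a command line if it matches an alias."""
    parts = line.split(maxsplit=1)
    if not parts:
        return line
    head = aliases.get(parts[0], parts[0])
    return head if len(parts) == 1 else f"{head} {parts[1]}"
```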
Contributions are welcome! To contribute:
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Submit a pull request with a detailed description of your changes.
This project is licensed under the MIT License. See the LICENSE file for more details.
For questions or suggestions, feel free to reach out:
- Author: Eddie Offermann
- Email: eddie@bigblueceiling.com
- GitHub: My GitHub Profile