# Action Model

Action Model is a FastAPI-based application that interacts with an LLM (GPT-4o-mini) to generate, validate, and execute Bash commands in a controlled environment. The system uses LangGraph for state management and Docker for sandboxed script execution.
## Features

- 🛠 FastAPI Backend: Provides an API for handling user queries.
- ⚡ Asynchronous Execution: Uses async/await for non-blocking operations.
- 🤖 LLM Integration: Interacts with GPT-4o-mini to generate and validate Bash commands.
- 🔍 Command Validation: Ensures that generated commands are correct before execution.
- 🖥 Bash Script Execution: Runs validated commands inside a controlled Docker environment.
- 📜 Session-Based Communication: Supports thread-based execution with a session ID.
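The generate → validate → execute flow implied by the features above can be sketched as follows. This is a simplified, hypothetical stand-in for the actual LangGraph pipeline in `model.py`; the whitelist and the `is_safe` helper are illustrative, not part of the codebase:

```python
# Minimal sketch of the validate-before-execute idea (illustrative only).
ALLOWED_COMMANDS = {"ls", "cat", "echo", "grep", "pwd"}

def is_safe(command: str) -> bool:
    # Accept only commands whose first token is on the whitelist.
    tokens = command.split()
    return bool(tokens) and tokens[0] in ALLOWED_COMMANDS

print(is_safe("ls -la"))    # True
print(is_safe("rm -rf /"))  # False
```

In the real system, a command that fails validation would be sent back to the LLM for correction rather than executed.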
## Project Structure

```
.
├── fast.py            # FastAPI application
├── main.py            # CLI-based interaction
├── model.py           # LangGraph state management & LLM interactions
├── script.py          # Bash script execution logic
├── templates/         # Prompt templates for LLM
├── requirements.txt   # Python dependencies
└── README.md          # Documentation
```
## Prerequisites

- Docker (for script execution)
- An OpenAI API key (required for LLM interactions)
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/noxs1d/action-model.git
   cd action-model
   ```

2. Set up environment variables:

   ```bash
   export OPENAI_API_KEY=your_api_key_here
   ```

3. Build the Docker image:

   ```bash
   docker build -t action-model .
   ```

4. Run the container:

   ```bash
   docker run -p 8000:8000 --env OPENAI_API_KEY=$OPENAI_API_KEY --name action-model action-model
   ```
The API will be available at http://localhost:8000, with interactive documentation served at http://localhost:8000/docs.
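Once the container is running, the API can be exercised with a short client script. The endpoint path (`/query`) and payload fields below are assumptions for illustration; the actual route and request schema are defined in `fast.py`:

```python
import json
import urllib.request

def build_query(base_url: str, query: str, session_id: str) -> urllib.request.Request:
    # Hypothetical endpoint and payload shape -- check fast.py for the
    # actual route and request schema.
    payload = json.dumps({"query": query, "session_id": session_id}).encode()
    return urllib.request.Request(
        f"{base_url}/query",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_query("http://localhost:8000", "list files in /tmp", "session-1")
# With the container running, send it with: urllib.request.urlopen(req)
```

The session ID lets the server route follow-up queries to the same LangGraph thread.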
## Future Improvements

- 🛡 Enhance Security: Implement command whitelisting to prevent dangerous operations.
- 🚀 Improve Async Handling: Replace blocking `subprocess.run()` calls with `asyncio.create_subprocess_exec()`.
- ✅ Add Unit Tests: Use `pytest` to ensure API stability.
- 📊 Implement Logging: Improve debugging with structured logs.
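The async-handling improvement can be sketched as below. `run_command` is a hypothetical replacement for a blocking `subprocess.run()` call in `script.py`, not existing project code:

```python
import asyncio

async def run_command(*cmd: str) -> str:
    # Unlike subprocess.run(), this does not block the event loop
    # while the child process executes.
    proc = await asyncio.create_subprocess_exec(
        *cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    stdout, stderr = await proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError(stderr.decode())
    return stdout.decode()

print(asyncio.run(run_command("echo", "hello")).strip())  # prints "hello"
```

Because the event loop stays free while the subprocess runs, the FastAPI app can keep serving other requests during long-running command executions.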