A powerful web application that helps you discover, manage, and execute Python programs with AI assistance. This tool simplifies the process of finding Python programs in a directory, understanding their parameters, and executing them with proper inputs.
- Project Overview
- Features
- Installation
- Usage Guide
- Code Structure
- Customization and Configuration
- Troubleshooting and FAQs
- Future Roadmap
- License
- Contact Information
The Python Program Execution Assistant is designed to help developers, data scientists, and even non-technical users run Python programs without needing to understand the underlying code. It uses AI-powered agents (via CrewAI) to:
- Discover Python programs in a specified directory
- Identify required parameters for execution
- Validate user inputs
- Execute programs and display results in a user-friendly format
This application is perfect for:
- Teams sharing Python utilities
- Data scientists running analysis scripts
- Educators demonstrating code execution
- Anyone who wants to run Python programs without writing code
- Intuitive Web Interface: Built with Streamlit for a clean, responsive user experience
- Automatic Program Discovery: Finds Python programs with `execute()` functions in any directory
- Parameter Detection: Automatically identifies required parameters for each program
- AI-Assisted Execution: Uses CrewAI to intelligently handle program execution
- Beautiful Results Display: Formats execution results in an easy-to-understand way
- Error Handling: Provides clear error messages and troubleshooting information
- Customizable: Configure the application to suit your needs
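Parameter detection along these lines can be done with Python's standard `inspect` module. The sketch below is an illustration only; the application's actual detection logic lives in `ProgramDiscoveryTools.py` and may differ:

```python
import inspect

def detect_parameters(func):
    """Return each parameter of a callable, mapped to its default (or None)."""
    params = {}
    for name, p in inspect.signature(func).parameters.items():
        params[name] = p.default if p.default is not inspect.Parameter.empty else None
    return params

# A toy program entry point to inspect:
def execute(name="World", times=1):
    return f"{name} x{times}"

print(detect_parameters(execute))  # {'name': 'World', 'times': 1}
```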
Before installing the Python Program Execution Assistant, ensure you have:
- Python 3.8 or higher installed
- Pip package manager
- Ollama (for local LLM execution) or access to OpenAI API (optional)
- Clone or download the repository:
```shell
git clone https://github.com/abanmitra/Python-Program-Execution-Assistant.git
cd Python-Program-Execution-Assistant
```
- Create and activate a virtual environment (recommended):
```shell
# On Windows
python -m venv .venv
.venv\Scripts\activate

# On macOS/Linux
python -m venv .venv
source .venv/bin/activate
```
- Install required dependencies:
```shell
pip install -r requirements.txt
```
- Set up environment variables:
Create a `.env` file in the root directory with the following configuration:
```shell
# Ollama Configuration (for local model)
OLLAMA_MODEL=ollama/deepseek-r1:14b
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_TEMPERATURE=0.8

# Application Configuration
DEFAULT_PROGRAMS_DIRECTORY=/path/to/your/programs
```
- Install and start Ollama (if using local models):
Follow the instructions on Ollama's official website to install Ollama and download the required model (e.g., `deepseek-r1:14b`).
From the project root, run the application with:
```shell
streamlit run src/app.py
```
This starts the web server and opens the application in your default browser. If it doesn't open automatically, navigate to `http://localhost:8501`.
- When the application starts, you'll see the directory selection section at the top.
- You can either:
- Enter the path manually in the text field
- Use the directory browser (select any file in your target directory)
- Click the "Confirm Directory" button to proceed.
- After confirming the directory, click the "Discover Available Programs" button.
- The application will scan the directory for Python files with `execute()` functions.
- Discovered programs will be displayed with their names, paths, and required parameters.
- Select a program from the dropdown menu in the "Program Execution" section.
- Fill in the required parameters for the selected program.
- Click the "Execute Program" button to run the program.
- The execution progress will be displayed in real-time in the right panel.
After execution:
- The application will display an "Execution Results" section at the bottom.
- For successful executions, you'll see:
- Program output formatted in a user-friendly way
- Key metrics (if available)
- Visualizations (for numerical data)
- For failed executions, you'll see detailed error information.
The application is organized into the following structure:
```
my_project/
│
├── .env                        # Environment variables
├── requirements.txt            # Dependencies
└── src/                        # Source code directory
    ├── app.py                  # Main application file
    ├── agents/                 # AI agents configuration
    │   └── ollama/
    │       └── ProgramExecutionAgents.py
    ├── tasks/                  # Task definitions
    │   └── ProgramExecutionTasks.py
    └── exec_tools/             # Execution tools
        ├── CustomTools.py
        ├── ProgramDiscoveryTools.py
        └── ProgramExecutionTools.py
```
Key components:
- app.py: The main Streamlit application that defines the user interface and workflow
- ProgramExecutionAgents.py: Defines the AI agents that discover and execute programs
- ProgramExecutionTasks.py: Defines tasks for program discovery, parameter validation, and execution
- CustomTools.py: Wrapper classes for program discovery and execution tools
- ProgramDiscoveryTools.py: Tools for finding and inspecting Python programs
- ProgramExecutionTools.py: Tools for dynamically loading and executing Python programs
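The dynamic loading done by `ProgramExecutionTools.py` can be approximated with the standard `importlib` machinery. This is a hedged sketch of the general technique, not the shipped implementation:

```python
import importlib.util
from pathlib import Path

def load_and_execute(path, **kwargs):
    """Load a Python file by path and call its execute() function."""
    path = Path(path)
    spec = importlib.util.spec_from_file_location(path.stem, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # run the file's top-level code
    if not hasattr(module, "execute"):
        raise AttributeError(f"{path.name} has no execute() function")
    return module.execute(**kwargs)
```

For example, `load_and_execute("scripts/report.py", year=2024)` would import `report.py` in isolation and invoke its `execute()` with the given keyword arguments.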
You can customize the application by modifying the following environment variables in the `.env` file:

| Variable | Description | Default |
|---|---|---|
| `OLLAMA_MODEL` | The AI model to use for program execution | `ollama/deepseek-r1:14b` |
| `OLLAMA_BASE_URL` | URL for the Ollama API | `http://localhost:11434` |
| `OLLAMA_TEMPERATURE` | Temperature setting for the AI model (higher = more creative) | `0.8` |
| `DEFAULT_PROGRAMS_DIRECTORY` | Default directory to search for Python programs | - |
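These variables can be read with stdlib fallbacks once the `.env` file has been loaded into the environment (e.g., by python-dotenv). A minimal sketch, with defaults mirroring the table above:

```python
import os

def load_config():
    """Read application settings from the environment, with defaults."""
    return {
        "model": os.environ.get("OLLAMA_MODEL", "ollama/deepseek-r1:14b"),
        "base_url": os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434"),
        "temperature": float(os.environ.get("OLLAMA_TEMPERATURE", "0.8")),
        # No sensible default exists for the programs directory:
        "programs_dir": os.environ.get("DEFAULT_PROGRAMS_DIRECTORY"),
    }
```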
Reference: Custom Ollama Model Creation
To extend the application with custom tools:
- Create a new Python file in the `src/exec_tools` directory.
- Implement your tool class following the pattern in `CustomTools.py`.
- Register your tool with the agent in `ProgramExecutionAgents.py`.
Example for a custom tool:
```python
# In a new file, e.g., MyCustomTool.py
from langchain_core.tools import Tool

class MyCustomTool:
    """A custom tool for specific functionality"""

    def __call__(self, *args, **kwargs):
        # Implement your functionality here
        result = None  # replace with your tool's output
        return result

    def get_tool(self) -> Tool:
        """Create a LangChain Tool instance"""
        return Tool(
            name="my_custom_tool",
            func=self.__call__,
            description="Description of what your tool does"
        )

# Then in ProgramExecutionAgents.py, add:
from src.exec_tools.MyCustomTool import MyCustomTool

# And in the program_execution_agent method:
my_tool = MyCustomTool()
my_langchain_tool = my_tool.get_tool()

# Add to the tools list:
tools=[
    discovery_langchain_tool,
    execution_langchain_tool,
    my_langchain_tool
]
```
- Make sure the directory path is correct and accessible.
- Use absolute paths (like `C:/Users/username/projects`) instead of relative paths.
- Ensure your Python files have an `execute()` function.
- Check that the files don't have syntax errors.
- Make sure the directory doesn't contain only Python packages or modules.
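You can sanity-check a directory yourself with a small `ast`-based scan, similar in spirit to what `ProgramDiscoveryTools.py` does (this sketch is an approximation, not the shipped code):

```python
import ast
from pathlib import Path

def find_executable_programs(directory):
    """Return .py files in `directory` that define a top-level execute()."""
    found = []
    for path in Path(directory).glob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue  # skip files with syntax errors
        if any(isinstance(node, ast.FunctionDef) and node.name == "execute"
               for node in tree.body):
            found.append(path.name)
    return sorted(found)
```

If a file you expect to see is missing from the result, it either fails to parse or lacks a top-level `execute()`.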
- Check that all required parameters are provided correctly.
- Look at the error details for specific error messages from your program.
- Make sure your `execute()` function handles exceptions properly.
- Ensure Ollama is installed and running (`ollama run deepseek-r1:14b`).
- Check that the `OLLAMA_BASE_URL` in your `.env` file matches your Ollama installation.
- Verify that the model specified in `OLLAMA_MODEL` is downloaded in Ollama.
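If you suspect a connection problem, a quick reachability probe against the Ollama API can help (its `GET /api/tags` endpoint lists installed models). A minimal sketch using only the standard library:

```python
import urllib.request
import urllib.error

def ollama_reachable(base_url="http://localhost:11434", timeout=2):
    """Return True if an Ollama server answers on /api/tags."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False  # server down, wrong URL, or network error
```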
The application looks for Python files that contain an `execute()` function. This function should:
- Accept parameters that users will provide through the UI
- Return results that can be displayed in the UI
- Handle exceptions gracefully
Here's a simple example:
```python
# example_program.py
def execute(name="World", times=1):
    """
    A simple greeting program.

    Parameters:
    - name (str): Name to greet
    - times (int): Number of times to repeat the greeting

    Returns:
    - dict: Greeting results
    """
    try:
        # Convert times to int if it's a string
        times = int(times) if isinstance(times, str) else times

        # Create the greeting
        greeting = f"Hello, {name}!"
        repeated = [greeting] * times

        # Return results
        return {
            "greeting": greeting,
            "repeated": repeated,
            "count": times
        }
    except Exception as e:
        return {"error": str(e)}
```
Yes, you can modify the `.env` file to use OpenAI or other API-based models instead of Ollama. You'll need to update the configuration and potentially modify the agent setup in `ProgramExecutionAgents.py`.
Future enhancements planned for the Python Program Execution Assistant:
- Program Editing: Add functionality to edit discovered programs directly from the UI
- Batch Execution: Enable running multiple programs in sequence
- Scheduled Execution: Add the ability to schedule program execution
- Result Export: Add options to export execution results in various formats
- User Authentication: Add user authentication for secure access
- Program Templates: Provide templates for creating new compatible programs
- Custom UI Themes: Allow users to customize the appearance of the application
This project is licensed under the MIT License - see the LICENSE file for details.
For questions, support, or feedback, please contact:
- Email: difworksaban@gmail.com
- GitHub: Python Program Execution Assistant
Thank you for using the Python Program Execution Assistant! We hope it makes your Python program management easier and more efficient.