AI-powered code generation assistant for the AVEVA PI System, built with Google Gemini 2.0 Flash API.
This project implements a five-stage sequential pipeline for automatically generating production-ready code to interact with PI System components. The pipeline follows the specifications outlined in the User Requirements Document (URD).
User Request → API Selection → Logic Creation → Code Creation → Test Run → File Output
Stage 1: API Selection

- Automatically identifies the most appropriate PI System API
- Available APIs: PI SDK, PI AF SDK, PI Web API, PI SQL Client
- Uses Gemini 2.0 Flash for intelligent selection
Stage 2: Logic Creation

- Converts the user request into explicit pseudo-code
- Defines data structures and error-handling strategies
- Creates a step-by-step logical flow
Stage 3: Code Creation

- Generates implementation code in the target language
- Supports: Python, C#, JavaScript, TypeScript, Java, PowerShell, C++
- Follows PI API best practices
Stage 4: Test Run

- Performs static analysis and quality checks
- Validates syntax, logic, best practices, error handling, and security
- Provides recommendations for improvement
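For Python output, the syntax-validation part of this stage could be approximated with the standard library's `ast` module. This is an illustrative sketch only, not the project's actual checker; the real Test Run stage also covers logic, best practices, error handling, and security:

```python
import ast


def check_syntax(code: str) -> dict:
    """Validate Python syntax, returning a result dict in the spirit of
    this stage's report (sketch; the project's real report has more fields)."""
    try:
        ast.parse(code)
        return {"status": "success", "issues": []}
    except SyntaxError as exc:
        return {"status": "error", "issues": [f"line {exc.lineno}: {exc.msg}"]}
```

Here `check_syntax("x = 1")` yields a success result, while `check_syntax("def broken(:")` reports the offending line.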
Stage 5: File Output

- Packages code with documentation and metadata
- Generates README, manifest, and code files
- Creates file integrity hashes
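A file integrity hash can be computed with the standard library, for example via SHA-256. Note the algorithm choice is an assumption here; this README does not name the hash the manifest actually uses:

```python
import hashlib


def file_sha256(path: str) -> str:
    """Hash a generated file in 8 KiB chunks.

    SHA-256 is an assumed choice for illustration; the project's
    manifest may use a different algorithm.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()
```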
- Python 3.7 or higher
- Google Gemini API key (set as environment variable)
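A quick preflight check for these two prerequisites might look like the following sketch; `check_prerequisites` is a hypothetical helper for illustration, not part of the project:

```python
import os
import sys


def check_prerequisites(env=None):
    """Return a list of problems with the prerequisites above (empty if OK).

    Hypothetical helper: checks the Python version and the GEMINI_API_KEY
    environment variable named in this README.
    """
    env = os.environ if env is None else env
    problems = []
    if sys.version_info < (3, 7):
        problems.append("Python 3.7 or higher is required")
    if not env.get("GEMINI_API_KEY"):
        problems.append("GEMINI_API_KEY environment variable is not set")
    return problems
```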
- Clone the repository:

```bash
git clone <repository-url>
cd pi-system-coder
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Configure environment variables:
Option A: Using a `.env` file (recommended)

```bash
# Copy the example file
cp env.example .env

# Edit .env with your API key
# GEMINI_API_KEY=your-api-key-here
# GEMINI_MODEL=gemini-2.0-flash-exp
# LOG_LEVEL=INFO
```

Option B: Setting environment variables directly

```bash
# Linux/Mac
export GEMINI_API_KEY="your-api-key-here"

# Windows Command Prompt
set GEMINI_API_KEY=your-api-key-here

# Windows PowerShell
$env:GEMINI_API_KEY="your-api-key-here"
```

Each tool can be run independently for testing:
```bash
# API Selection
python backend/src/tools/api_selection.py

# Logic Creation
python backend/src/tools/logic_creation.py

# Code Creation
python backend/src/tools/code_creation.py

# Test Run
python backend/src/tools/test_run.py

# File Output
python backend/src/tools/file_output.py
```

Run all tests:
```bash
python -m unittest discover -s backend/tests -p "test_*.py" -v
```

Run a specific test file:
```bash
python backend/tests/test_api_selection.py
python backend/tests/test_logic_creation.py
python backend/tests/test_code_creation.py
python backend/tests/test_test_run.py
python backend/tests/test_file_output.py
```

Run the MCP server to expose the tools via the Model Context Protocol:
```bash
python -m backend.mcp.server
```

Complete pipeline workflow example:
```python
from backend.src.tools.api_selection import api_selection
from backend.src.tools.logic_creation import logic_creation
from backend.src.tools.code_creation import code_creation
from backend.src.tools.test_run import test_run
from backend.src.tools.file_output import file_output, write_files_to_disk

# Stage 1: API Selection
user_request = "Read PI tag values for the last 24 hours"
api_result = api_selection(user_request)

# Stage 2: Logic Creation
if api_result["status"] == "success":
    logic_result = logic_creation(
        user_request=user_request,
        selected_api=api_result["selected_api"],
    )

# Stage 3: Code Creation
if logic_result["status"] == "success":
    code_result = code_creation(
        pseudo_code=logic_result["pseudo_code"],
        data_structures=logic_result["data_structures"],
        error_handling_strategy=logic_result["error_handling_strategy"],
        selected_api=api_result["selected_api"],
        target_language="Python",
    )

# Stage 4: Test Run
if code_result["status"] == "success":
    test_result = test_run(
        code=code_result["code"],
        target_language="Python",
        selected_api=api_result["selected_api"],
    )

# Stage 5: File Output
if test_result["status"] == "success":
    output_result = file_output(
        code=code_result["code"],
        target_language="Python",
        selected_api=api_result["selected_api"],
        dependencies=code_result["dependencies"],
        test_results=test_result,
    )

# Write files to disk
if output_result["status"] == "success":
    files = write_files_to_disk(output_result, "output")
    print(f"Generated files: {files}")
```

```
pi-system-coder/
├── backend/                            # Backend source code
│   ├── mcp/                            # MCP server
│   │   ├── server.py                   # MCP server implementation
│   │   └── README.md                   # MCP server documentation
│   ├── src/
│   │   ├── tools/                      # Five-stage pipeline tools
│   │   │   ├── api_selection.py        # Stage 1: API selection
│   │   │   ├── logic_creation.py       # Stage 2: Logic creation
│   │   │   ├── code_creation.py        # Stage 3: Code creation
│   │   │   ├── test_run.py             # Stage 4: Test run
│   │   │   └── file_output.py          # Stage 5: File output
│   │   └── config/                     # Configuration management
│   └── tests/                          # Backend unit tests
│       ├── test_api_selection.py
│       ├── test_logic_creation.py
│       ├── test_code_creation.py
│       ├── test_test_run.py
│       └── test_file_output.py
├── frontend/                           # Frontend application (future)
├── config/                             # Shared configuration files
├── docs/                               # Additional documentation
├── scripts/                            # Utility scripts
├── requirements.txt                    # Python dependencies
├── pyproject.toml                      # Python project configuration
├── README.md                           # This file
├── USER_REQUIREMENTS_DOCUMENT.md       # Full requirements specification
└── system_prompt.md                    # System prompt for the AI assistant
```
- PI SDK: High-performance server-side data access
- PI AF SDK: Asset Framework operations
- PI Web API: RESTful cross-platform access
- PI SQL Client: Direct database queries
- Primary: Python, C#, VB.NET, JavaScript/TypeScript
- Secondary: Java, PowerShell, C++
- Syntax validation
- Logic consistency checks
- PI API best practices enforcement
- Security scanning (hardcoded credentials, SQL injection, etc.)
- Error handling verification
- Comprehensive documentation generation
- No Hardcoded Credentials: Generated code uses environment variables or configuration files
- Security Scanning: Automatic detection of security vulnerabilities
- Safe Patterns: Follows PI System security best practices
- Secure API Usage: Uses only public, documented SDK methods
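The credential-detection idea can be sketched with a few regular expressions. The patterns below are illustrative only; this README does not show the project's actual scanner rules:

```python
import re

# Illustrative patterns for hardcoded secrets; a real scanner uses many more.
CREDENTIAL_PATTERN = re.compile(
    r"""(?:password|passwd|api[_-]?key|secret|token)\s*=\s*['"][^'"]+['"]""",
    re.IGNORECASE,
)


def scan_for_credentials(code: str):
    """Return (line number, line) pairs that appear to hardcode a credential."""
    return [
        (lineno, line.strip())
        for lineno, line in enumerate(code.splitlines(), 1)
        if CREDENTIAL_PATTERN.search(line)
    ]
```

Code that reads the key from `os.environ` passes this check, while a literal like `password = "hunter2"` is flagged.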
- Fork the repository
- Create a feature branch
- Make your changes
- Add/update tests
- Ensure all tests pass
- Submit a pull request
The project includes comprehensive unit tests with mocking for Gemini API calls:
- Unit Tests: Each tool has dedicated test file
- Coverage: Tests cover success paths, error handling, and edge cases
- Integration Tests: Can be added for full pipeline testing (requires API key)
- Mocking: Uses `unittest.mock` to avoid API costs during development
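The mocking style can be sketched as follows. This is hypothetical: `select_api` stands in for a pipeline tool, with the Gemini call injected so a `Mock` can replace it; the project's real tests patch the call inside each tool module instead:

```python
import unittest
from unittest.mock import Mock


def select_api(request, gemini_call):
    """Hypothetical stand-in for a pipeline tool; the Gemini call is injected."""
    return {"status": "success", "selected_api": gemini_call(request)}


class TestSelectApi(unittest.TestCase):
    def test_mocked_model_avoids_real_api_calls(self):
        fake = Mock(return_value="PI Web API")
        result = select_api("Read PI tag values for the last 24 hours", fake)
        self.assertEqual(result["status"], "success")
        self.assertEqual(result["selected_api"], "PI Web API")
        fake.assert_called_once()


if __name__ == "__main__":
    unittest.main()
```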
Run tests:

```bash
python -m unittest discover -v
```

- User Requirements Document: See `USER_REQUIREMENTS_DOCUMENT.md` for the complete specifications
- System Prompt: See `system_prompt.md` for the AI assistant's behavior
- Code Comments: Each module includes detailed docstrings
- Examples: The `__main__` block in each tool provides usage examples
This project is licensed under the MIT License - see LICENSE file for details.
- AVEVA for PI System APIs
- Google for Gemini AI capabilities
- Built following the User Requirements Document specifications
For issues, questions, or contributions, please open an issue on the repository.
- v1.0.0 - Initial release with the five-stage pipeline
  - API selection tool
  - Logic creation tool
  - Code creation tool
  - Test run tool
  - File output tool
  - Comprehensive unit tests

Planned:

- Web-based UI for pipeline execution
- CLI for command-line usage
- Additional language support
- More PI System API integrations
- Batch processing capabilities
- Code templates and snippets library