🤖 LLM-Powered CI/CD Intelligence Suite

A containerized, AI-powered CI/CD pipeline service that brings intelligent automation to your GitHub Actions workflows. It provides automated code analysis, test generation, smart deployment strategies, and intelligent reporting, all powered by Large Language Models.

✨ Features

  • πŸ” Intelligent Code Analysis: AI-powered security vulnerability detection and code quality assessment
  • πŸ§ͺ Automated Test Generation: Generate comprehensive unit tests for files with low coverage
  • πŸš€ Smart Deployment Strategies: AI-recommended deployment strategies based on risk assessment
  • πŸ“Š Intelligent Reporting: Generate actionable insights and summaries of CI/CD pipeline results
  • πŸ“ˆ Comprehensive Analytics: Track security posture, quality metrics, deployment patterns, and MTTR
  • 🧠 AI-Powered Log Analysis: Analyze GitHub Actions, application, and deployment logs for insights
  • πŸ“Š Real-Time Dashboards: Monitor KPIs and trends with interactive dashboards
  • πŸ“‹ Advanced Reporting: Generate detailed reports on security, quality, deployment frequency, and more
  • 🐳 Containerized Service: Easy-to-use Docker container that integrates with any GitHub Actions workflow
  • πŸ”„ Batch Processing: Execute multiple operations in a single API call
  • πŸ›‘οΈ Security-First: Built with security best practices and non-root container execution

🚀 Quick Start

Using the GitHub Action

Add this to your .github/workflows/ci.yml:

name: LLM-Powered CI/CD Pipeline

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  llm-analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      
      - name: LLM Code Analysis
        id: llm-analysis  # id is required so the later step can read this step's outputs
        uses: your-org/llm-cicd-action@v1
        with:
          openai-api-key: ${{ secrets.OPENAI_API_KEY }}
          analysis-type: 'analyze'
          security-scan-results: '[]'
          quality-metrics: '[]'
      
      - name: Generate Tests (if low coverage)
        if: steps.llm-analysis.outputs.deployment-approved == 'true'
        uses: your-org/llm-cicd-action@v1
        with:
          openai-api-key: ${{ secrets.OPENAI_API_KEY }}
          analysis-type: 'generate-tests'
          source-files: '[{"path": "src/components/Button.tsx", "content": "..."}]'

Using the Docker Container Directly

# Pull and run the container
docker run -d \
  -p 3000:3000 \
  -e OPENAI_API_KEY="your-openai-api-key" \
  -e GITHUB_TOKEN="your-github-token" \
  ghcr.io/your-org/llm-cicd-service:latest

# Test the service
curl http://localhost:3000/health

📋 API Endpoints

Health Check

GET /health
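
A healthy service returns HTTP 200. The response body isn't specified in this README; a typical shape for this kind of health probe (illustrative only, fields assumed) is:

{
  "status": "ok",
  "uptime": 1234.5
}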

Service Status

GET /api/status

Code Analysis

POST /api/analyze
Content-Type: application/json
Authorization: Bearer YOUR_OPENAI_API_KEY

{
  "diff": "git diff content...",
  "githubContext": {
    "ref": "refs/heads/main",
    "event_name": "push",
    "actor": "username"
  },
  "securityFindings": [],
  "codeQualityIssues": []
}
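
For example, to call the endpoint from a shell against a locally running container (values illustrative):

curl -X POST http://localhost:3000/api/analyze \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "diff": "diff --git a/src/app.ts b/src/app.ts ...",
    "githubContext": {
      "ref": "refs/heads/main",
      "event_name": "push",
      "actor": "octocat"
    },
    "securityFindings": [],
    "codeQualityIssues": []
  }'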

Generate Report

POST /api/generate-report
Content-Type: application/json
Authorization: Bearer YOUR_OPENAI_API_KEY
X-GitHub-Token: YOUR_GITHUB_TOKEN

{
  "workflowResults": {},
  "securityData": {},
  "qualityData": {},
  "testingData": {},
  "deploymentData": {},
  "performanceData": {}
}

Smart Deployment

POST /api/deploy
Content-Type: application/json
Authorization: Bearer YOUR_OPENAI_API_KEY

{
  "environment": "production",
  "analysisResults": {
    "security_risk": "low",
    "code_quality_score": 8,
    "deployment_risk": "medium"
  },
  "deploymentConfig": {}
}
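
The response carries the deployment decision. Its exact schema isn't documented in this README, but based on the action outputs listed below, expect fields along these lines (illustrative, not authoritative):

{
  "deployment_approved": true,
  "deployment_strategy": "canary",
  "reasoning": "Low security risk and medium deployment risk; a canary rollout limits blast radius."
}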

Generate Tests

POST /api/generate-tests
Content-Type: application/json
Authorization: Bearer YOUR_OPENAI_API_KEY

{
  "sourceFiles": [
    {
      "path": "src/components/Button.tsx",
      "content": "export const Button = () => { ... }"
    }
  ],
  "coverageThreshold": 70,
  "testFramework": "jest"
}
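
The service responds with generated test files that can be written back into the repository. An illustrative (assumed) response shape:

{
  "generatedTests": [
    {
      "path": "src/components/Button.test.tsx",
      "content": "import { render } from '@testing-library/react'; ..."
    }
  ]
}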

Batch Operations

POST /api/batch
Content-Type: application/json
Authorization: Bearer YOUR_OPENAI_API_KEY

{
  "operations": [
    {
      "type": "analyze",
      "data": { "diff": "...", "githubContext": {...} }
    },
    {
      "type": "generate-tests",
      "data": { "sourceFiles": [...] }
    }
  ]
}
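
Each operation mirrors the corresponding standalone endpoint, and results should come back as a single collection, presumably in submission order. An illustrative (assumed) response shape:

{
  "results": [
    { "type": "analyze", "result": { "security_risk": "low", "code_quality_score": 8 } },
    { "type": "generate-tests", "result": { "generatedTests": [] } }
  ]
}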

📊 Analytics & Metrics API

Security Metrics

GET /api/analytics/metrics/security?days=30

Quality Metrics

GET /api/analytics/metrics/quality?days=30

Deployment Metrics

GET /api/analytics/metrics/deployment?days=30

MTTR Metrics

GET /api/analytics/metrics/mttr?days=30

Dashboard Overview

GET /api/dashboard/overview?days=30

Comprehensive Report

GET /api/reports/comprehensive?days=30
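
Each of these endpoints takes an optional days query parameter that sets the reporting window, e.g.:

curl "http://localhost:3000/api/analytics/metrics/security?days=7"
curl "http://localhost:3000/api/dashboard/overview?days=90"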

Log Analysis

POST /api/analytics/logs/analyze/github-actions
Content-Type: application/json
Authorization: Bearer YOUR_OPENAI_API_KEY

{
  "logData": {
    "repository": "my-org/my-repo",
    "workflow": "CI/CD Pipeline",
    "logs": [
      {
        "timestamp": "2024-01-15T10:30:00Z",
        "level": "error",
        "message": "Build failed: timeout"
      }
    ]
  }
}

πŸ› οΈ Configuration

Environment Variables

| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| OPENAI_API_KEY | OpenAI API key for LLM functionality | Yes | - |
| GITHUB_TOKEN | GitHub token for repository access | No | - |
| PORT | Service port | No | 3000 |
| NODE_ENV | Node environment | No | production |
| ALLOWED_ORIGINS | Comma-separated list of allowed CORS origins | No | * |
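
For self-hosted deployments these variables map naturally onto a compose file. A minimal docker-compose.yml sketch (service and image names follow the Quick Start placeholders):

services:
  llm-cicd:
    image: ghcr.io/your-org/llm-cicd-service:latest
    ports:
      - "3000:3000"
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      GITHUB_TOKEN: ${GITHUB_TOKEN}
      NODE_ENV: production
      ALLOWED_ORIGINS: "https://ci.example.com"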

GitHub Action Inputs

| Input | Description | Required | Default |
|-------|-------------|----------|---------|
| openai-api-key | OpenAI API key | Yes | - |
| github-token | GitHub token | No | ${{ github.token }} |
| analysis-type | Type of analysis to perform | No | analyze |
| environment | Target environment | No | staging |
| coverage-threshold | Test coverage threshold | No | 70 |
| container-image | Docker image to use | No | latest |
| service-port | Service port | No | 3000 |
| timeout | Request timeout in seconds | No | 300 |
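
Several inputs combined in one step (the action reference is the same placeholder used in the Quick Start):

- name: Generate tests with a stricter threshold
  uses: your-org/llm-cicd-action@v1
  with:
    openai-api-key: ${{ secrets.OPENAI_API_KEY }}
    analysis-type: 'generate-tests'
    coverage-threshold: 80
    timeout: 600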

📊 Outputs

The GitHub Action provides the following outputs:

  • analysis-result: Complete analysis results
  • deployment-decision: Deployment decision and strategy
  • generated-report: Generated intelligent report
  • generated-tests: Generated test files
  • security-risk: Security risk level (low/medium/high/critical)
  • code-quality-score: Code quality score (1-10)
  • deployment-approved: Whether deployment is approved
  • deployment-strategy: Recommended deployment strategy
  • metrics: Collected analytics metrics (security, quality, deployment)
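
Because these are ordinary step outputs, downstream steps can branch on them. For example, to fail the job on a critical security risk (the llm-analysis step id matches the Quick Start example):

- name: Enforce security gate
  if: steps.llm-analysis.outputs.security-risk == 'critical'
  run: |
    echo "Critical security risk detected - blocking the pipeline"
    exit 1

- name: Print quality score
  run: echo "Code quality score ${{ steps.llm-analysis.outputs.code-quality-score }}"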

📈 Analytics & Metrics

Key Metrics Tracked

  • Security Posture: Vulnerability counts, risk levels, security gate triggers
  • Quality Metrics: Code quality scores, test coverage, improvement trends
  • Deployment Frequency: Success rates, strategy distribution, timing patterns
  • MTTR: Mean time to recovery, incident response times
  • Rollback Metrics: Rollback frequency, reasons, prevention strategies
  • Artifact Patterns: Build sizes, types, optimization opportunities

Real-Time Dashboards

  • Overview Dashboard: High-level KPIs and trends
  • Security Dashboard: Security posture and vulnerability analysis
  • Quality Dashboard: Code quality metrics and improvement areas
  • Deployment Dashboard: Deployment patterns and success rates
  • Performance Dashboard: System performance and resource usage

AI-Powered Log Analysis

  • GitHub Actions Logs: Workflow performance, errors, bottlenecks
  • Application Logs: Application health, performance, user impact
  • Deployment Logs: Deployment success, failures, rollback patterns
  • Intelligent Insights: AI-generated recommendations and trend analysis

🔧 Development

Local Development

# Clone the repository
git clone <repository-url>
cd action-intellicode-suite

# Install dependencies
npm install

# Start development server
npm run start:dev

# Run tests
npm test

# Run linting
npm run lint

Building the Container

# Build the Docker image
docker build -t llm-cicd-service .

# Run locally
docker run -p 3000:3000 \
  -e OPENAI_API_KEY="your-key" \
  llm-cicd-service

Publishing

# Build and push to GitHub Container Registry
docker build -t ghcr.io/your-org/llm-cicd-service:latest .
docker push ghcr.io/your-org/llm-cicd-service:latest
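
Pushing to GitHub Container Registry requires authenticating first, for example with a personal access token that has the write:packages scope:

echo "$GITHUB_TOKEN" | docker login ghcr.io -u your-username --password-stdin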

πŸ›‘οΈ Security

  • Non-root execution: Container runs as non-root user
  • Security headers: Helmet.js for security headers
  • Input validation: All inputs are validated and sanitized
  • Rate limiting: Built-in protection against abuse
  • Secrets management: Secure handling of API keys and tokens

📈 Performance

  • Multi-stage Docker build: Optimized container size
  • Health checks: Built-in health monitoring
  • Graceful shutdown: Proper signal handling
  • Request timeouts: Configurable timeout limits
  • Resource limits: Memory and CPU constraints

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🆘 Support

  • Documentation: Check this README and inline code comments
  • Issues: Report bugs and request features via GitHub Issues
  • Discussions: Join community discussions in GitHub Discussions

🔄 Version History

  • v1.0.0: Initial release with core LLM functionality
  • v1.1.0: Added batch processing and improved error handling
  • v1.2.0: Enhanced security features and performance optimizations

Built for running GitHub Actions with super intelligence.
