🔬 AI-Powered Test Automation Framework


A comprehensive Selenium Java test automation framework with AI-powered failure analysis using local embeddings - no data leaves your machine!

Author: Pramod Dutta
Website: The Testing Academy


✨ Features

Test Automation Framework (Java)

  • ✅ Page Object Model (POM) - Clean separation of test logic and page elements
  • ✅ Selenium 4.27 - Latest Selenium with built-in driver management (Selenium Manager)
  • ✅ TestNG - Powerful test framework with parallel execution support
  • ✅ Allure Reporting - Beautiful test reports with screenshots on failure
  • ✅ Automatic Screenshots - Captures screenshots, page source, and URL on test failure
  • ✅ Structured Logging - JSONL format logs for AI analysis
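
For illustration, a single event in the JSONL log might look like this (a hypothetical example; the exact field names are defined by TestAnalyticsLogger):

```json
{"event": "test_finished", "test_name": "testSuccessfulLogin", "class_name": "LoginTest", "status": "FAILED", "duration_ms": 3120, "error_message": "Element not found: login button", "timestamp": "2024-01-15T10:30:00Z"}
```

One JSON object per line keeps the log appendable and streamable, which is what makes it convenient input for the AI pipeline.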

Cloud Testing Platforms

  • ☁️ BrowserStack Integration - Run tests on 3000+ real browsers and devices
  • ☁️ LambdaTest Integration - Scalable cross-browser testing in the cloud
  • 🖥️ Local Execution - Run tests locally with Chrome, Firefox, or Edge
  • 🔄 Easy Switching - Switch between local/cloud with a single parameter

CI/CD Integration

  • 🔧 Jenkins Pipeline - Ready-to-use Jenkinsfile with all stages
  • 📦 Parameterized Builds - Configure browser, environment, and test suite
  • 📊 Allure Reports in Jenkins - Integrated reporting with trend analysis
  • 🤖 AI Report Generation - Automatic AI analysis report in CI pipeline

AI Analysis Pipeline (Python)

  • 🧠 Local AI Embeddings - Sentence-Transformers running 100% locally
  • 🔍 Semantic Similarity Search - Find similar failures by meaning, not just keywords
  • 📊 Interactive Dashboard - Streamlit-based UI for visualizing test results
  • 🔒 Privacy First - No data leaves your machine, no external API calls
  • 📈 Analytics - Track failure patterns, test duration, and trends


πŸ— Architecture

┌─────────────────────────────────────────────────────────────────────┐
│                    TEST AUTOMATION FRAMEWORK                        │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐           │
│  │  Test Cases  │───▶│ Page Objects │───▶│   Selenium   │           │
│  │  (TestNG)    │    │    (POM)     │    │  WebDriver   │           │
│  └──────────────┘    └──────────────┘    └──────────────┘           │
│         │                                                           │
│         ▼                                                           │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐           │
│  │   Allure     │    │    JSONL     │    │ Screenshots  │           │
│  │   Reports    │    │    Logs      │    │  on Failure  │           │
│  └──────────────┘    └──────────────┘    └──────────────┘           │
│                             │                                       │
└─────────────────────────────┼───────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────────┐
│                      AI ANALYSIS PIPELINE                           │
├─────────────────────────────────────────────────────────────────────┤
│                                                                     │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐           │
│  │  Log Parser  │───▶│  Embeddings  │───▶│    Qdrant    │           │
│  │  (Python)    │    │  (Local AI)  │    │ Vector Store │           │
│  └──────────────┘    └──────────────┘    └──────────────┘           │
│                                                 │                   │
│                                                 ▼                   │
│                      ┌──────────────────────────────────┐           │
│                      │      Streamlit Dashboard         │           │
│                      │  - Metrics & Charts              │           │
│                      │  - Failure Analysis              │           │
│                      │  - Similarity Search             │           │
│                      └──────────────────────────────────┘           │
│                                                                     │
└─────────────────────────────────────────────────────────────────────┘

📦 Prerequisites

Requirement      Version   Purpose
Java             JDK 17+   Running Selenium tests
Maven            3.8+      Build and dependency management
Python           3.9+      AI analysis pipeline
Chrome/Firefox   Latest    Browser for testing

🚀 Quick Start

# 1. Clone the repository
git clone https://github.com/PramodDutta/AIATBSeleniumJavaFramework.git
cd AIATBSeleniumJavaFramework

# 2. Install Java (macOS)
brew install openjdk@17

# 3. Run tests
./mvnw clean test

# 4. View Allure report
./mvnw allure:serve

# 5. Setup AI Analysis
cd ai-analysis
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# 6. Launch Dashboard
streamlit run dashboard.py

πŸ“ Project Structure

AIATBSeleniumJavaFramework/
├── 📄 pom.xml                      # Maven configuration
├── 📄 testng.xml                   # TestNG suite configuration
├── 📄 mvnw                         # Maven wrapper
│
├── 📂 src/
│   ├── 📂 main/java/com/automation/
│   │   ├── 📂 pages/               # Page Object classes
│   │   │   ├── BasePage.java
│   │   │   ├── LoginPage.java
│   │   │   └── DashboardPage.java
│   │   ├── 📂 listeners/           # TestNG listeners
│   │   │   └── AllureScreenshotListener.java
│   │   └── 📂 analytics/           # Test analytics
│   │       └── TestAnalyticsLogger.java
│   │
│   └── 📂 test/java/com/automation/
│       ├── 📂 base/                # Base test class
│       │   └── BaseTest.java
│       └── 📂 tests/               # Test classes
│           └── LoginTest.java
│
├── 📂 ai-analysis/                 # AI Analysis Pipeline
│   ├── 📄 requirements.txt         # Python dependencies
│   ├── 📄 dashboard.py             # Streamlit dashboard
│   ├── 📄 main.py                  # CLI entry point
│   ├── 📂 config/
│   │   └── settings.py             # Configuration
│   └── 📂 src/
│       ├── log_parser.py           # JSONL log parser
│       ├── embedding_service.py    # Local AI embeddings
│       ├── vector_store.py         # Qdrant vector database
│       ├── analysis_service.py     # Analysis orchestration
│       └── api.py                  # FastAPI REST API
│
└── 📂 target/                      # Build output
    ├── 📂 allure-results/          # Allure report data
    └── 📂 analytics-logs/          # JSONL test logs
        └── test-events.jsonl

πŸ“ Step-by-Step Setup

Step 1: Install Java JDK 17+

macOS (Homebrew):

brew install openjdk@17
echo 'export PATH="/opt/homebrew/opt/openjdk@17/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc
java -version

Ubuntu/Debian:

sudo apt update
sudo apt install openjdk-17-jdk
java -version

Windows:

  1. Download from Adoptium
  2. Run installer
  3. Add to PATH

Step 2: Clone and Build Project

# Clone repository
git clone https://github.com/yourusername/AIATBSeleniumJavaFramework.git
cd AIATBSeleniumJavaFramework

# Make Maven wrapper executable
chmod +x mvnw

# Build project (downloads dependencies)
./mvnw clean compile

Step 3: Run Tests

# Run all tests locally
./mvnw clean test

# Run specific test class
./mvnw test -Dtest=LoginTest

# Run with specific browser
./mvnw test -Dbrowser=firefox

# Run in headless mode (for CI)
./mvnw test -Dheadless=true

Step 3.1: Run on BrowserStack

# Set credentials (or use .env file)
export BROWSERSTACK_USERNAME=your_username
export BROWSERSTACK_ACCESS_KEY=your_access_key

# Run tests on BrowserStack
./mvnw test -Dexecution.env=browserstack -Dbrowser=chrome

Step 3.2: Run on LambdaTest

# Set credentials (or use .env file)
export LAMBDATEST_USERNAME=your_username
export LAMBDATEST_ACCESS_KEY=your_access_key

# Run tests on LambdaTest
./mvnw test -Dexecution.env=lambdatest -Dbrowser=chrome

Step 4: View Allure Report

# Generate and open Allure report
./mvnw allure:serve

This opens a beautiful HTML report showing:

  • ✅ Test results with pass/fail status
  • 📸 Screenshots on failure
  • 📄 Page source on failure
  • 🔗 URLs where tests failed

Step 5: Setup AI Analysis Pipeline

# Navigate to AI analysis directory
cd ai-analysis

# Create Python virtual environment
python3 -m venv venv

# Activate virtual environment
# macOS/Linux:
source venv/bin/activate
# Windows:
.\venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# First run downloads the AI model (~90MB, one-time)

Step 6: Launch Dashboard

# Make sure you're in ai-analysis directory with venv activated
cd ai-analysis
source venv/bin/activate

# Launch Streamlit dashboard
streamlit run dashboard.py --server.port 8501

Open http://localhost:8501 in your browser.

Step 7: Use the REST API (Optional)

# Start API server
python main.py serve

# Index test logs
curl -X POST http://localhost:8000/api/v1/index

# Get summary
curl http://localhost:8000/api/v1/summary

# Find similar failures
curl -X POST http://localhost:8000/api/v1/similar \
  -H "Content-Type: application/json" \
  -d '{"error_message": "Element not found"}'

🧪 Running Tests

Basic Commands

# Run all tests
./mvnw clean test

# Run with Allure report generation
./mvnw clean test allure:serve

# Run specific test class
./mvnw test -Dtest=LoginTest

# Run specific test method
./mvnw test -Dtest=LoginTest#testSuccessfulLogin

# Run tests in parallel (configured in testng.xml)
./mvnw test -Dparallel=methods -DthreadCount=4

Test Output Locations

Output           Location                                  Description
Test Results     target/surefire-reports/                  TestNG XML reports
Allure Data      target/allure-results/                    Allure report data
Analytics Logs   target/analytics-logs/test-events.jsonl   JSONL for AI analysis
Screenshots      target/allure-results/*.png               Failure screenshots
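
Since each line of test-events.jsonl is a standalone JSON object, the Python side can parse the log with the standard library alone. A minimal sketch of the idea (the real parser lives in ai-analysis/src/log_parser.py; this standalone parse_jsonl is illustrative):

```python
import json

def parse_jsonl(lines):
    """Parse an iterable of JSONL lines into a list of event dicts."""
    events = []
    for line in lines:
        line = line.strip()
        if line:  # skip blank lines
            events.append(json.loads(line))
    return events

# Example with in-memory lines; in practice, iterate over the open log file
sample = [
    '{"test_name": "testLogin", "status": "PASSED"}',
    '',
    '{"test_name": "testLogout", "status": "FAILED"}',
]
print(len(parse_jsonl(sample)))  # 2
```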

📊 AI Analysis Dashboard

Dashboard Pages

Page                Description
📊 Dashboard        Overview with pass/fail metrics, pie charts, duration analysis
📋 Test Events      Full list of all test events with filters and CSV export
🔴 Failures         Detailed failure view with error messages and stacktraces
🔍 Similar Search   AI-powered semantic search for similar past failures
📈 Analytics        Timeline visualization, failures by class, duration statistics

Screenshots

The dashboard provides:

  • Real-time test metrics
  • Interactive charts (Plotly)
  • Semantic similarity search
  • CSV export functionality
  • Failure pattern analysis

🧠 How AI Similarity Search Works

The Problem with Traditional Search

Traditional Keyword Search:
  "Connection timeout" ≠ "Network unavailable"  ❌ No match!

AI Embedding Search:
  "Connection timeout" ≈ "Network unavailable"  ✅ 85% similar!

How Embeddings Work

  1. Text → Vector: Error messages are converted to 384-dimensional vectors
  2. Semantic Meaning: Similar meanings = similar vectors
  3. Cosine Similarity: Find closest matches in vector space

Error Message                          → Vector (384 numbers)
─────────────────────────────────────────────────────────────
"Element not found: login button"      → [0.23, -0.15, 0.87, ...]
"Could not locate login button"        → [0.21, -0.14, 0.85, ...]  ← 92% Similar!
"Database connection failed"           → [-0.45, 0.32, 0.11, ...]  ← Different
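
The comparison above boils down to cosine similarity between embedding vectors. A toy sketch using 3-dimensional vectors in place of the real 384-dimensional embeddings (the numbers are illustrative, not real model output):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

v_login_a = [0.23, -0.15, 0.87]   # "Element not found: login button"
v_login_b = [0.21, -0.14, 0.85]   # "Could not locate login button"
v_db      = [-0.45, 0.32, 0.11]   # "Database connection failed"

print(cosine_similarity(v_login_a, v_login_b))  # close to 1.0
print(cosine_similarity(v_login_a, v_db))       # much lower
```

Qdrant ranks stored vectors against a query vector with measures like this; cosine distance is one of its supported metrics.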

Privacy: 100% Local

Component               Location    Data Sharing
Sentence-Transformers   Local CPU   Model downloaded once, runs offline
Qdrant Vector DB        In-memory   Data stays in RAM
Dashboard               localhost   Never exposed externally

No data leaves your machine!


βš™οΈ Configuration

Java Framework Configuration

testng.xml - Test suite configuration:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="Automation Test Suite" parallel="methods" thread-count="2">
    <listeners>
        <listener class-name="io.qameta.allure.testng.AllureTestNg"/>
        <listener class-name="com.automation.listeners.AllureScreenshotListener"/>
    </listeners>
    <test name="Login Tests">
        <classes>
            <class name="com.automation.tests.LoginTest"/>
        </classes>
    </test>
</suite>

AI Analysis Configuration

ai-analysis/config/settings.py:

# Paths
LOGS_PATH = "../target/analytics-logs/test-events.jsonl"

# Embedding Model (runs locally)
EMBEDDING_MODEL = "all-MiniLM-L6-v2"  # 384 dimensions
EMBEDDING_DIMENSION = 384

# Vector Store
QDRANT_COLLECTION = "test_failures"
SIMILARITY_THRESHOLD = 0.3  # Minimum similarity score
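
The SIMILARITY_THRESHOLD above acts as a cutoff on search results. A sketch of how such a filter might be applied (filter_hits and the hit dicts are hypothetical; the actual filtering happens inside the analysis service):

```python
SIMILARITY_THRESHOLD = 0.3  # minimum similarity score, as in settings.py

def filter_hits(hits, threshold=SIMILARITY_THRESHOLD):
    """Keep only matches at or above the minimum similarity score."""
    return [h for h in hits if h["score"] >= threshold]

hits = [
    {"test_name": "testLoginButton", "score": 0.85},
    {"test_name": "testCheckout", "score": 0.12},
]
print(filter_hits(hits))  # only the 0.85 match survives
```

A low threshold like 0.3 errs toward recall; raising it returns fewer, more confident matches.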

🔌 API Reference

REST API Endpoints

Base URL: http://localhost:8000

Endpoint          Method   Description
/                 GET      Health check
/api/v1/health    GET      Detailed health status
/api/v1/index     POST     Load and index test logs
/api/v1/summary   GET      Get failure summary
/api/v1/similar   POST     Find similar failures
/api/v1/analyze   POST     Analyze specific failure

Example API Calls

# Health check
curl http://localhost:8000/

# Index logs
curl -X POST http://localhost:8000/api/v1/index

# Get summary
curl http://localhost:8000/api/v1/summary

# Find similar failures
curl -X POST http://localhost:8000/api/v1/similar \
  -H "Content-Type: application/json" \
  -d '{
    "error_message": "Element not found: login button",
    "top_k": 5
  }'

API Response Examples

Index Response:

{
  "status": "success",
  "total_events": 10,
  "failures_indexed": 1,
  "passed": 4,
  "failed": 1
}

Similar Failures Response:

[
  {
    "score": 0.85,
    "test_name": "testLoginButton",
    "class_name": "LoginTest",
    "message": "Element not found: signin-btn",
    "timestamp": "2024-01-15T10:30:00Z"
  }
]
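
The same /api/v1/similar call can be made from Python with only the standard library. A sketch that builds (but does not send) the request, mirroring the curl example above (build_similar_request is a hypothetical helper, not part of the framework):

```python
import json
from urllib import request

def build_similar_request(base_url, error_message, top_k=5):
    """Build a POST request for the /api/v1/similar endpoint."""
    payload = json.dumps({"error_message": error_message, "top_k": top_k}).encode("utf-8")
    return request.Request(
        f"{base_url}/api/v1/similar",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_similar_request("http://localhost:8000", "Element not found: login button")
print(req.get_method(), req.full_url)
# Send with request.urlopen(req) while the API server is running
```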

🛠 Extending the Framework

Adding New Page Objects

// src/main/java/com/automation/pages/NewPage.java
package com.automation.pages;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;

public class NewPage extends BasePage {

    @FindBy(id = "element-id")
    private WebElement myElement;

    public NewPage(WebDriver driver) {
        super(driver);
    }

    public void clickElement() {
        click(myElement);
    }
}

Adding New Tests

// src/test/java/com/automation/tests/NewTest.java
package com.automation.tests;

import io.qameta.allure.Description;
import org.testng.annotations.Test;

public class NewTest extends BaseTest {

    @Test
    @Description("Test description for Allure")
    public void testNewFeature() {
        // Your test logic
    }
}

🔧 Jenkins CI/CD Integration

Jenkinsfile Features

The included Jenkinsfile provides a complete CI/CD pipeline:

┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│  Checkout   │───▶│    Build    │───▶│  Run Tests  │───▶│   Allure    │───▶│ AI Analysis │
│             │    │   Compile   │    │  (Params)   │    │   Report    │    │   Report    │
└─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘

Pipeline Parameters

Parameter            Options                           Description
BROWSER              chrome, firefox, edge             Browser for test execution
EXECUTION_ENV        local, browserstack, lambdatest   Test execution environment
TEST_SUITE           testng.xml                        TestNG suite file
GENERATE_AI_REPORT   true/false                        Generate AI analysis report

Jenkins Setup

  1. Install Required Plugins:

    • Allure Jenkins Plugin
    • Pipeline Plugin
    • Credentials Plugin
  2. Configure Credentials:

    Jenkins → Manage Jenkins → Credentials → Add:
    - browserstack-username (Secret text)
    - browserstack-access-key (Secret text)
    - lambdatest-username (Secret text)
    - lambdatest-access-key (Secret text)
    
  3. Configure Tools:

    Jenkins → Global Tool Configuration:
    - JDK17 (Java 17)
    - Maven3 (Maven 3.8+)
    
  4. Create Pipeline Job:

    • New Item → Pipeline
    • Pipeline → Definition: Pipeline script from SCM
    • SCM: Git → Repository URL: your-repo-url
    • Script Path: Jenkinsfile

Running Pipeline

# Trigger with default parameters
# Or use Jenkins UI to customize:
# - Browser: chrome/firefox/edge
# - Environment: local/browserstack/lambdatest
# - AI Report: enabled/disabled

☁️ Cloud Testing Configuration

BrowserStack Setup

  1. Get Credentials: https://www.browserstack.com/accounts/settings

  2. Set Environment Variables:

    export BROWSERSTACK_USERNAME=your_username
    export BROWSERSTACK_ACCESS_KEY=your_access_key
  3. Run Tests:

    ./mvnw test -Dexecution.env=browserstack -Dbrowser=chrome
  4. View Results: https://automate.browserstack.com/dashboard

LambdaTest Setup

  1. Get Credentials: https://accounts.lambdatest.com/detail/profile

  2. Set Environment Variables:

    export LAMBDATEST_USERNAME=your_username
    export LAMBDATEST_ACCESS_KEY=your_access_key
  3. Run Tests:

    ./mvnw test -Dexecution.env=lambdatest -Dbrowser=firefox
  4. View Results: https://automation.lambdatest.com/timeline

Cloud Capabilities

Both platforms are configured with:

  • Video recording of test execution
  • Network logs capture
  • Console logs capture
  • Screenshot on failure
  • Selenium 4.27 support

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit changes (git commit -m 'Add amazing feature')
  4. Push to branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 Generate Beautiful HTML Report

Generate a standalone HTML report with charts and failure details:

cd ai-analysis
source venv/bin/activate
python report_generator.py -o test_report.html

This generates a beautiful report with:

  • 📊 Pass/fail metrics with charts
  • 🔴 Detailed failure cards with stacktraces
  • ✅ Passed test summary
  • 🧠 AI analysis information

📋 Context YAML

The context.yaml file contains all the information needed to recreate this project. Use it as:

  1. Reference - Understand the project structure and components
  2. AI Context - Provide to AI assistants to understand the project
  3. Documentation - Quick overview of all technologies and configurations

# Key sections in context.yaml:
project:        # Project metadata
stack:          # Technology versions (Java, Python, libraries)
structure:      # File/folder organization
features:       # Detailed feature descriptions
commands:       # All CLI commands needed
test_site:      # Test application details

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


👨‍💻 Author

Pramod Dutta


⭐ Star History

If you find this project useful, please consider giving it a star! ⭐


Made with ❤️ by The Testing Academy

