A robust and maintainable Python test automation framework built with pytest and Selenium WebDriver. This framework follows the Page Object Model (POM) design pattern and provides a comprehensive set of utilities for web application testing.
- Features
- Project Structure
- Prerequisites
- Installation
- Quick Start
- Usage
- Examples
- Page Object Model
- Configuration
- Reports
- Logging
- Advanced Features
- Troubleshooting
- Why Selenium in 2025?
- Framework Architecture
- Roadmap
- Contributing
- License
- Page Object Model: Organized page classes for better maintainability
- Cross-Browser Testing: Support for Chrome and Firefox browsers with auto-download capabilities (Edge support planned for next release)
- Multi-Environment Support: Built-in support for multiple environments (dev, qa, staging, prod) with easy switching
- Headless Mode: Run tests without opening browser windows
- Parallel Execution: Run tests in parallel with configurable worker count for faster execution
- Multiple Report Formats: HTML reports with screenshots and interactive Allure reports
- Auto-Driver Management: PATH detection first, with automatic driver download as a fallback
- Logging: Comprehensive logging of test execution
- Configuration Management: Flexible configuration for different environments
- Data-Driven Testing: Support for external test data
- Screenshot Capture: Automatic screenshots on test failures with Allure integration
pytest-selenium-framework/
├── config/ # Configuration files
│ ├── config.json # Framework settings (browser, timeouts, reporting)
│ └── environments.json # Environment-specific settings (dev, qa, staging, prod)
├── data/ # Test data files
│ └── fixtures.json # Test data for data-driven tests
├── docs/ # Documentation assets
│ └── screenshots/ # Allure report screenshots for documentation
├── examples/ # Example files
│ ├── __init__.py # Package initialization
│ └── environment_usage_example.py # Example demonstrating environment configuration
├── logs/ # Log files (auto-generated)
│ └── test_run_*.log # Test execution logs with timestamps
├── pages/ # Page Object Model classes
│ ├── __init__.py # Package initialization
│ ├── base_page.py # Base page with common methods
│ └── selenium_page.py # Selenium website page
├── reports/ # Test reports (auto-generated)
│ ├── html/ # HTML reports with screenshots
│ ├── screenshots/ # Failure screenshots with error logs
│ ├── allure-results/ # Allure test results (raw data)
│ └── allure-report/ # Generated Allure HTML reports
├── tests/ # Test files
│ ├── __init__.py # Package initialization
│ ├── conftest.py # Pytest configuration and fixtures
│ └── test_framework_capabilities.py # Comprehensive test suite showcasing all framework features
├── utilities/ # Utility classes
│ ├── __init__.py # Package initialization
│ ├── logger.py # Logging configuration
│ ├── screenshot_helper.py # Screenshot capture utility
│ ├── driver_factory.py # WebDriver factory for unified driver creation
│ ├── config_manager.py # Configuration manager for loading and caching configs
│ └── exceptions.py # Custom exception classes
├── requirements.txt # Project dependencies
├── setup.py # Package setup script
├── pytest.ini # Pytest configuration with Allure support
├── ROADMAP.md # Planned features and roadmap
├── LICENSE # License file
└── README.md # Project documentation
Before installing the framework, ensure you have:
- Python 3.8+ (check with python --version)
- pip package manager
- One or more browsers installed:
- Chrome/Chromium
- Firefox
After installation, you can verify your setup:
# Check Python version
python --version # Should be 3.8+
# Verify packages installed
pip list | grep selenium
pip list | grep pytest
# Test driver availability (optional - drivers will auto-download if not in PATH)
chromedriver --version # If installed via Homebrew
geckodriver --version  # If installed via Homebrew

If drivers aren't in PATH, the framework will automatically download them on first run using webdriver-manager.
- Clone the repository:

  git clone https://github.com/yashwant-das/pytest-selenium-framework.git
  cd pytest-selenium-framework

- Create and activate a virtual environment:

  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate

- Install dependencies:

  pip install -r requirements.txt

- Browser Driver Setup:
The framework supports multiple driver installation methods:
Option A: Automatic Download (Recommended)
- The framework automatically downloads drivers using webdriver-manager
- No manual setup required for Chrome and Firefox
Option B: Manual Installation (More Reliable)
# macOS with Homebrew
brew install chromedriver geckodriver

Option C: PATH Detection
- The framework first checks for drivers in your system PATH
- Falls back to webdriver-manager if PATH drivers not found
- Provides the most reliable driver resolution (see the sketch below)
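The PATH-first, auto-download-fallback behaviour can be pictured with a short sketch. This is only an illustration of the idea, not the framework's actual DriverFactory code, and the function name is made up:

# Illustrative sketch only -- not the actual utilities/driver_factory.py implementation
import shutil

from selenium import webdriver
from selenium.webdriver.chrome.service import Service

def create_chrome_driver() -> webdriver.Chrome:
    driver_path = shutil.which("chromedriver")  # 1. check the system PATH first
    if driver_path is None:
        from webdriver_manager.chrome import ChromeDriverManager
        driver_path = ChromeDriverManager().install()  # 2. fall back to auto-download
    return webdriver.Chrome(service=Service(driver_path))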
Get up and running in minutes:
# 1. Clone and setup
git clone https://github.com/yashwant-das/pytest-selenium-framework.git
cd pytest-selenium-framework
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
# 2. Run your first test
pytest tests/test_framework_capabilities.py::TestPageObjectModel::test_page_object_navigation -v
# 3. View the HTML report
open reports/html/report.html

That's it! The framework will automatically handle driver downloads, create necessary directories, and generate reports.
Basic Test Execution:
# Run all tests
pytest
# Run specific test file
pytest tests/test_framework_capabilities.py
# Run specific test class
pytest tests/test_framework_capabilities.py::TestPageObjectModel
# Run specific test method
pytest tests/test_framework_capabilities.py::TestPageObjectModel::test_page_object_navigation

Browser Selection:
# Run tests in Chrome (default)
pytest --browser chrome
# Run tests in Firefox
pytest --browser firefox

Execution Modes:
# Run tests in headless mode (faster, no UI)
pytest --headless
# Run tests in parallel (much faster for large test suites)
pytest -n 4
# Combine options for optimal performance
pytest --browser chrome --headless -n 4

Test Filtering:
# Run only smoke tests
pytest -m smoke
# Run only Chrome browser tests
pytest -m browser_chrome
# Run only parallel execution tests
pytest -m parallel
# Run tests excluding slow tests
pytest -m "not slow"Advanced Examples:
# Full parallel execution with headless mode
pytest -n 4 --headless --browser chrome
# Cross-browser testing
pytest --browser chrome -m smoke
pytest --browser firefox -m smoke
# Run specific test class in parallel
pytest tests/test_framework_capabilities.py::TestParallelExecution -n 4

| Option | Description | Example |
|---|---|---|
| --browser | Browser to use (chrome, firefox) | --browser firefox |
| --headless | Run tests in headless mode | --headless |
| -n or --numprocesses | Number of parallel workers | pytest -n 4 |
| -m | Run tests matching marker | pytest -m smoke |
| -v | Verbose output | pytest -v |
| -k | Run tests matching expression | pytest -k "navigation" |
Performance Comparison:
- Sequential execution: ~20 seconds for 7 tests
- Parallel execution (4 workers): ~10 seconds for 7 tests (50% faster)
- Headless mode: Additional 20-30% performance improvement
The framework includes example files to help you get started:
See examples/environment_usage_example.py for a complete example demonstrating:
- How to use ConfigManager to access environment configurations
- Creating environment-aware page objects
- Testing across multiple environments
- Pytest test patterns with environment support
Run the example:
python examples/environment_usage_example.py

This example shows practical usage patterns that you can adapt for your own test automation needs.
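For a rough illustration of the same pattern inside a pytest test, assuming the framework exposes a driver fixture through tests/conftest.py (the test itself is hypothetical):

# Hypothetical test -- adapt to your own environments and pages
from utilities.config_manager import ConfigManager

def test_homepage_loads(driver):                 # 'driver' fixture assumed from tests/conftest.py
    env = ConfigManager().get_env_config("qa")   # switch environments without code changes
    driver.get(env["url"])                       # navigate to the environment's base URL
    assert "Selenium" in driver.title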
The framework follows the Page Object Model design pattern:
- BasePage: Contains common methods for all pages
- Specific Pages: Each page has its own class with specific locators and methods
Example:
from pages.base_page import BasePage

class SeleniumPage(BasePage):
    # Page-specific locators
    LOCATORS = {
        "title": ("tag name", "h1"),
        "search_input": ("id", "searchbox"),
        "search_button": ("id", "search-button")
    }

    def search(self, query):
        """Perform a search on the page"""
        self.find_element(*self.LOCATORS["search_input"]).send_keys(query)
        self.find_element(*self.LOCATORS["search_button"]).click()

The framework uses a well-organized configuration structure split across multiple files for better maintainability:
The config/environments.json file manages environment-specific settings and supports multiple environments. The framework provides built-in support for environment switching without code changes.
Current Configuration:
{
"default": {
"url": "https://www.selenium.dev",
"description": "Default environment for demo purposes"
},
"environments": {
"dev": {
"url": "https://www.selenium.dev",
"description": "Development environment"
},
"qa": {
"url": "https://www.selenium.dev",
"description": "QA testing environment"
},
"staging": {
"url": "https://www.selenium.dev",
"description": "Staging environment for pre-production testing"
},
"prod": {
"url": "https://www.selenium.dev",
"description": "Production environment"
}
}
}

For Your Project:
Update the URLs and add any environment-specific settings:
{
"default": {
"url": "https://www.example.com",
"api_key": "default_key"
},
"environments": {
"dev": {
"url": "https://dev.example.com",
"api_key": "dev_key",
"timeout": 30
},
"qa": {
"url": "https://qa.example.com",
"api_key": "qa_key",
"timeout": 20
},
"staging": {
"url": "https://staging.example.com",
"api_key": "staging_key"
},
"prod": {
"url": "https://www.example.com",
"api_key": "prod_key"
}
}
}

Accessing Environment Config:
from utilities.config_manager import ConfigManager
config_manager = ConfigManager()
# Get default environment
default = config_manager.get_env_config()
# Get specific environment
qa_config = config_manager.get_env_config("qa")
prod_config = config_manager.get_env_config("prod")

The config/config.json file contains browser-specific settings:
{
"browser": {
"default": "chrome",
"headless": false,
"implicit_wait": 10,
"window_size": {
"width": 1920,
"height": 1080
},
"chrome": {
"arguments": [
"--no-sandbox",
"--disable-dev-shm-usage"
],
"preferences": {}
},
"firefox": {
"arguments": [],
"preferences": {}
}
}
}

Configuration Options:
- default: Default browser to use
- headless: Run browsers in headless mode by default
- implicit_wait: Default implicit wait time in seconds
- window_size: Default browser window size
- chrome/firefox: Browser-specific settings
- arguments: List of browser command-line arguments
- preferences: Browser preferences (Chrome) or options (Firefox)
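To make the mapping concrete, the sketch below shows one way the chrome settings above could be turned into Selenium ChromeOptions. It is illustrative only; the function name is made up and the real option handling is done inside the framework's DriverFactory:

# Illustrative sketch -- not the framework's actual code
from selenium.webdriver.chrome.options import Options

def build_chrome_options(browser_cfg: dict) -> Options:
    opts = Options()
    if browser_cfg.get("headless"):
        opts.add_argument("--headless=new")              # run without a visible window
    for arg in browser_cfg.get("chrome", {}).get("arguments", []):
        opts.add_argument(arg)                           # e.g. --no-sandbox
    prefs = browser_cfg.get("chrome", {}).get("preferences", {})
    if prefs:
        opts.add_experimental_option("prefs", prefs)     # Chrome preference dictionary
    return opts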
The data/fixtures.json file contains test-specific data used in data-driven tests. The structure is flexible and can be customized for your application:
{
"selenium": {
"navigation": {
"about": "About",
"downloads": "Downloads",
"documentation": "Documentation",
"projects": "Projects",
"support": "Support",
"blog": "Blog"
},
"components": {
"webdriver": "Selenium WebDriver",
"ide": "Selenium IDE",
"grid": "Selenium Grid"
}
}
}

Usage in Tests:
from utilities.config_manager import ConfigManager
config_manager = ConfigManager()
test_data = config_manager.get_test_data()
# Access navigation items
nav_items = test_data["selenium"]["navigation"]
# Access components
components = test_data["selenium"]["components"]

Customization:
You can add any test data structure needed for your application. The framework loads this data and makes it available
through the test_data fixture in your tests.
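For example, a data-driven test can consume the fixture directly. This is a minimal sketch; the asserted values mirror the sample fixtures.json above and the test name is made up:

# Minimal sketch of a data-driven test using the test_data fixture
def test_navigation_labels(test_data):
    nav = test_data["selenium"]["navigation"]
    assert nav["blog"] == "Blog"          # values come from data/fixtures.json
    assert "documentation" in nav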
The framework generates comprehensive test reports in multiple formats:
Standard pytest-html reports are automatically generated:
- Location: reports/html/report.html
- Features:
- Test results summary
- Screenshots on failure
- Test execution time
- Environment details
- Browser information
Interactive Allure reports provide enhanced visualization with automatically populated sections:
Generate and View Allure Reports:
# Generate Allure report
allure generate reports/allure-results -o reports/allure-report --clean
# Serve interactive report
allure serve reports/allure-results

Allure Dashboard:
Interactive Allure test result dashboard with comprehensive test execution overview
Allure Features:
- Interactive test result dashboard
- Environment Section: Automatically populated with browser, OS, Python version, framework details
- Executors Section: Shows executor information (automatically generated)
- Trend Section: Test execution trends and history across multiple runs (automatically tracked)
- Screenshots and error logs attached to failed tests
- Test categorization and filtering
- Timeline view of test execution
Allure Report Views:
- Categories view showing test categorization and grouping
- Test suites view displaying organized test suite structure
- Graphs view with test execution statistics and visualizations
- Timeline view showing test execution timeline and duration
- Behaviours view displaying test scenarios organized by behavior
- Packages view showing test organization by package structure
Automatic Allure Configuration:
The framework automatically generates the following files on every test run:
- reports/allure-results/environment.properties contains:
  - Browser name and version (auto-detected)
  - Python version
  - OS information (name, version, architecture)
  - Framework details
  - Test URL from configuration
  - Headless mode setting
- reports/allure-results/executor.json contains:
  - Executor name and type
  - Build information
  - Report metadata
- reports/allure-results/history/ is automatically copied from previous reports for Trend visualization
Sample Allure Commands:
# View existing Allure report
allure serve reports/allure-results
# Generate static report
allure generate reports/allure-results -o reports/allure-report
# Open static report in browser (macOS)
open reports/allure-report/index.html

Note: The Environment, Executors, and Trend sections are automatically populated. No manual configuration is required. The Trend section will appear after the second test run, when history data becomes available.
Screenshots are automatically captured and attached to reports:
- On Failure: All failed tests automatically capture screenshots
- Integration: Screenshots are embedded in both HTML and Allure reports
- Storage: reports/screenshots/ with timestamps and test names
- Error Logs: Detailed error information saved alongside screenshots
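Conceptually, the capture is driven by a pytest reporting hook. The snippet below is a minimal sketch of that idea, not the framework's exact conftest.py/ScreenshotHelper code, and it assumes a driver fixture is in play:

# Minimal sketch of a failure-screenshot hook
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        driver = item.funcargs.get("driver")   # assumes a 'driver' fixture was used
        if driver is not None:
            driver.save_screenshot(f"reports/screenshots/{item.name}.png")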
The framework provides comprehensive logging with automatic timestamping:
- Test Execution Logs: logs/test_run_YYYYMMDD_HHMMSS.log
- Console Logging: Real-time test execution information
- Log Levels: INFO, WARNING, ERROR with proper formatting
Log Features:
- Automatic log rotation with timestamps
- Detailed browser configuration logging
- Driver detection and initialization logs
- Test execution progress tracking
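A minimal sketch of the timestamped-file idea behind utilities/logger.py (illustrative; the function name is made up and the real module may differ in detail):

# Illustrative sketch of timestamped file logging
import logging
import os
import time

def get_logger(name: str = "framework") -> logging.Logger:
    logger = logging.getLogger(name)
    if not logger.handlers:                    # configure the logger only once
        os.makedirs("logs", exist_ok=True)     # mirrors the framework's auto-created directories
        logger.setLevel(logging.INFO)
        handler = logging.FileHandler(f"logs/test_run_{time.strftime('%Y%m%d_%H%M%S')}.log")
        handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s"))
        logger.addHandler(handler)
    return logger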
The framework supports parallel test execution using pytest-xdist:
# Run tests with 4 parallel workers
pytest -n 4
# Optimal parallel execution with headless mode
pytest -n 4 --headless --browser chrome

Benefits:
- 50% faster execution with 4 workers
- Dynamic load balancing with work-stealing scheduling
- Independent browser sessions for each worker
- Automatic test distribution across workers
Smart driver detection and management:
- PATH Detection: Checks system PATH for installed drivers first
- Auto-Download: Falls back to webdriver-manager for automatic download
- Version Compatibility: Ensures driver-browser compatibility
- Cross-Platform: Works on macOS, Windows, and Linux
The framework supports multiple environment configurations through config/environments.json. See the Configuration section for detailed information on environment setup and usage.
Quick Example:
from utilities.config_manager import ConfigManager
# Get environment config
config_manager = ConfigManager()
env_config = config_manager.get_env_config("qa")  # or "dev", "staging", "prod"

Required directories (logs, reports, screenshots) are automatically created when tests run. No manual setup is needed.
Driver Not Found Errors:
- The framework will automatically download drivers if they are not found in PATH
- If auto-download fails, manually install drivers:

  # macOS
  brew install chromedriver geckodriver
  # Or download drivers from the browser vendor websites
Browser Not Installed:
- Ensure the browser is installed before running tests
- Tests will fail with clear error messages if browser is missing
Import Errors:
- Verify all dependencies are installed: pip install -r requirements.txt
- Check the Python version: python --version (should be 3.8+)
Permission Issues:
- On macOS/Linux, ensure drivers have execute permissions
- Framework automatically fixes permissions for auto-downloaded drivers
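The permission fix amounts to adding the execute bit to the downloaded binary, roughly like this (illustrative sketch; the function name is made up):

# Illustrative sketch of making a downloaded driver executable
import os
import stat

def ensure_executable(driver_path: str) -> None:
    mode = os.stat(driver_path).st_mode
    os.chmod(driver_path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)   # chmod +x equivalent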
Configuration Errors:
- Verify config/config.json exists and is valid JSON
- Check that config/environments.json and data/fixtures.json are valid
If tests fail, check:
- Log files in the logs/ directory for detailed error messages
- Screenshots in reports/screenshots/ for visual debugging
- HTML/Allure reports for test execution details
The framework provides clear error messages to help diagnose issues quickly.
While modern tools like Playwright have gained popularity, Selenium remains valuable:
- Ecosystem Maturity: Extensive community, documentation, and integrations
- Enterprise Adoption: Many organizations standardize on Selenium
- Python Integration: Excellent pytest integration and Python ecosystem support
- Cross-Browser Support: Proven support for Chrome and Firefox
- Learning Value: Demonstrates solid framework design patterns
- Existing Infrastructure: Works with existing Selenium Grid setups
This framework provides a production-ready, maintainable foundation for Selenium-based test automation.
DriverFactory (utilities/driver_factory.py):
- Unified driver creation for all browsers
- Smart PATH detection with webdriver-manager fallback
- Consistent configuration handling
ConfigManager (utilities/config_manager.py):
- Singleton pattern for configuration loading
- Caches all configs to avoid repeated file reads
- Provides typed access to browser, environment, and test data configs
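A condensed sketch of the singleton-plus-cache idea (illustrative only; the real utilities/config_manager.py has more methods and error handling):

# Illustrative sketch -- see utilities/config_manager.py for the real implementation
import json
from typing import Optional

class ConfigManager:
    _instance = None
    _cache: dict = {}

    def __new__(cls):
        if cls._instance is None:              # one shared instance across the test run
            cls._instance = super().__new__(cls)
        return cls._instance

    def _load(self, path: str) -> dict:
        if path not in self._cache:            # each file is read from disk only once
            with open(path) as f:
                self._cache[path] = json.load(f)
        return self._cache[path]

    def get_env_config(self, env: Optional[str] = None) -> dict:
        data = self._load("config/environments.json")
        return data["environments"][env] if env else data["default"]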
ScreenshotHelper (utilities/screenshot_helper.py):
- Centralized screenshot capture logic
- Automatic Allure and HTML report integration
- Support for failure and pass screenshots
BasePage (pages/base_page.py):
- Common page object methods with type hints
- Explicit waits and error handling
- Screenshot integration for debugging
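A shortened sketch of the explicit-wait helpers (illustrative; the real pages/base_page.py offers more methods):

# Illustrative sketch -- the real BasePage has more helpers and error handling
from selenium.webdriver.remote.webdriver import WebDriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class BasePage:
    def __init__(self, driver: WebDriver, timeout: int = 10):
        self.driver = driver
        self.wait = WebDriverWait(driver, timeout)

    def find_element(self, by: str, value: str):
        # wait explicitly until the element is present instead of relying on implicit waits
        return self.wait.until(EC.presence_of_element_located((by, value)))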
See ROADMAP.md for planned features and upcoming releases.
Next Release (v0.2.0):
- Microsoft Edge browser support
This is a personal test automation framework project, but suggestions and improvements are welcome!
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
MIT License - feel free to use this as a template for your own test automation framework.
Built with ❤️ using Python, Selenium, pytest, and Allure