A production-ready, 4-layer test automation framework with AI-powered test generation via the Model Context Protocol (MCP). Built with Python and Selenium. Designed for teams that need structure and for manual testers transitioning to automation.
Tests → Roles → Tasks → Pages → WebInterface
This framework provides a clean, maintainable architecture for browser test automation. Instead of writing spaghetti Selenium code, you get organized layers that separate concerns:
- Tests - What you're verifying (assertions only)
- Roles - Who is doing it (GuestUser, RegisteredUser, Admin)
- Tasks - What workflow they're performing (login, browse catalog, checkout)
- Pages - How to interact with UI elements (click, type, select)
Result: Tests that are easy to read, maintain, and scale.
| You Are | This Framework Helps You |
|---|---|
| Manual Tester | Learn automation with clear patterns, not messy scripts |
| Junior Automator | Skip the "how do I organize this?" phase |
| QA Team | Get a ready-made structure instead of building from scratch |
| Solo Developer | Production-grade architecture for your projects |
- 4-Layer Architecture - Clean separation of concerns
- Page Object Model - UI changes don't break your tests
- Role-Based Testing - Test as different user personas
- Built-in Logging - Automatic logging at every layer
- HTML Reports - Professional test reports out of the box
- Chrome Support - Works with Chrome browser (other browsers can be added via WebInterface)
- JSON Test Data - Externalized, maintainable test data
- 11 MCP Tools - Complete test generation workflow
- User Story → Tests - Convert requirements to test scenarios automatically
- Element Discovery - AI discovers page elements for you
- Code Generation - Generate Page Objects, Tasks, Roles, Tests
- Agent Agnostic - Works with Claude Code, Cursor, Windsurf, or any MCP-compatible AI agent
Get running in 5 minutes:
- Python 3.12 or higher
- Chrome browser installed
- Git
```bash
# Clone the repository
git clone https://github.com/solosza/py-selenium-framework-mcp.git
cd py-selenium-framework-mcp

# Create virtual environment (recommended)
python -m venv venv

# Activate virtual environment
# Windows:
venv\Scripts\activate
# macOS/Linux:
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt
```

```bash
# Run all tests
pytest tests/ -v

# Run with HTML report
pytest tests/ -v --html=reports/report.html --self-contained-html

# Run specific test category
pytest tests/catalog/ -v

# Run in headless mode (no browser window)
pytest tests/ -v --headless=True
```

After running tests, open `reports/report.html` in your browser for a detailed report.
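Note that `--headless` is not a built-in pytest flag; it is registered as a custom option in `tests/conftest.py`. A minimal sketch of how such an option is typically wired, with hypothetical fixture names (the framework's actual fixtures differ):

```python
# Simplified sketch of a custom --headless option; tests/conftest.py
# in this repo defines the real fixtures.
import pytest
from selenium import webdriver


def pytest_addoption(parser):
    parser.addoption("--headless", action="store", default="False",
                     help="Run Chrome without a visible window")


@pytest.fixture
def web_driver(request):
    options = webdriver.ChromeOptions()
    if request.config.getoption("--headless") == "True":
        options.add_argument("--headless=new")  # modern Chrome headless mode
    driver = webdriver.Chrome(options=options)
    yield driver
    driver.quit()
```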
Current test suite results against the Automation Practice demo site:
| Test Suite | Passing/Total | Status |
|---|---|---|
| Invalid Credentials | 6/6 | All Passing |
| Browse Category | 4/4 | All Passing |
| Filter Products | 4/4 | All Passing |
| Sort by Price | 4/4 | All Passing |
| Registration | 1/6 | Environment Issues* |
| Valid Login | 0/2 | Environment Issues* |
| Quick View | 1/4 | Website Issues* |
| Logout | 0/3 | Skipped (requires login) |
Total: 18 passing, 10 failing, 3 skipped
*Failures are due to test environment (no pre-registered user on live site) and website bugs - not framework issues. The framework architecture is fully validated.
The framework uses JSON configuration files:
`framework/resources/config/environment_config.json`
Default configuration points to Automation Practice demo site. To test your own application, update the URL:
```json
{
  "DEFAULT": {
    "url": "https://your-app-url.com"
  }
}
```
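For illustration, reading this file takes only a few lines; a minimal sketch assuming a hypothetical `load_config` helper (the framework's fixtures handle this for you):

```python
# Hypothetical sketch of config loading; the framework's conftest fixtures
# already do this under the hood.
import json
from pathlib import Path

CONFIG_PATH = Path("framework/resources/config/environment_config.json")


def load_config(env: str = "DEFAULT") -> dict:
    """Return the settings block for the requested environment."""
    with CONFIG_PATH.open(encoding="utf-8") as f:
        return json.load(f)[env]
```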
Test user data is stored in `tests/data/test_users.json`.
Example structure:
```json
{
  "registered_user": {
    "email": "test@example.com",
    "password": "SecurePass123!"
  },
  "new_user": {
    "email": "newuser@example.com",
    "first_name": "John",
    "last_name": "Doe"
  }
}
```

The framework currently supports the Chrome browser. The architecture is designed to support additional browsers by extending the WebInterface class - contributions welcome!
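As a rough illustration, browser support boils down to which WebDriver the WebInterface is handed. A minimal sketch, assuming a hypothetical `create_driver` factory (the real driver factory lives under `framework/resources/chromedriver/`):

```python
# Hypothetical sketch of multi-browser support; names are assumptions,
# not the framework's actual API.
from selenium import webdriver


def create_driver(browser: str = "chrome"):
    """Return a WebDriver for the requested browser."""
    if browser == "chrome":
        return webdriver.Chrome()
    if browser == "firefox":
        # New browsers slot in here without touching Pages, Tasks, or Roles.
        return webdriver.Firefox()
    raise ValueError(f"Unsupported browser: {browser}")
```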
```bash
# All tests
pytest tests/ -v

# By marker (category)
pytest tests/ -v -m catalog
pytest tests/ -v -m auth
pytest tests/ -v -m smoke

# Single test file
pytest tests/catalog/test_browse_category.py -v

# Single test function
pytest tests/catalog/test_browse_category.py::test_browse_women_category -v

# With HTML report
pytest tests/ -v --html=reports/report.html --self-contained-html

# Headless mode (CI/CD)
pytest tests/ -v --headless=True
```

A passing test looks like this:

```text
tests/catalog/test_browse_category.py::test_browse_women_category PASSED [25%]
```
- PASSED - Test succeeded
- FAILED - Assertion failed (check report for details)
- ERROR - Test crashed (setup/teardown issue)
- SKIPPED - Intentionally skipped (missing prerequisite)
```text
┌─────────────────────────────────────────────────────────────┐
│ TEST LAYER │
│ • Arrange, Act, Assert only │
│ • Calls ONE role method │
│ • Asserts via Page Object state-checks │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ ROLE LAYER │
│ • User personas (GuestUser, RegisteredUser) │
│ • Orchestrates multiple tasks into workflows │
│ • Contains user credentials/context │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ TASK LAYER │
│ • Single business operations (login, add_to_cart) │
│ • Calls page object methods │
│ • Domain-focused, not UI-focused │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ PAGE OBJECT LAYER │
│ • One class per page/component │
│ • Locators as class constants │
│ • Atomic methods (click, type, select) │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────┐
│ WEB INTERFACE │
│ • Selenium wrapper │
│ • Handles waits, logging, error handling │
│ • Single point of browser interaction │
└─────────────────────────────────────────────────────────────┘
```
| Problem | How This Framework Solves It |
|---|---|
| UI changes break everything | Locators in ONE place (Page Objects) |
| Tests are hard to read | Tests only have assertions, logic is in Roles/Tasks |
| Code duplication | Tasks are reusable across tests |
| Hard to test different users | Roles encapsulate user-specific behavior |
| Debugging is painful | Automatic logging at every layer |
GOLDEN RULES:
1. Locators ONLY in Page Objects
2. Tasks/Roles return NOTHING (None)
3. Tests assert via Page Object state-check methods
4. No inheritance - composition only
5. One responsibility per layer
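To make these rules concrete, here is a small hypothetical illustration (class names and the web interface methods are assumptions, not the framework's actual API):

```python
# Hypothetical illustration of the golden rules; not the framework's real code.
class LoginPage:
    # Rule 1: locators live ONLY here, as class constants.
    EMAIL_INPUT = ("id", "email")
    ERROR_BANNER = ("css selector", ".alert-danger")

    def __init__(self, web):
        self.web = web  # Rule 4: composition, not inheritance

    def type_email(self, email):
        self.web.type(self.EMAIL_INPUT, email)  # atomic UI action

    def has_error(self):
        # Rule 3: tests assert via state-check methods like this one.
        return self.web.is_visible(self.ERROR_BANNER)


class AuthTasks:
    def __init__(self, web):
        self.login_page = LoginPage(web)  # Rule 5: one responsibility per layer

    def submit_bad_email(self, email):
        # Rule 2: tasks perform work and return None.
        self.login_page.type_email(email)
```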
For complete architecture documentation, see FRAMEWORK.md.
The framework includes an MCP (Model Context Protocol) server that enables AI coding agents to generate tests automatically. This is the bridge from manual testing to automation.
Model Context Protocol is an open standard that allows AI agents to interact with external tools. This framework provides 11 specialized tools for test automation.
The MCP server works with any MCP-compatible AI coding agent:
- Claude Code (Anthropic)
- Cursor (with MCP support)
- Windsurf (Codeium)
- Any future MCP-compatible agent
Code Generation Tools (Bottom-Up Workflow: 1→6)

| # | Tool | Purpose |
|---|---|---|
| 1 | `generate_tests_from_user_story` | Convert user story → test scenarios (Given-When-Then) |
| 2 | `discover_page_elements` | Discover interactive elements on any page |
| 3 | `generate_page_object` | Generate POM code from discovered elements |
| 4 | `generate_task` | Create task workflow methods from POM |
| 5 | `generate_role` | Create role class from tasks |
| 6 | `generate_test_template` | Generate complete pytest test code |
Utility Tools

| Tool | Purpose |
|---|---|
| `list_tests` | Catalog all available tests |
| `get_framework_structure` | Map framework architecture |
| `run_test` | Execute tests and return results |
| `analyze_failure` | AI-powered debugging for failed tests |
| `get_test_coverage` | Calculate test coverage |
- Python 3.12+
- An MCP-compatible AI agent (Claude Code, Cursor, etc.)
```bash
# Install MCP server dependencies
pip install -r mcp_server/requirements.txt
```

For Claude Code:

```bash
# Add the MCP server to Claude Code
claude mcp add qa-automation -- python /path/to/mcp_server/server.py
```

For Other Agents: configure your agent to run the MCP server:

```text
Command: python mcp_server/server.py
```
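Many agents are configured through an `mcpServers` JSON block; a representative entry might look like the following (the exact file name and schema vary by agent, so treat this as an assumption and check your agent's documentation):

```json
{
  "mcpServers": {
    "qa-automation": {
      "command": "python",
      "args": ["/path/to/mcp_server/server.py"]
    }
  }
}
```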
You describe what you want to test. The AI builds everything for you.
You say:
"I want to test that a guest user can browse the Women category and see products"
The AI does the rest:
- Creates test scenarios from your description
- Discovers buttons, links, and forms on the page
- Generates the Page Object (code that knows how to click and type)
- Generates the Task (code that performs actions like "browse category")
- Generates the Role (code that represents "guest user")
- Generates the test (ties it all together)
- Runs the test and shows you the results
What you get:
```python
def test_browse_women_category(web_interface, config):
    guest = GuestUser(web_interface, config["url"])
    product_list_page = ProductListPage(web_interface)
    guest.browse_category("Women")
    assert product_list_page.has_products()
```

That's it. You described the test in plain English. The AI wrote the code.
```text
You: I need to test that users can filter products by size

AI:  I'll create that test for you.

     First, here are the test scenarios I came up with:
     1. Filter by size S - shows only small products
     2. Filter by size M - shows only medium products
     3. Filter by invalid size - handles gracefully

     I found the filter elements on the page and generated all the code.
     The test is ready. Want me to run it?

You: Yes

AI:  Test passed! The filter correctly shows only size "S" products.
```
No manual coding required. You describe, the AI builds.
When a test fails, the AI doesn't just report the error. It fixes it.
Website changed?

```text
AI: The test failed - the "Add to Cart" button moved to a different location.
    I found the new element and updated the Page Object.
    Re-running... Test passed!
```

Generated code has errors?

```text
AI: The test failed - my generated code had a syntax error.
    I checked the framework patterns, fixed the code to match.
    Re-running... Test passed!
```
The AI analyzes failures, looks at existing code patterns in your framework, updates the code to match, and re-runs automatically. Whether it's a website change or a code generation mistake, the AI learns from your codebase and fixes it.
The MCP tools live in `mcp_server/tools/`. Each tool follows a consistent pattern:

```python
async def tool_name(arguments: dict) -> str:
    """Tool description."""
    # Implementation
    return json.dumps(result)
```

To add new tools (see the sketch below):

- Create `mcp_server/tools/tool_XX_name.py`
- Register it in `mcp_server/server.py`
- Follow existing patterns
New to automation? Start here:
Think of it like your manual testing:
| Manual Testing | Framework Layer |
|---|---|
| "I'm testing as a guest user" | Role (GuestUser) |
| "I need to browse products" | Task (browse_category) |
| "I click the Women menu link" | Page Object (click method) |
| "I verify products are displayed" | Test (assertion) |
Open `tests/catalog/test_browse_category.py`:

```python
def test_browse_women_category(web_interface, config):
    # Arrange - Set up
    guest = GuestUser(web_interface, config["url"])
    product_list_page = ProductListPage(web_interface)

    # Act - Do the action
    guest.browse_category("Women")

    # Assert - Verify result
    assert product_list_page.has_products(), "Products should be displayed"
```

That's it. The test reads like a user story.
- The test creates a `GuestUser`
- It calls `guest.browse_category("Women")`
- The role calls `catalog_tasks.browse_category("Women")`
- The task calls `navigation_menu.click_category("Women")`
- The page object calls `self.web.click(locator)`
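Spelled out as code, the chain is pure delegation; each layer only knows the one directly below it (a condensed sketch, not the framework's exact classes):

```python
# Condensed sketch of the delegation chain; the real classes under
# framework/ carry more context and logging.
class NavigationMenu:                          # Page Object layer
    def __init__(self, web):
        self.web = web

    def click_category(self, name):
        self.web.click(("link text", name))    # the single browser interaction


class CatalogTasks:                            # Task layer
    def __init__(self, web):
        self.navigation_menu = NavigationMenu(web)

    def browse_category(self, name):
        self.navigation_menu.click_category(name)


class GuestUser:                               # Role layer
    def __init__(self, web, url):
        self.url = url                         # real role would navigate here
        self.catalog_tasks = CatalogTasks(web)

    def browse_category(self, name):
        self.catalog_tasks.browse_category(name)
```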
Copy an existing test and modify it:

- Pick a test file to copy
- Change the category or action
- Run it: `pytest tests/your_test.py -v`
Read FRAMEWORK.md for complete patterns and examples.
```text
py-selenium-framework-mcp/
├── framework/ # Core framework (reusable)
│ ├── interfaces/
│ │ └── web_interface.py # Selenium wrapper
│ ├── pages/ # Page Objects
│ │ ├── auth/ # Login, Registration pages
│ │ └── catalog/ # Product listing, filters
│ ├── tasks/ # Business operations
│ │ ├── common/ # Shared tasks (login, navigate)
│ │ └── catalog/ # Catalog-specific tasks
│ ├── roles/ # User personas
│ │ ├── auth/ # RegisteredUser
│ │ └── guest/ # GuestUser
│ └── resources/
│ ├── config/ # Environment settings
│ ├── chromedriver/ # Driver factory
│ └── utilities/ # Logging, helpers
│
├── tests/ # Test scenarios
│ ├── conftest.py # Pytest fixtures
│ ├── data/ # Test data (JSON)
│ ├── auth/ # Authentication tests
│ └── catalog/ # Catalog tests
│
├── docs/ # Documentation (gitignored)
│ └── DEFECT_LOG.md # Issue tracking
│
├── mcp_server/ # MCP AI integration
│ ├── server.py # MCP server entry point
│ ├── requirements.txt # MCP dependencies
│ ├── tools/ # 11 MCP tools
│ │ ├── tool_01_generate_tests_from_user_story.py
│ │ ├── tool_02_discover_page_elements.py
│ │ ├── tool_03_generate_page_object.py
│ │ ├── tool_04_generate_task.py
│ │ ├── tool_05_generate_role.py
│ │ ├── tool_06_generate_test_template.py
│ │ └── ... (tools 07-11)
│ └── utils/ # Code generation utilities
│
├── FRAMEWORK.md # Complete architecture reference
├── CLAUDE.md # AI assistant instructions
└── README.md # This file
```
The included tests serve dual purposes:
These tests demonstrate production-quality automation:
- 33 test scenarios across authentication and catalog
- 18 passing tests validating framework architecture
- Real-world error handling and edge cases
Use these as templates for your own tests:
| Test File | What It Demonstrates |
|---|---|
| `test_browse_category.py` | Simple happy path testing |
| `test_invalid_credentials.py` | Negative testing patterns |
| `test_filter_products.py` | Complex user workflows |
| `test_sort_by_price.py` | State verification patterns |
The 4-layer architecture (Roles → Tasks → Pages → WebInterface) is framework-agnostic. The current implementation uses Selenium, but the patterns work anywhere.
Wanted: Community implementations for:
- Playwright (Python)
- Cypress (JavaScript)
- Puppeteer (Node.js)
The key abstraction point is WebInterface - implement the same interface for your framework of choice.
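A minimal sketch of that abstraction point as a Python Protocol, with a Playwright-flavored stub (the method names are assumptions based on how the layers are described above, not the framework's actual signatures):

```python
# Hypothetical interface sketch; the real contract is defined by
# framework/interfaces/web_interface.py.
from typing import Protocol, Tuple

Locator = Tuple[str, str]  # e.g. ("css selector", ".alert-danger")


class WebInterfaceProtocol(Protocol):
    def open(self, url: str) -> None: ...
    def click(self, locator: Locator) -> None: ...
    def type(self, locator: Locator, text: str) -> None: ...
    def is_visible(self, locator: Locator) -> bool: ...


class PlaywrightWebInterface:
    """Drop-in replacement sketch backed by Playwright's sync API."""

    def __init__(self, page):  # page: playwright.sync_api.Page
        self.page = page

    def open(self, url: str) -> None:
        self.page.goto(url)

    def click(self, locator: Locator) -> None:
        self.page.click(self._selector(locator))

    def type(self, locator: Locator, text: str) -> None:
        self.page.fill(self._selector(locator), text)

    def is_visible(self, locator: Locator) -> bool:
        return self.page.is_visible(self._selector(locator))

    def _selector(self, locator: Locator) -> str:
        strategy, value = locator
        # Simplified: assume CSS selectors; a real port would map all strategies.
        return value
```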
- Fork the repository
- Create a feature branch: `git checkout -b feature/your-feature`
- Follow existing code patterns
- Submit a pull request
- Follow the 4-layer architecture
- Locators only in Page Objects
- Tasks/Roles return `None`
- Use `@autologger` decorators (sketched below)
- See FRAMEWORK.md for patterns
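The `@autologger` decorator is defined in the framework's utilities; a minimal sketch of what such a decorator typically does (illustrative, not the framework's implementation):

```python
# Illustrative sketch of an auto-logging decorator; see the framework's
# utilities package for the real @autologger.
import functools
import logging

logger = logging.getLogger("framework")


def autologger(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logger.info("-> %s args=%s kwargs=%s", func.__qualname__, args[1:], kwargs)
        try:
            result = func(*args, **kwargs)
            logger.info("<- %s", func.__qualname__)
            return result
        except Exception:
            logger.exception("!! %s failed", func.__qualname__)
            raise
    return wrapper
```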
ChromeDriver version mismatch

```text
selenium.common.exceptions.SessionNotCreatedException
```

Solution: The framework uses webdriver-manager, which auto-downloads the correct driver. If issues persist, update your Chrome browser.

Element not found / Timeout

```text
selenium.common.exceptions.TimeoutException
```

Solution: The target website may be slow. Increase the timeout in web_interface.py (see the sketch below) or check that the site is accessible.
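For reference, the timeout lives in an explicit-wait wrapper; this is a sketch of the kind of helper web_interface.py likely uses, with an assumed default (the actual value and names may differ):

```python
# Sketch of an explicit-wait wrapper; the framework's actual default
# timeout and method names may differ.
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

DEFAULT_TIMEOUT = 10  # seconds; raise this if the target site is slow


def wait_for_visible(driver, locator, timeout=DEFAULT_TIMEOUT):
    """Block until the element is visible, or raise TimeoutException."""
    return WebDriverWait(driver, timeout).until(
        EC.visibility_of_element_located(locator)
    )
```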
Tests fail on first run

Some tests require a registered user that doesn't exist on the demo site. This is expected - see Test Results.
- Architecture questions: Read FRAMEWORK.md
- Bug reports: Open an issue at GitHub Issues with Python version, error message, and steps to reproduce
MIT License - See LICENSE for details.
Free to use, modify, and distribute. Attribution appreciated but not required.
Built by solosza as a portfolio project demonstrating QA engineering and test architecture skills.
- Architecture patterns inspired by enterprise test automation frameworks
- Demo site: Automation Practice
- Selenium WebDriver team
- pytest and pytest-html maintainers
Questions? Open an issue or check the documentation.
Found this useful? Star the repo to show support!