A comprehensive test automation framework using Playwright and Behave for BDD-style testing, with rich HTML and Allure reporting.
- ✅ Page Object Model (POM) - Clean, maintainable test architecture
- ✅ Automatic Wait Handling - Built-in waits for reliable tests
- ✅ BDD Support - Write tests in Gherkin (Given/When/Then)
- ✅ Traditional pytest - Also supports standard pytest tests
- ✅ API Testing - Comprehensive REST API testing with JWT validation
- ✅ Multi-Browser - Chromium, Firefox, WebKit
- ✅ Multi-Platform - Windows, macOS, Linux
- ✅ Parallel Execution - Run tests concurrently
- ✅ Rich Reporting - HTML, Allure reports with screenshots
- ✅ Microsoft SSO Support - Automated login with MFA handling
py-automation-testing/
├── features/                    # BDD feature files (Gherkin)
│   ├── api/                     # API test scenarios
│   │   └── test.feature
│   └── web/                     # Web UI test scenarios
│       ├── home_page.feature
│       └── login.feature
├── steps/                       # BDD step definitions
│   ├── api/                     # API step definitions
│   │   └── test_keycloak_steps.py
│   └── web/                     # Web step definitions
│       ├── test_home_page_steps.py
│       └── test_login_steps.py
├── pages/                       # Page Object Model
│   ├── base_page.py             # Base class with automatic waits
│   ├── home_page.py             # Home page object
│   └── login_page.py            # Login page object
├── tests/                       # Traditional pytest tests
│   └── web/
│       ├── test_sample_homepage.py
│       └── test_login.py
├── config/                      # Configuration
│   └── settings.py              # Unified configuration (web + API)
├── utils/                       # Utility functions
│   └── api_helper.py            # API testing utilities
├── examples/                    # Example scripts
│   └── api_usage_example.py
├── reports/                     # Test reports and screenshots
│   ├── allure_results/
│   └── screenshots/
├── conftest.py                  # Pytest fixtures and hooks
├── pytest.ini                   # Pytest configuration
├── requirements.txt             # Python dependencies
├── run_api_tests.sh             # API test runner script
├── .env                         # Environment variables (not in git)
└── *.md                         # Documentation files
# Clone the repository
cd py-automation-testing
# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Install Playwright browsers
playwright install chromium

Create a .env file in the project root:
# Browser settings
BROWSER=chromium
HEADLESS=true
PORTAL_BASE_URL=https://dev-aisoc-fe.qualgo.dev
# Test credentials (optional)
TEST_EMAIL=your.email@domain.com
TEST_PASSWORD=YourPassword123
TEST_MFA_CODE=123456

# Run all tests with Behave
behave
# Run tests with HTML report
behave --format html --outfile reports/behave_report.html --format pretty
# Run tests with Allure report
behave --format allure_behave.formatter:AllureFormatter --outfile reports/allure_results --format pretty
# Run with specific tags
behave --tags=@smoke
behave --tags=@OV_03
# Run specific tags with Allure report
behave --tags=@smoke --format allure_behave.formatter:AllureFormatter --outfile reports/allure_results --format pretty
# Using Make commands (recommended)
make test # Run all tests
make test-html # Run with HTML report
make test-allure # Run with Allure report
make test-report # Run with both reports
# View generated reports
make report # Open HTML report
make allure-serve       # Serve Allure report

Feature File (features/api/test.feature):
Feature: Keycloak Authentication API

  @api @authentication @positive
  Scenario: Successful authentication with valid credentials
    Given the Keycloak token endpoint is "https://nonprod-common-keycloak.qualgo.dev/..."
    And I have the following authentication credentials:
      | field         | value            |
      | client_id     | be-admin         |
      | client_secret | your-secret      |
      | username      | user@example.com |
      | password      | Password123@     |
      | grant_type    | password         |
    When I send a POST request to the token endpoint
    Then the response status code should be 200
    And the response should contain "access_token"
    And the access token should be a valid JWT token

Run API Tests:
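Before wiring the scenario into Behave, the "valid JWT token" assertion can be exercised directly: a JWT is three dot-separated, base64url-encoded segments. The stdlib-only sketch below checks structure with a hand-built placeholder token; real validation should also verify the signature (e.g. with PyJWT against Keycloak's public key).

```python
import base64
import json

def decode_jwt_unverified(token: str) -> dict:
    """Split a JWT into header/payload and base64url-decode them.

    Structural check only -- it does NOT verify the signature.
    """
    try:
        header_b64, payload_b64, _signature = token.split(".")
    except ValueError:
        raise ValueError("JWT must have exactly three dot-separated parts")

    def _decode(part: str) -> dict:
        # base64url segments drop padding; restore it before decoding
        padded = part + "=" * (-len(part) % 4)
        return json.loads(base64.urlsafe_b64decode(padded))

    return {"header": _decode(header_b64), "payload": _decode(payload_b64)}

def _b64url(obj: dict) -> str:
    """Helper to build a throwaway, unsigned token for demonstration."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

token = ".".join([
    _b64url({"alg": "RS256", "typ": "JWT"}),
    _b64url({"sub": "user@example.com", "azp": "be-admin"}),
    "fake-signature",
])

decoded = decode_jwt_unverified(token)
assert decoded["header"]["typ"] == "JWT"
assert decoded["payload"]["azp"] == "be-admin"
```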
# Or using pytest directly
pytest features/api/test.feature -v

Feature File (features/web/login.feature):
Feature: User Login

  @web @smoke
  Scenario: Successful login
    Given I am on the sign-in page
    When I click the "Sign in with Microsoft" button
    And I enter my email "user@example.com"
    And I click the "Next" button
    Then I should see the dashboard

Step Definition (steps/web/test_login_steps.py):
@given("I am on the sign-in page")
def navigate_to_sign_in(login_page):
login_page.goto_sign_in()
@when('I click the "Sign in with Microsoft" button')
def click_sso(login_page):
login_page.click_sign_in_with_microsoft()Run BDD Tests:
pytest steps/web/test_login_steps.py -v

Test File (tests/web/test_login.py):
def test_successful_login(login_page, test_credentials):
    login_page.goto_sign_in()
    login_page.complete_full_login(
        email=test_credentials["email"],
        password=test_credentials["password"],
        mfa_code=test_credentials["mfa_code"],
    )
    assert login_page.verify_dashboard_heading("Dashboard")

Run pytest Tests:
pytest tests/web/test_login.py -v

All page objects inherit from BasePage, which provides automatic wait management:
from pages.base_page import BasePage

class MyPage(BasePage):
    def __init__(self, page, base_url):
        super().__init__(page, base_url)

    def click_button(self):
        # Automatically waits for navigation/load
        self.click_and_wait("button#submit")

    def navigate_to_page(self):
        # Automatically waits for page load
        self.navigate(f"{self.base_url}/page")

home = HomePage(page, base_url)
home.goto() # Navigate with auto-wait
home.click_sign_in() # Click and wait
title = home.get_title()     # Get page title

login = LoginPage(page, base_url)
login.goto_sign_in()
login.complete_full_login(email, password, mfa_code)
assert login.verify_dashboard_heading("Dashboard")

# BDD tests
pytest steps/ -v
# Traditional tests
pytest tests/ -v
# All tests
pytest -v

# Smoke tests (quick validation)
pytest -m smoke -v
# Regression tests (comprehensive)
pytest -m regression -v
# Web tests
pytest -m common -v
# Login tests
pytest -m login -v
# Combination
pytest -m "web and smoke" -v# Specific BDD test
pytest steps/web/test_login_steps.py -v -k "Successful login"
# Specific pytest test
pytest tests/web/test_login.py::TestLogin::test_successful_login_with_mfa -v

# Firefox
BROWSER=firefox pytest -m common -v
# WebKit (Safari)
BROWSER=webkit pytest -m common -v
# Chromium (default)
BROWSER=chromium pytest -m common -v

Run tests in parallel using pytest-xdist:
# Run with 4 workers
pytest -m common -n 4 -v
# Run with auto-detection
pytest -m common -n auto -v

# Set in .env
HEADLESS=false
# Or via environment variable
HEADLESS=false pytest -m smoke -v

This framework supports two powerful reporting formats:
# Using Make commands (Recommended)
make test-html # HTML report
make test-allure # Allure report
make test-report # Both reports
# Using Behave directly
behave --format allure_behave.formatter:AllureFormatter --outfile reports/allure_results --format pretty
behave --tags=@OV_03 --format allure_behave.formatter:AllureFormatter --outfile reports/allure_results --format pretty
# View reports
make report # Open HTML report
make allure-serve # Serve Allure interactively
allure serve reports/allure_results   # Serve Allure directly

# Run with HTML report
python run_tests.py --html
# Run with Allure report
python run_tests.py --allure
# Run with both reports
python run_tests.py --both
# Run specific tags with reports
python run_tests.py --tags @smoke --html
python run_tests.py --tags @OV_03 --allure

# HTML report only
behave --format html --outfile reports/behave_report.html --format pretty
# Allure report only
behave --format allure_behave.formatter:AllureFormatter --outfile reports/allure_results --format pretty
# Both reports
behave --format html --outfile reports/behave_report.html \
--format allure_behave.formatter:AllureFormatter --outfile reports/allure_results \
--format pretty
# With specific tags
behave --tags=@smoke --format allure_behave.formatter:AllureFormatter --outfile reports/allure_results --format pretty
behave --tags=@High --format allure_behave.formatter:AllureFormatter --outfile reports/allure_results --format pretty
# Exclude tags
behave --tags=~@skip --format allure_behave.formatter:AllureFormatter --outfile reports/allure_results --format pretty
# Multiple tags (AND)
behave --tags=@SecurityPosture --tags=@High --format allure_behave.formatter:AllureFormatter --outfile reports/allure_results --format pretty

- ✅ Single, self-contained HTML file
- ✅ Easy to share via email
- ✅ No additional tools needed
- ✅ Shows pass/fail status
- ✅ Step details and timing
- ✅ Error messages and tracebacks
# Generate and open HTML report
make test-html
make report

- ✅ Rich, interactive web interface
- ✅ Test history and trends
- ✅ Categories and severity
- ✅ Screenshots attached on failure
- ✅ Playwright traces attached
- ✅ Timeline visualization
- ✅ Detailed test analytics
# Generate Allure results (Make)
make test-allure
# Generate Allure results (Behave)
behave --format allure_behave.formatter:AllureFormatter --outfile reports/allure_results --format pretty
# Serve interactively (recommended)
make allure-serve
# or
allure serve reports/allure_results
# Or generate static report
make allure-report
make allure-open

# Example 1: Run specific test scenario with Allure report
behave --tags=@OV_03 --format allure_behave.formatter:AllureFormatter --outfile reports/allure_results --format pretty
allure serve reports/allure_results
# Example 2: Run smoke tests with HTML report
behave --tags=@smoke --format html --outfile reports/behave_report.html --format pretty
open reports/behave_report.html # macOS
# Example 3: Run high priority tests with both reports
behave --tags=@High --format html --outfile reports/behave_report.html \
--format allure_behave.formatter:AllureFormatter --outfile reports/allure_results \
--format pretty
# Example 4: Run Security Posture tests with Allure
behave --tags=@SecurityPosture --format allure_behave.formatter:AllureFormatter --outfile reports/allure_results --format pretty
# Example 5: Run all tests except skipped ones
behave --tags=~@skip --format allure_behave.formatter:AllureFormatter --outfile reports/allure_results --format pretty
# Example 6: Run specific feature file with Allure
behave features/web/overview.feature --format allure_behave.formatter:AllureFormatter --outfile reports/allure_results --format pretty

The framework automatically captures and attaches:
- Full-page screenshots (PNG)
- Playwright traces (ZIP), viewable with `playwright show-trace`
- Page information (URL, title, status)
- Console logs (in debug mode)

All artifacts are:
- Saved to the reports/ directory
- Attached to Allure reports automatically
- Timestamped for easy identification
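The capture itself lives in the framework's failure hooks; the exact hook in conftest.py may differ, but the timestamped-path logic could look roughly like the sketch below. The `on_failure` handler is hypothetical, and the Playwright/Allure calls are shown as comments since they need a live browser session.

```python
from datetime import datetime
from pathlib import Path

def artifact_path(kind: str, test_name: str, root: str = "reports") -> Path:
    """Build a timestamped path such as reports/screenshots/login_20250101_120000.png."""
    ext = {"screenshots": "png", "traces": "zip"}[kind]
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    directory = Path(root) / kind
    directory.mkdir(parents=True, exist_ok=True)  # reports/<kind>/ created on demand
    return directory / f"{test_name}_{stamp}.{ext}"

def on_failure(page, test_name: str) -> None:
    """Hypothetical failure handler; the real hook lives in conftest.py."""
    shot = artifact_path("screenshots", test_name)
    # page.screenshot(path=str(shot), full_page=True)              # Playwright capture
    # allure.attach.file(str(shot), name="screenshot",
    #                    attachment_type=allure.attachment_type.PNG)
```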
reports/
├── behave_report.html   # HTML report
├── allure_results/      # Allure raw results
├── allure_report/       # Generated Allure report
├── screenshots/         # Failure screenshots
└── traces/              # Playwright traces
# HTML Report
make report # Opens HTML report in browser
# Allure Report
make allure-serve # Serve interactively
make allure-report # Generate static report
make allure-open      # Open generated report

make clean-reports    # Remove all reports
make allure-clean     # Remove Allure artifacts only

For detailed reporting documentation, see: docs/REPORTING_GUIDE.md
This includes:
- Complete usage guide
- CI/CD integration examples
- Troubleshooting tips
- Best practices
Configure in pytest.ini:
- @smoke - Quick sanity tests (~5-10 min)
- @regression - Full test suite
- @web - Web UI tests
- @api - API tests
- @login - Login-specific tests
- @authentication - Authentication tests
- @positive - Positive test cases
- @negative - Negative test cases
- @validation - Validation tests
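pytest only treats markers as "known" once they are registered, otherwise it warns with PytestUnknownMarkWarning. Assuming the marker names above, the registration section of pytest.ini might look like this (descriptions are illustrative):

```ini
[pytest]
markers =
    smoke: quick sanity tests (~5-10 min)
    regression: full regression suite
    web: web UI tests
    api: API tests
    login: login-specific tests
    authentication: authentication tests
    positive: positive test cases
    negative: negative test cases
    validation: validation tests
```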
@pytest.mark.web
@pytest.mark.smoke
def test_quick_check():
    pass

@pytest.mark.web
@pytest.mark.regression
def test_detailed_check():
    pass

Run specific markers:
pytest -m smoke -v

PORTAL_BASE_URL=https://your-app-url.com
BROWSER=chromium   # chromium, firefox, webkit
HEADLESS=true # true, false
TEST_EMAIL=user@test.com
TEST_PASSWORD=password123
TEST_MFA_CODE=123456

❌ Old Way (manual waits):
page.click("button")
page.wait_for_load_state("networkidle")
page.wait_for_selector("#element")

✅ New Way (automatic):
page_object.click_and_wait("button")  # Waits automatically!

- BasePage handles all waits
- Every navigation waits for networkidle
- Every click waits for load completion
- Consistent across all page objects
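The wrapping pattern behind this is simple: perform the Playwright action, then always wait for the load state. A minimal sketch of what base_page.py might contain (method names mirror the README's usage; the real implementation may differ, and the recording stub stands in for a Playwright Page just to make the call order visible):

```python
class BasePage:
    """Sketch of the automatic-wait wrapper; see pages/base_page.py for the real one."""

    def __init__(self, page, base_url):
        self.page = page          # expected to expose Playwright's sync Page API
        self.base_url = base_url

    def navigate(self, url):
        # Every navigation waits for the network to go idle
        self.page.goto(url)
        self.page.wait_for_load_state("networkidle")

    def click_and_wait(self, selector):
        # Every click is followed by a load wait, so tests never add their own
        self.page.click(selector)
        self.page.wait_for_load_state("networkidle")

class RecordingPage:
    """Stand-in for a Playwright Page, used only to demonstrate call order."""
    def __init__(self):
        self.calls = []
    def goto(self, url):
        self.calls.append(("goto", url))
    def click(self, selector):
        self.calls.append(("click", selector))
    def wait_for_load_state(self, state):
        self.calls.append(("wait", state))

stub = RecordingPage()
BasePage(stub, "https://example.test").click_and_wait("button#submit")
# The wait always follows the click, with no extra code in the test
assert stub.calls == [("click", "button#submit"), ("wait", "networkidle")]
```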
- ✅ Less flaky tests
- ✅ Cleaner code
- ✅ Consistent behavior
- ✅ No forgotten waits
name: Automated Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.14'
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          playwright install chromium
      - name: Run smoke tests
        env:
          PORTAL_BASE_URL: ${{ secrets.PORTAL_BASE_URL }}
          TEST_EMAIL: ${{ secrets.TEST_EMAIL }}
          TEST_PASSWORD: ${{ secrets.TEST_PASSWORD }}
        run: pytest -m smoke -v --html=reports/report.html
      - name: Upload reports
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: test-reports
          path: reports/

- Check if page loaded completely
- Verify locator is correct
- Increase timeout if needed
- Run in headed mode to debug: HEADLESS=false
- MFA codes expire quickly (30-60 seconds)
- Use test account without MFA
- Or integrate with authenticator API (pyotp)
- See LOGIN_TEST_GUIDE.md for details
- Ensure the .env file has read permissions
- Check the file is in the project root
- Verify not ignored by .gitignore
# Reinstall browsers
playwright install chromium

- README.md (this file) - Project overview and quick start
- BEHAVE_COMMANDS.md - Behave commands quick reference
- docs/REPORTING_GUIDE.md - Complete reporting documentation
- docs/REPORTING_WORKFLOW.md - Reporting workflow diagrams
- run_tests.py - Python test runner with reporting options
- Makefile - Make commands for test execution and reports
- behave.ini - Behave test runner configuration
- config/settings.py - Environment and browser settings
- features/environment.py - Test hooks and setup
- BEHAVE_COMMANDS.md - All Behave commands with examples
- Common patterns and use cases
- Tag-based execution examples
- Report generation commands
- ✅ Encapsulate page logic in page objects
- ✅ Keep tests clean and readable
- ✅ Leverage BasePage methods
- ✅ No manual waits in tests
- ✅ Clear test names
- ✅ Meaningful assertions
- ✅ BDD for business-readable tests
- ✅ pytest for technical tests
- ✅ Reusable setup/teardown
- ✅ Centralized test data
- Create feature branch
- Write tests (BDD or pytest style)
- Ensure all tests pass
- Create pull request
For questions or issues:
- Check documentation files
- Review example tests
- Check conftest.py for fixtures
- Review page objects for available methods
[Your License Here]
- v1.0 - Initial framework with POM and automatic waits
- v1.1 - Added BDD support with pytest-bdd
- v1.2 - Added login feature with Microsoft SSO
- v1.3 - Enhanced reporting and CI/CD support
- v1.4 - Added comprehensive API testing for Keycloak authentication