This framework helps you perform robust end-to-end testing for web applications and APIs. It supports parallel test execution, configurable environments (local or cloud), and cross-browser testing, and it uses the Page Object Model (POM) for maintainable UI tests. It also offers simple, reusable utilities for both UI actions and API interactions.
- **Cross-Browser Testing (Local & Cloud):**
  - Local execution supports multiple browsers (Chrome, Firefox, Edge).
  - Cloud integration with BrowserStack or Sauce Labs is available via configuration.
  - Command-line options override config file values for quick switching.
- **Parallel Test Execution:**
  - Use `pytest-xdist` to run tests concurrently on multiple browsers.
- **Beautiful HTML Reporting:**
  - Automatic HTML report generation using `pytest-html`.
  - The report includes a dedicated column showing the browser on which each test ran.
  - Reports are generated at `report.html` by default if no output path is provided.
- **Rerun Failed Tests:**
  - Failed tests are automatically rerun using `pytest-rerunfailures`.
  - By default, tests are retried up to 2 times with a 5-second delay between attempts.
- **Page Object Model (POM):**
  - Centralized page objects encapsulate UI interactions.
  - An example page (`pages/example_page.py`) demonstrates a basic use case.
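A minimal sketch of the POM layering described above. The class and method names are illustrative assumptions, not the framework's actual API; `driver` stands in for a Selenium WebDriver instance.

```python
# Sketch in the style of pages/base_page.py and pages/example_page.py.
# `driver` is any object exposing the Selenium WebDriver interface.

class BasePage:
    """Base class holding the driver and generic navigation helpers."""

    def __init__(self, driver):
        self.driver = driver

    def open(self, url):
        self.driver.get(url)

    def title(self):
        return self.driver.title


class ExamplePage(BasePage):
    """Example page object; URL matches base_url in config.yaml."""

    URL = "https://www.saucedemo.com/"

    def load(self):
        self.open(self.URL)
        return self
```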
- **Separation of Concerns:**
  - UI tests: use Selenium WebDriver wrapped inside common actions (`utils/ui_actions.py`).
  - API tests: use a common API actions utility based on `requests` (`utils/api_actions.py`).
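The API side can be pictured as a thin wrapper over a `requests` session. This is a hedged sketch of what `utils/api_actions.py` might contain; the class and method names are assumptions.

```python
import requests


class ApiActions:
    """Sketch of a utils/api_actions.py-style wrapper around requests."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()

    def _url(self, endpoint):
        # Join base URL and endpoint without doubling slashes.
        return f"{self.base_url}/{endpoint.lstrip('/')}"

    def get(self, endpoint, **kwargs):
        return self.session.get(self._url(endpoint), **kwargs)

    def post(self, endpoint, json=None, **kwargs):
        return self.session.post(self._url(endpoint), json=json, **kwargs)
```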
- **Flexible Configuration:**
  - All key settings (execution mode, browser list, URLs, cloud credentials) are maintained in a YAML file (`config/config.yaml`).
  - Command-line arguments (e.g., `--execution`, `--browsers`) take precedence over configuration file values.
- **Test Organization:**
  - UI and API tests live in separate files (`tests/test_ui.py` and `tests/test_api.py`).
  - UI fixtures are auto-attached only to tests marked with `@pytest.mark.ui`, so API tests aren't affected.
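One way to auto-attach UI fixtures only to `ui`-marked tests is a collection hook in `conftest.py`. The sketch below assumes a fixture named `ui_setup`; the framework's real fixture name may differ.

```python
# Hypothetical conftest.py hook: inject the browser fixture only into
# tests carrying the `ui` marker, so API tests never start a WebDriver.
def pytest_collection_modifyitems(config, items):
    for item in items:
        if item.get_closest_marker("ui"):
            item.fixturenames.append("ui_setup")  # assumed fixture name
```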
```
automation_framework/
├── config/
│   └── config.yaml          # Main configuration file
├── pages/
│   ├── base_page.py         # Base class for all page objects
│   └── example_page.py      # Example page using the POM
├── tests/
│   ├── test_ui.py           # UI tests using Selenium
│   └── test_api.py          # API tests using requests
├── utils/
│   ├── browser_setup.py     # Logic to instantiate WebDriver (local/cloud)
│   ├── ui_actions.py        # Common UI actions (open URL, quit browser)
│   └── api_actions.py       # Common API actions (GET, POST, etc.)
├── conftest.py              # Pytest fixtures and test parameterization
├── pytest.ini               # Pytest configuration file (markers, etc.)
└── requirements.txt         # Python dependencies
```
- **Clone the Repository:**

  ```bash
  git clone https://github.com/a-suraj-bhatti/PythonSeleniumFramework.git
  cd automation_framework
  ```
- **Create a Python Virtual Environment:**

  ```bash
  python -m venv venv
  ```
- **Activate the Virtual Environment:**
  - On Windows:

    ```bash
    venv\Scripts\activate
    ```

  - On macOS/Linux:

    ```bash
    source venv/bin/activate
    ```

- **Install Dependencies:**

  ```bash
  pip install -r requirements.txt
  ```
Edit the `config/config.yaml` file to set your environment:

```yaml
execution: local                 # Options: "local" or "cloud"
browsers:
  - chrome
  - firefox
base_url: "https://www.saucedemo.com/"
api_base_url: "http://api.example.com"
cloud_provider:
  name: "browserstack"           # Options: "browserstack" or "saucelabs"
  username: "your_username"
  access_key: "your_access_key"
```
- **Local Execution:** The tests will run on your specified browsers.
- **Cloud Execution:** Change `execution` to `cloud` for remote runs (make sure your `cloud_provider` section is properly populated).

You can override any of these settings with command-line options when running tests (e.g., `pytest --execution=cloud --browsers="chrome,firefox"`).
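The precedence rule (CLI over YAML) boils down to a simple merge. The helper names below are illustrative; in the real framework the base dict would come from `yaml.safe_load` of `config/config.yaml`.

```python
def parse_browsers(value):
    """Split a --browsers value like "chrome,firefox" into a list."""
    return [b.strip() for b in value.split(",") if b.strip()]


def apply_cli_overrides(file_config, cli_options):
    """Return a config dict where non-None CLI options win over YAML values."""
    merged = dict(file_config)
    for key, value in cli_options.items():
        if value is not None:
            merged[key] = value
    return merged
```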
UI tests are written in `tests/test_ui.py` and are marked with `@pytest.mark.ui`. They run on multiple browsers as defined in the configuration or by CLI overrides.

Run UI tests with:

```bash
pytest -m ui
```

For parallel execution (if you have `pytest-xdist` installed):

```bash
pytest -m ui -n 4
```
API tests are written in `tests/test_api.py` and use the `api_setup` fixture.

Run API tests with:

```bash
pytest -m api
```

Note: API tests are not influenced by the UI-specific fixtures, so no browsers will be launched for them.
- **Default HTML Report:**
  An HTML report is automatically generated at `report.html` if you do not specify another output file. This report includes a column indicating which browser was used for each UI test.
- **Generating a Report Manually:**
  Simply run:

  ```bash
  pytest --html=report.html
  ```

  The report will be created and saved as `report.html`.
- **Re-run Failed Tests:**
  Failed tests will be retried up to 2 times (with a 5-second delay between attempts). This is configured globally in `pytest.ini`.
- **Customization:**
  You can override the defaults from the command line:

  ```bash
  pytest --reruns 3 --reruns-delay 2
  ```
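The global rerun defaults described above would typically live in `pytest.ini` via `addopts`; a sketch, assuming the marker names used by the tests and values matching the defaults stated here:

```ini
[pytest]
addopts = --reruns 2 --reruns-delay 5
markers =
    ui: UI tests that need a browser
    api: API tests
```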
- **Command-Line Overrides:**
  - Override the execution type:

    ```bash
    pytest --execution=cloud
    ```

  - Provide a custom list of browsers:

    ```bash
    pytest --browsers="chrome,firefox,edge"
    ```

  - Override URLs:

    ```bash
    pytest --base_url="https://new-url.com" --api_base_url="http://new-api.com"
    ```

- **Parallel Execution:** With `pytest-xdist`, you can run tests in parallel across multiple processes:

  ```bash
  pytest -m ui -n 4
  ```

- **Cloud Testing:** Ensure your cloud credentials are correctly set in `config/config.yaml` when running in cloud mode.
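For cloud mode, `utils/browser_setup.py` presumably builds a remote hub URL from the `cloud_provider` section. A hedged sketch: the hub hostnames are the providers' public Selenium endpoints (Sauce Labs URLs are region-specific), but the function name and structure are assumptions.

```python
def remote_hub_url(cloud_provider):
    """Build a Selenium remote URL from the cloud_provider config section."""
    hubs = {
        "browserstack": "https://{user}:{key}@hub-cloud.browserstack.com/wd/hub",
        "saucelabs": "https://{user}:{key}@ondemand.us-west-1.saucelabs.com/wd/hub",
    }
    template = hubs[cloud_provider["name"]]
    return template.format(user=cloud_provider["username"],
                           key=cloud_provider["access_key"])
```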
Happy Testing!