An iOS automation test project for API, UI, performance, reporting, and platform integration
iOS-Automation-Framework is a complete mobile automation test project for the Yunlu Mall iOS app. It combines API tests, iOS UI tests, performance tests, Allure reports, CI configuration, and a local demo Web UI.
In the broader platform system, this repository is the test-code carrier and first integration sample. It owns how tests are written and executed: pytest/Appium cases, Page Objects, test data, assertions, and report output. A platform such as MeteorTest owns scheduling, executor status, task metadata, and result collection.
Contents:

- Background
- Core Capabilities
- Execution Loop
- Architecture
- Project Structure
- Quick Start
- Platform Integration
- Local Demo Console
- Test Coverage
- Implementation Notes
- Validation and CI
- Roadmap
- License
- Maintainer
Mobile automation projects often start as simple scripts, then gradually become hard to maintain:
- UI locators are scattered across test cases.
- API, UI, and performance tests use different conventions.
- Test data, environment configuration, and report output are not standardized.
- CI can run tests, but local debugging and report inspection are inconvenient.
- A central platform can schedule work, but each test repository still needs a clear contract for suites and commands.
This project addresses those problems by keeping test implementation, local demo tooling, and platform integration metadata in one repository.
Core capabilities:

- Page Object Model for iOS UI automation.
- Data-driven API tests using YAML data and pytest parametrization.
- API, UI, and performance test suites in one test repository.
- Allure report output for local runs, CI runs, and platform-triggered runs.
- GitHub Actions and Fastlane/Jenkins configuration examples.
- Local Web UI for code browsing, controlled test execution, real-time logs, Allure reports, and AI Q&A.
- Platform integration contract through `meteortest.yml`.
Execution loop:

```mermaid
flowchart LR
    Contract[meteortest.yml<br/>suite metadata]
    Platform[MeteorTest / Local Agent<br/>task scheduling]
    Pytest[pytest execution<br/>API / UI / Performance]
    Reports[Reports<br/>logs / Allure / screenshots]
    Feedback[Analysis<br/>local UI / platform reports]
    Contract --> Platform
    Platform --> Pytest
    Pytest --> Reports
    Reports --> Feedback
```
Architecture overview:

```mermaid
flowchart TB
    Pytest[pytest execution layer]
    subgraph Suites[Test suites]
        UI[UI_Automation<br/>Appium + XCUITest]
        API[API_Automation<br/>requests + pytest]
        Perf[Performance<br/>Locust]
    end
    subgraph Infra[Infrastructure]
        Config[config<br/>env settings]
        Utils[utils<br/>logging / HTTP / assertions / screenshots]
        Reports[Reports<br/>Allure output]
    end
    subgraph Tooling[Tooling]
        CI[CI<br/>GitHub Actions / Fastlane / Jenkins]
        WebUI[Local demo console<br/>FastAPI + Alpine.js]
        Contract[meteortest.yml<br/>platform suite contract]
    end
    Pytest --> UI
    Pytest --> API
    Pytest --> Perf
    UI --> Infra
    API --> Infra
    Perf --> Infra
    Infra --> CI
    Infra --> WebUI
    Contract --> Pytest
    Pytest --> Reports
```
Repository layout:

```
iOS-Automation-Framework/
├── API_Automation/
├── UI_Automation/
├── Performance/
├── config/
├── utils/
├── tools/webui/
├── docs/
├── CI/
├── Reports/
├── meteortest.yml
├── requirements.txt
├── pytest.ini
└── conftest.py
```
By responsibility:
- `API_Automation/`: API wrappers, test cases, and YAML test data.
- `UI_Automation/`: Appium UI automation using Page Object Model and XCUITest.
- `Performance/`: Locust performance test scripts.
- `config/`: environment configuration, local settings template, and global settings.
- `utils/`: logging, HTTP client, assertions, and screenshot utilities (a rough sketch follows this list).
- `tools/webui/`: local demo console for browsing files, running tests, viewing logs, and opening reports.
- `docs/`: design notes and platform integration documentation.
- `CI/`: Jenkins and Fastlane examples.
- `Reports/`: generated reports and run artifacts; this directory is git-ignored.
- `meteortest.yml`: suite contract for MeteorTest or another Local Agent.
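As a rough illustration of the `utils/` layer, the sketch below shows one way a logging-aware HTTP client and an assertion helper could be shaped; the class and function names are illustrative, not the repository's actual API.

```python
# Illustrative sketch only: names and signatures are assumptions, not the real utils/ API.
import logging

import requests

logger = logging.getLogger("api")


class ApiClient:
    """Thin wrapper around requests.Session that logs every request and response."""

    def __init__(self, base_url: str, timeout: float = 10.0):
        self.base_url = base_url.rstrip("/")
        self.timeout = timeout
        self.session = requests.Session()

    def request(self, method: str, path: str, **kwargs) -> requests.Response:
        url = f"{self.base_url}/{path.lstrip('/')}"
        logger.info("%s %s", method.upper(), url)
        response = self.session.request(method, url, timeout=self.timeout, **kwargs)
        logger.info("-> %s (%.0f ms)", response.status_code,
                    response.elapsed.total_seconds() * 1000)
        return response


def assert_status(response: requests.Response, expected: int = 200) -> None:
    """Assertion helper that includes part of the response body in the failure message."""
    assert response.status_code == expected, (
        f"expected HTTP {expected}, got {response.status_code}: {response.text[:200]}"
    )
```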
Prerequisites:

- Python 3.9+
- Node.js 18+ for Appium 2.x
- Appium 2.x
- Xcode 14+ for iOS simulators
- Allure command line tool, optional but recommended for report generation
Install Appium and the XCUITest driver:

```bash
npm install -g appium
appium driver install xcuitest
```

Clone the repository and set up a virtual environment:

```bash
git clone https://github.com/JunchenMeteor/iOS-Automation-Framework.git
cd iOS-Automation-Framework
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
cp config/local.yml.example config/local.yml
```

On Windows:

```powershell
python -m venv venv
.\venv\Scripts\activate
pip install -r requirements.txt
copy config\local.yml.example config\local.yml
```

Edit `config/local.yml` with your device name, app path, and test account settings.
Run API tests and generate an Allure report:

```bash
pytest API_Automation/cases -v --alluredir=./Reports/api-results
pytest API_Automation/cases/test_user.py -v
allure generate ./Reports/api-results -o ./Reports/api-report --clean
allure open ./Reports/api-report
```

Start Appium in a separate terminal:

```bash
appium
```

Run UI tests serially:

```bash
pytest UI_Automation/Tests -v -n 0 --alluredir=./Reports/ui-results
```

Run performance tests with Locust:

```bash
cd Performance/locust_scripts
locust -f locustfile.py --host=https://api-dev.yunlu.com
```

A platform or Local Agent should read `meteortest.yml` at the repository root, select a suite from a task, and run the declared command.
Example:

```bash
python -m pytest API_Automation/cases -v -n 0 --alluredir=Reports/platform/local-demo-001/allure-results
python -m pytest UI_Automation/Tests -v -n 0 --alluredir=Reports/platform/local-demo-001/allure-results
```

Platform-triggered API suites use `-n 0` to run serially. The project-level `pytest.ini` enables pytest-xdist with `-n auto` for normal local runs, but serial execution is more stable for Windows Local Agent runs and avoids temporary-directory permission failures.
Platform-triggered runs should write artifacts under:

```
Reports/platform/{task_id}/
├── logs.txt
├── allure-results/
├── allure-report/
└── screenshots/
```
The tested .ipa or .app should be passed by the platform task as app_path or app_url. This repository does not build the app and does not own general-purpose task scheduling.
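As an illustration of this contract, the sketch below shows how a Local Agent might read `meteortest.yml`, run a declared suite command, and write logs under `Reports/platform/{task_id}/`. The `suites`/`command` field names and the task payload are assumptions, not the contract's actual schema.

```python
# Minimal Local Agent sketch. Field names ("suites", "command") and the task
# dict are assumptions; the real schema lives in meteortest.yml.
import subprocess
from pathlib import Path

import yaml  # PyYAML


def run_suite(repo_root: Path, task: dict) -> int:
    contract = yaml.safe_load((repo_root / "meteortest.yml").read_text(encoding="utf-8"))
    suite = contract["suites"][task["suite"]]
    artifacts = repo_root / "Reports" / "platform" / task["task_id"]
    artifacts.mkdir(parents=True, exist_ok=True)

    # Run the declared command and capture combined output as logs.txt.
    with open(artifacts / "logs.txt", "w", encoding="utf-8") as log_file:
        result = subprocess.run(
            suite["command"],
            cwd=repo_root,
            shell=True,
            stdout=log_file,
            stderr=subprocess.STDOUT,
        )
    return result.returncode


if __name__ == "__main__":
    code = run_suite(Path("."), {"suite": "api", "task_id": "local-demo-001"})
    print(f"suite exited with code {code}")
```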
API smoke suites require `API_BASE_URL` to point to the target service. Without it, the API integration tests are still collected but intentionally skipped. When it is set, it overrides the `api.base_url` value from `config/environments.yaml`:
```powershell
$env:TEST_ENV="staging"
$env:API_BASE_URL="https://your-staging-api.example.com"
.venv\Scripts\python.exe -m pytest API_Automation\cases -v -n 0 -m smoke
```

For MeteorTest Local Agent runs, set `API_BASE_URL` in the same shell before starting the Agent so the suite subprocess inherits it.
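One way this behavior could be wired in a `conftest.py` is sketched below; the fixture name and the exact layout of `config/environments.yaml` are assumptions rather than the repository's actual code.

```python
# Illustrative conftest.py fixture: the env-var override wins over
# config/environments.yaml; without a usable URL the API tests are skipped, not failed.
import os
from pathlib import Path

import pytest
import yaml


@pytest.fixture(scope="session")
def api_base_url() -> str:
    override = os.environ.get("API_BASE_URL")
    if override:
        return override.rstrip("/")

    env_name = os.environ.get("TEST_ENV", "dev")
    config = yaml.safe_load(Path("config/environments.yaml").read_text(encoding="utf-8"))
    base_url = (config.get(env_name) or {}).get("api", {}).get("base_url")
    if not base_url:
        pytest.skip("API_BASE_URL is not set; skipping API integration tests")
    return base_url.rstrip("/")
```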
For public-safe local validation, this repository includes a small mock API that covers the current -m smoke API cases. It lets the smoke suite produce real pass/fail results without depending on a private staging backend.
Start the mock API:
```powershell
.venv\Scripts\python.exe -m tools.mock_api.server --host 127.0.0.1 --port 8010
```

In another shell, run the smoke suite against it:

```powershell
$env:API_BASE_URL="http://127.0.0.1:8010"
.venv\Scripts\python.exe -m pytest API_Automation\cases -v -n 0 -m smoke
```

Boundary: the mock API is deterministic local test infrastructure. It is not the real product backend and should not be used to claim production API coverage.
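For illustration only, a deterministic mock endpoint might look roughly like the sketch below; the routes and payloads are assumptions and do not reproduce `tools/mock_api/server`.

```python
# Illustrative deterministic mock: fixed canned responses so smoke assertions
# always see the same data. Routes and payloads are assumptions.
from fastapi import FastAPI

app = FastAPI()


@app.get("/health")
def health() -> dict:
    return {"status": "ok"}


@app.post("/api/user/login")
def login(payload: dict) -> dict:
    # The request body is accepted but ignored; the response never changes.
    return {"code": 0, "data": {"token": "mock-token", "user_id": 1}}


if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="127.0.0.1", port=8010)
```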
The repository includes a local Web UI for debugging and demonstration. It is useful for browsing code, running whitelisted tests, viewing real-time logs, opening Allure reports, and trying project-aware AI Q&A.
It is not a general test platform and is not intended for production deployment.
Start it with:

```bash
python -m uvicorn tools.webui.app:app --host 127.0.0.1 --port 8000
```

Then open http://127.0.0.1:8000 in a browser.

Prepare local settings:

```bash
cp tools/webui/.env.example tools/webui/.env
```

Important settings:
| Variable | Default | Description |
|---|---|---|
| `AI_PROVIDER` | `mock` | `mock` or `claude` |
| `AI_MODEL` | `claude-sonnet-4-6` | AI model ID |
| `AI_API_KEY` | empty | Claude API key; not needed in mock mode |
| `ALLURE_BIN` | `allure` | Allure command path |
| `MAX_CONCURRENT_RUNS` | `1` | Maximum concurrent runs |
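The sketch below illustrates the controlled-execution idea: only whitelisted suites can be launched, bounded by `MAX_CONCURRENT_RUNS`. The route, suite names, and synchronous run are simplifications of what a FastAPI console could do; the real console also streams logs in real time rather than returning a log tail.

```python
# Illustrative whitelisted-run endpoint; not the actual tools/webui implementation.
import os
import subprocess
import threading

from fastapi import FastAPI, HTTPException

app = FastAPI()

SUITE_WHITELIST = {
    "api-smoke": ["python", "-m", "pytest", "API_Automation/cases", "-q", "-n", "0", "-m", "smoke"],
    "ui": ["python", "-m", "pytest", "UI_Automation/Tests", "-q", "-n", "0"],
}
_run_slots = threading.Semaphore(int(os.environ.get("MAX_CONCURRENT_RUNS", "1")))


@app.post("/runs/{suite}")
def start_run(suite: str) -> dict:
    if suite not in SUITE_WHITELIST:
        raise HTTPException(status_code=400, detail=f"suite '{suite}' is not whitelisted")
    if not _run_slots.acquire(blocking=False):
        raise HTTPException(status_code=429, detail="too many concurrent runs")
    try:
        # Blocking for simplicity; a real console would run this in the background
        # and stream the log file instead of returning the tail.
        proc = subprocess.run(SUITE_WHITELIST[suite], capture_output=True, text=True)
    finally:
        _run_slots.release()
    return {"suite": suite, "exit_code": proc.returncode, "log_tail": proc.stdout[-2000:]}
```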
Current sample coverage is organized around Yunlu Mall.

UI test coverage by module:
| Module | Scope | Cases |
|---|---|---|
| Login | phone login, verification code, password login | 15 |
| Home | banner, category navigation, recommended products | 12 |
| Category | category list, filtering, sorting, product cards | 10 |
| Product Detail | image preview, spec selection, add to cart | 18 |
| Cart | quantity update, delete, checkout | 14 |
| Order | submit order, payment, order list | 20 |
| Total | | 89 |

API test coverage by module:
| Module | APIs | Cases |
|---|---|---|
| User | 8 | 32 |
| Product | 12 | 48 |
| Cart | 6 | 24 |
| Order | 10 | 40 |
| Total | 36 | 144 |
Page Object Model keeps UI locators and page operations in page classes, while test cases focus on business flow. When UI changes, the corresponding page class can be updated without rewriting every test case.
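A minimal Page Object sketch along these lines, assuming Appium-Python-Client 2.x, might look like the following; the locator values and the `LoginPage` API are illustrative, not the repository's actual classes.

```python
# Illustrative Page Object: locators live in the page class, the test stays
# at the business-flow level.
from appium.webdriver.common.appiumby import AppiumBy
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


class LoginPage:
    PHONE_INPUT = (AppiumBy.ACCESSIBILITY_ID, "login_phone_input")
    CODE_INPUT = (AppiumBy.ACCESSIBILITY_ID, "login_code_input")
    SUBMIT_BUTTON = (AppiumBy.IOS_PREDICATE, 'name == "login_submit"')

    def __init__(self, driver, timeout: int = 10):
        self.driver = driver
        self.wait = WebDriverWait(driver, timeout)

    def login_with_code(self, phone: str, code: str) -> None:
        self.wait.until(EC.presence_of_element_located(self.PHONE_INPUT)).send_keys(phone)
        self.driver.find_element(*self.CODE_INPUT).send_keys(code)
        self.driver.find_element(*self.SUBMIT_BUTTON).click()


def test_phone_login(driver):
    # When the login screen changes, only LoginPage needs to be updated.
    LoginPage(driver).login_with_code("13800000000", "123456")
```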
Key technology choices:

| Area | Choice | Reason |
|---|---|---|
| UI automation | Appium | mature ecosystem, XCUITest support, cross-platform option |
| Test framework | pytest | fixtures, parametrization, plugins |
| Reports | Allure | visual reports, trends, shareable artifacts |
| Data-driven testing | YAML + pytest parametrization | separates data from test logic |
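The data-driven pattern might be sketched as follows: case data lives in a YAML file and feeds `pytest.mark.parametrize`. The data file path and field names are assumptions; `api_base_url` refers to the override fixture sketched in the Platform Integration section.

```python
# Illustrative data-driven API test; data file path and fields are assumptions.
from pathlib import Path

import pytest
import requests
import yaml

# Example entry in data/login_cases.yml:
#   - name: wrong password
#     payload: {phone: "13800000000", password: "bad"}
#     expected_code: 1001
CASES = yaml.safe_load(Path("data/login_cases.yml").read_text(encoding="utf-8"))


@pytest.mark.parametrize("case", CASES, ids=[c["name"] for c in CASES])
def test_login(case, api_base_url):
    response = requests.post(f"{api_base_url}/api/user/login", json=case["payload"])
    assert response.status_code == 200
    assert response.json()["code"] == case["expected_code"]
```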
Engineering practices:

- Prefer explicit waits over `sleep()`.
- Use multiple locator strategies: Accessibility ID, XPath, Predicate, and Class Chain.
- Retry flaky failures with `pytest-rerunfailures`.
- Capture screenshots and logs on failure (see the hook sketch after this list).
- Keep test data isolated between cases.
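As a sketch of the screenshot-on-failure practice, a `conftest.py` hook along the following lines can attach a screenshot to the Allure report when a UI test fails; it assumes the test requests a `driver` fixture.

```python
# Illustrative failure hook: attaches a screenshot to Allure when a UI test fails.
import allure
import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        driver = item.funcargs.get("driver")
        if driver is not None:
            allure.attach(
                driver.get_screenshot_as_png(),
                name=f"{item.name}-failure",
                attachment_type=allure.attachment_type.PNG,
            )
```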
To validate the project locally, install dependencies:
```bash
pip install -r requirements.txt
```

Run focused validation:

```bash
python -m pytest API_Automation/cases -q
python -m pytest UI_Automation/Tests -q -n 0
python -m pytest Performance -q
```

CI examples live under:

```
.github/
CI/
```
Roadmap:

```mermaid
flowchart LR
    MVP[MVP<br/>API / UI / performance suites]
    Platform[Platform integration<br/>meteortest.yml / Local Agent]
    WebUI[Local demo console<br/>logs / reports / AI Q&A]
    Stable[Stability<br/>fixtures / retry / device handling]
    MVP --> Platform --> WebUI --> Stable
```
MIT License © 2024
Maintained by Meteor. This project records a practical mobile automation engineering workflow, from Page Object design and API layering to CI/CD and platform integration.