The purpose of this module is to allow tests to run against the installed Opentrons App (an Electron application).
Slices of the tests will be selected as candidates for automation and then run against the Opentrons App executable on Windows, Mac, and Linux, and against various robot configurations (real robot, emulation, or no robot).
- This folder is not plugged into the global Make ecosystem of the Opentrons monorepo. This is intentional: the tools in this folder are independent, will likely be used by only a few people, and are in no way a dependency of any other part of this repository.
- Have Python installed per CONTRIBUTING.md
- Install the Opentrons application on your machine.
- https://opentrons.com/ot-app/
- This could also be done by building the installer on a branch and installing the App.
- Install Chromedriver
  - In the app-testing directory, run the script below, using the version of Electron pinned in the root `package.json`:

    ```shell
    sudo ./ci-tools/mac_get_chromedriver.sh 13.1.8
    ```

  - If you experience `wget: command not found`
    - `brew install wget` and try again
- When you run `chromedriver --version`
  - It should work
  - It should output the below. The chromedriver version must match the Electron version we build into the App.
    - `ChromeDriver 91.0.4472.164 (6c672af59118e1b9f132f26dedbd34fdce3affb1-refs/heads/master@{#883390})`
- In the app-testing directory
  - Create `.env` from `example.env`

    ```shell
    cp example.env .env
    ```

  - Fill in values (if there are secrets)
  - Make sure the paths work on your machine
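The keys you must set live in `example.env`; purely as an illustration, a filled-in `.env` might look like the fragment below (these variable names and paths are hypothetical, not taken from the repository):

```shell
# Hypothetical example only -- copy the real keys from example.env
EXECUTABLE_PATH=/Applications/Opentrons.app/Contents/MacOS/Opentrons
CHROMEDRIVER_PATH=/usr/local/bin/chromedriver
```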
- Install pipenv globally against the Python version you are using in this module.
  - `pip install -U pipenv`
- In the app-testing directory (`make`, `python`, and `pipenv` must be on your PATH)

  ```shell
  make teardown
  make setup
  ```
- Run all tests

  ```shell
  make test
  ```

- Run specific test(s)

  ```shell
  pipenv run python -m pytest -k test_initial_load_no_robot
  ```
- Once there is a critical mass of tests, patterns to abstract will become visible:
- Abstract env variables and config file setup into data structures and functions instead of inline?
- Extend or change the reporting output?
- Mac and Windows github action runners?
- Caching in github action runners?
- Add the option/capability to 'build and install' instead of 'download and install' on runners.
- Define steps for a VM/docker locally for linux runs?
- Define steps for a VM locally for windows runs?
- Better injection of dependencies to relieve import bloat?
- Test case objects describing setup, "test data", test case meta data for tracking?
- Send test execution history to DataDog?
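As one possible starting point for the "test case objects" idea above, metadata could be modeled as a small dataclass. This is a sketch only; every field name here is an assumption, not existing code in this repository:

```python
from dataclasses import dataclass, field


@dataclass
class TestCaseMeta:
    """Hypothetical metadata describing one automated app test."""

    name: str
    robot_config: str  # e.g. "Real Robot", "Emulation", or "No Robot"
    platforms: list[str] = field(
        default_factory=lambda: ["Windows", "Mac", "Linux"]
    )
    tags: list[str] = field(default_factory=list)


# Example: describe the smoke test referenced earlier in this README.
smoke = TestCaseMeta(name="test_initial_load_no_robot", robot_config="No Robot")
```

A structure like this could later drive setup fixtures, reporting, and tracking without changing the tests themselves.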
- Use pytest-xdist to run tests in parallel

  ```shell
  pipenv run pytest -n3
  ```

- Run black, mypy, and flake8

  ```shell
  make check
  ```
- Python 3.10.2; manage Python versions with pyenv and the virtual environment with pipenv
Using the Python REPL, we can launch the app and compose element locator strategies in real time. Then we can execute them, proving they work. This alleviates having to run tests over and over to validate element locator strategies.
From the app-testing directory:

```shell
pipenv run python -i locators.py
```
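A locator strategy is ultimately just a (strategy, value) pair that Selenium's `find_element` accepts. The sketch below shows one way to compose such pairs in the REPL before trying them against the live app; the `data-test` attribute and all names here are assumptions for illustration, not actual App markup:

```python
from typing import NamedTuple


class Locator(NamedTuple):
    """A Selenium-style locator: strategy name plus value."""

    by: str
    value: str


def by_test_id(test_id: str) -> Locator:
    # Compose a CSS locator from a hypothetical data-test attribute.
    return Locator("css selector", f"[data-test='{test_id}']")


# In the REPL, with a live driver, you could then try e.g.:
#   driver.find_element(*by_test_id("robot-toggle"))
robot_toggle = by_test_id("robot-toggle")
```

Because the locator is plain data, it can be tweaked and re-executed against the running app without restarting anything.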