Contribution Guide (draft)

Contributions to the project are very welcome. Following these recommendations will help get your pull request merged faster.

Run Unit Tests on a Local Machine

When you are done with a bug fix or a feature implementation, make sure your changes do not break the existing tests. The following steps will help you run them:

  1. Install the required dependencies with pip install -U -r dev-requirements.txt
  2. Run nosetests --with-coverage --cover-html --cover-html-dir=Coverage_report pywinauto\unittests

Known problems

If your Windows interface language is not English, you may see some failures because several tests still use the language-dependent notepad.exe as the application under test. Reducing the number of such tests would also be a valuable contribution.
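For example (a minimal sketch, assuming the PEP8-style pywinauto API; exact method names can differ slightly between versions), a test can locate Notepad by its window class instead of the localized window title:

from pywinauto.application import Application

# Start Notepad; the window class "Notepad" does not depend on the
# interface language, while the title "Untitled - Notepad" does.
app = Application().start("notepad.exe")
dlg = app.window(class_name="Notepad")
dlg.wait("ready")

dlg.type_keys("hello", with_spaces=True)
app.kill()  # kill_() in older pywinauto releases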

The whole test suite takes 10 to 14 minutes to run (on one Python version). If your changes are very local, you may run a smaller test suite like so:

C:\Python27\python.exe .\pywinauto\unittests\test_application.py
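You can also run a single test class or even one test method using the nose selection syntax (the class and method names below are only illustrative):

C:\Python27\Scripts\nosetests.exe pywinauto\unittests\test_application.py:ApplicationTestCases
C:\Python27\Scripts\nosetests.exe pywinauto\unittests\test_application.py:ApplicationTestCases.test_start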

Set Up Continuous Integration Services for Tests and Code Checks

Every push to your fork can be checked automatically by cloud CI services (AppVeyor runs all the tests on many Python versions, QuantifiedCode and Landscape.io perform code health checks, and codecov.io aggregates code coverage across all Python versions). While you're driving or sleeping, the CI services do the job.

It's easy to set them up for your fork.

  1. Log in to a CI service with your GitHub account and create a CI project for your fork (if desired).
  2. Go to the CI project settings and find the webhook URL.
  3. Go to the GitHub settings of your fork and open the Webhooks and services page.
  4. Add webhook URLs for every CI service (Landscape can be added in the Services section, skipping step 2).

Make sure that your commit doesn't break the tests and doesn't introduce new code style warnings. Every pull request triggers the connected webhooks. The status of the unit tests and code checks can be found under the "Checks" section at the bottom of the pull request.

We still have several flaky unit tests (example). As a result, AppVeyor sometimes reports one or two failed tests for one or two Python versions while everything else is green. Don't worry too much in this case; we can restart the CI build if the failure is clearly not related to your changes. We don't ignore this problem, and serious efforts in this direction would also be highly appreciated.

Write Unit Tests for All New Functionality

Please try to cover all new classes and methods with unit tests. We aim to keep the total coverage at no less than ~95%.
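As a rough sketch (the test class, feature, and window lookups here are illustrative, not an existing pywinauto test), a typical test module follows the standard unittest pattern: start the application under test in setUp, exercise the new functionality, and close the application in tearDown:

import unittest

from pywinauto.application import Application


class NewFeatureTests(unittest.TestCase):

    """Illustrative unit tests for a hypothetical new feature"""

    def setUp(self):
        """Start the application under test"""
        self.app = Application().start("notepad.exe")
        self.dlg = self.app.window(class_name="Notepad")
        self.dlg.wait("ready")

    def tearDown(self):
        """Close the application"""
        self.app.kill()  # kill_() in older pywinauto releases

    def test_main_window_is_visible(self):
        """The main window should exist and be visible"""
        self.assertTrue(self.dlg.exists())
        self.assertTrue(self.dlg.is_visible())


if __name__ == "__main__":
    unittest.main()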

To be done: explain possible problems and best coding practices for the unit tests.