CI Tests
Juju has many Continuous Integration (CI) tests that run whenever new code is checked in. These are long-running integration tests that do actual deploys to clouds, to ensure that juju works in real-world environments.
Local Environment Setup
These instructions assume you're running on Ubuntu; they may differ on other platforms.
First we need to get the code. The CI tests are Python scripts hosted on Launchpad, so we'll use bzr to check them out:

```
sudo apt-get install bzr
```

Now we can get the CI tests:

```
bzr branch lp:juju-ci-tools
cd juju-ci-tools
bzr branch lp:juju-ci-tools/repository
```

Once we have the code, we can run the Makefile to install all the dependencies you need to run the tests:

```
make install-deps
```

juju-ci-tools is where the actual tests live; the repository branch is a collection of charms used by some of the tests.
The Scripts
The scripts under juju-ci-tools are generally divided into three categories:
- The CI tests themselves, which are run by Jenkins and report pass/fail. These scripts have the prefix "assess", e.g. assess_recovery.py.
- Unit tests of the code in the CI tests and the helper scripts. These scripts live in the 'tests' subdirectory and have the prefix "test", e.g. tests/test_assess_recovery.py.
- Helper scripts used by the CI tests. These are generally any file without one of the aforementioned prefixes.
Running Unit Tests (tests of the CI testing code)
The unit tests are written using Python's unittest module.
To run them all, run make test. To run the tests for a particular test module, run python -m unittest <module_name>.
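For example, from the juju-ci-tools directory:

```
make test                                      # run all the unit tests
python -m unittest tests.test_assess_recovery  # run only tests/test_assess_recovery.py
```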
Running CI Tests
The CI tests are just normal Python scripts. Their exit status indicates success or failure (0 for success, nonzero for failure). You can just run a script and it will tell you the arguments it expects. In general, the tests expect that you have a working juju binary and an environments.yaml file with usable environments. Most of the scripts ask for the path to your local juju binary and the name of an environment in your environments.yaml; they use these to bootstrap the indicated environment and run their tests.
If the test needs to deploy a test charm, you'll need to set the JUJU_REPOSITORY environment variable to the path where you checked out lp:juju-ci-tools/repository.
Help can be printed for any test script, for example `./assess_log_rotation.py --help`. To run the assess_log_rotation CI test using the local environment on your machine, the incantation looks like this (note that the ./logs directory must exist, hence the mkdir):

```
~/juju-ci-tools$ mkdir logs; JUJU_REPOSITORY=./repository ./assess_log_rotation.py local $GOPATH/bin/ ./logs local_temp machine
```
This will bootstrap an environment, deploy a test charm, call some actions on the charm, and then assess the results.
That's it. You've just run your first Juju CI test.
Creating a New CI Test
If this is your first time, consider asking one of the QA team to pair-program on it with you.
Start by making a copy of template_assess.py.tmpl, and don't forget unit tests!
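For orientation, here is a rough sketch of the overall shape of an assess script. The real starting point is template_assess.py.tmpl; the names below (assess_example, the comments about jujupy) are purely illustrative and not the actual template.

```python
#!/usr/bin/env python
"""Illustrative skeleton of a CI assess script (not the real template)."""
import logging
import sys


def assess_example(client):
    # Hypothetical: exercise the feature under test through the given
    # juju client and raise an exception if the behaviour is wrong.
    raise NotImplementedError


def main(argv=None):
    logging.basicConfig(level=logging.INFO)
    # A real script parses its arguments, uses jujupy helpers to bootstrap
    # the requested environment with the juju binary under test, runs its
    # assess_* function, and tears the environment down again.
    return 0  # exit 0 on success, nonzero on failure


if __name__ == '__main__':
    sys.exit(main())
```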
Run make lint early and often. (You may need to do sudo apt-get install python-flake8). If you forget, you can run autopep8 to fix certain issues. Please use --ignore E24,E226,E123 with autopep8. Code that's been hand-written to follow PEP8 is generally more readable than code which has been automatically reformatted after the fact. By running make lint often, you'll absorb the style and write nice PEP8-compliant code.
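In command form, the lint workflow described above looks like this (the filename is only an illustration):

```
sudo apt-get install python-flake8                         # if make lint complains that flake8 is missing
make lint
autopep8 --ignore E24,E226,E123 -i assess_my_feature.py    # hypothetical file; only if you need auto-fixes
```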
Please avoid creating diffs longer than 400 lines. If you are writing a new test, that may mean creating it as a series of branches. You may find bzr-pipeline to be a useful tool for managing a series of branches.
If your tests require new charms, please write them in Python.
If you have questions or need help, Aaron Bentley from Juju QA has volunteered as a contact person (aaron.bentley@canonical.com or abentley on IRC).
Test Requirements
By using template_assess.py.tmpl as a base, many of these requirements will be satisfied automatically.
("must" and "should" are used in the RFC2199 sense.)
Compatibility
Tests should be compatible with all versions of Juju under development, including those that are in maintenance and only receiving bugfixes.
Exit status
Tests must exit with 0 on success, nonzero on failure.
Juju binary
Tests must accept a path to the juju binary under test. A path including the binary name (e.g. mydir/bin/juju) is expected. (Some older tests use a path to the directory, but this is deprecated.)
Environment name
Tests that use an environment must accept an environment name to use, so that they can be run on different substrates by specifying different environments.
Runtime environment name
Tests that use an environment must permit a temporary runtime environment name to be supplied, so that multiple tests using the same substrate can be run at the same time.
Test mode
Tests must run juju with test-mode: True by default, so that they do not artificially inflate statistics. This is handled automatically by jujupy.temp_bootstrap_env.
agent-url
Tests should allow an agent url to be specified, so that a person manually testing a Juju QA revision build does not need to update the agent-url in their config in order to use the testing streams.
upload-tools
Tests should allow --upload-tools to be specified, so that a person manually testing a Juju QA or personal build can do so without needing streams.
series
Tests whose results could vary by series should allow default-series to be specified.
Environment variables
Tests should depend only on standard JUJU environment variables such as JUJU_HOME and JUJU_REPOSITORY. They should not depend on feature flags. Feature flags should only be provided to the juju versions that require them. Ideally, only operations that require feature flags should have them. This means that test code should not supply feature flags. The only code that should be aware of feature flags should be jujupy.EnvJujuClient and its subclasses.
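Putting several of the requirements above together, a test's argument handling might look roughly like the sketch below. The exact positional arguments and option names in real scripts come from template_assess.py.tmpl and jujupy, so the names here are illustrative only.

```python
from argparse import ArgumentParser


def parse_args(argv=None):
    """Illustrative argument parser covering the requirements above."""
    parser = ArgumentParser(description='Assess an example feature.')
    # Environment name, juju binary path, log directory and temporary
    # runtime environment name as positional arguments.
    parser.add_argument('env', help='Name of the environment to test against.')
    parser.add_argument('juju_bin',
                        help='Path to the juju binary under test, including '
                             'the binary name (e.g. mydir/bin/juju).')
    parser.add_argument('logs', help='Directory to store logs in.')
    parser.add_argument('temp_env_name',
                        help='Temporary runtime environment name, so several '
                             'tests can share one substrate.')
    # Optional knobs for manually testing QA or personal builds.
    parser.add_argument('--agent-url', help='URL of the testing agent streams.')
    parser.add_argument('--upload-tools', action='store_true',
                        help='Upload local tools instead of using streams.')
    parser.add_argument('--series', help='Default series to deploy with.')
    return parser.parse_args(argv)
```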
Coding guidelines
The Juju QA general coding guidelines are here: https://docs.google.com/document/d/1dL3xdw_UwpH6GpXJIvlwqm9MZ8dY5yABKlfpFw2vLmA/edit
Landing your code
Push your code to Launchpad, create a merge proposal, and ask a member of Juju QA to review it. When you get their review, remember to check for inline comments. When you have addressed all the review comments, push your code to Launchpad and add a comment to the merge proposal.
Once the code has been approved, a member of Juju QA can land it.
Integrating your test into CI testing
This will not happen automatically. The Jenkins config must be updated. Generally you can ask your reviewer to do this.
Your test will probably be implemented as a non-voting test initially, until it is clear that the test is reliable enough that its failure should curse a build. If the QA team determines that the test is not reliable enough, they may ask you to update it.