Robot Framework acceptance tests
Acceptance tests for Robot Framework are naturally created using Robot Framework itself. This folder contains all those acceptance tests and other test data they need.
Directory contents

- run.py: A script for running acceptance tests. See the Running acceptance tests section for further instructions.
- genrunner.py: Script to generate acceptance test runners based on test data files. Usage: atest/genrunner.py atest/testdata/path/data.robot [atest/robot/path/runner.robot]
- robot: Contains the actual acceptance test cases. See the Test data section for details.
- resources: Resources needed by the acceptance tests in the robot folder.
- testdata: Contains test cases that are run by the actual acceptance tests in the robot folder. See the Test data section for details.
- testresources: Contains resources needed by test cases in the testdata folder. Some of these resources are also used by unit tests.
- results: The place for test execution results. This directory is generated when acceptance tests are executed. It is in .gitignore and can safely be deleted at any time.
Running acceptance tests

Robot Framework's acceptance tests are executed using the run.py script. It has two mandatory arguments: the interpreter to use when running the tests and the path to the tests to be executed. It also accepts the same command line options as Robot Framework itself. The script itself should always be executed with Python. Run it with --help or see the documentation in its source code for more information.
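For instance, the available arguments and options can be listed without running any tests by using the script's --help output mentioned above:

    atest/run.py --help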
To run all the acceptance tests, execute the atest/robot folder entirely using the selected interpreter:

    atest/run.py python atest/robot
    atest/run.py jython atest/robot
The commands above will execute all tests, but you typically want to skip the Telnet tests and tests requiring manual interaction. These tests are marked with the no-ci tag and can be easily excluded:

    atest/run.py python --exclude no-ci atest/robot
A sub test suite can be executed simply by running the folder or file containing it. On modern machines running all acceptance tests ought to take less than ten minutes with Python, but with Jython the execution time is considerably longer. This is partly because Jython is somewhat slower than Python in general, but the main reason is that the acceptance tests start the JVM dozens of times and that always takes a few seconds.
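For example, a sub suite could be run roughly like this; the paths below are only illustrative, and any folder or file under atest/robot works the same way:

    atest/run.py python atest/robot/standard_libraries
    atest/run.py python atest/robot/standard_libraries/builtin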
Before a release, tests should be executed separately using Python, Jython, IronPython and PyPy to verify interoperability with all supported interpreters. Tests should also be run using different interpreter versions (when applicable) and on different operating systems.
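A rough sketch of what that might look like, assuming the interpreters are available on the PATH under these names:

    atest/run.py python atest/robot
    atest/run.py jython atest/robot
    atest/run.py ipy atest/robot
    atest/run.py pypy atest/robot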
The results of the test execution are written into an interpreter-specific directory under the atest/results directory. Temporary outputs created during the execution are created under the system temporary directory.
Test data

The test data is divided into two parts: the test data part (the atest/testdata folder) and the running part (the atest/robot folder). The test data side contains test cases for different features. The running side contains the actual acceptance test cases that run the test cases on the test data side and verify their results.
The basic mechanism to verify that a test case on the test data side is executed as expected is setting the expected status and possible error message in its documentation. By default tests are expected to pass, but having FAIL (this and subsequent markers are case sensitive) in the documentation changes the expectation. The text after the FAIL marker is the expected error message, which, by default, must match the actual error exactly. If the expected error starts with REGEXP:, GLOB: or STARTS:, it is considered to be a regular expression or glob pattern matching the actual error, or to contain only the beginning of the error, respectively. All other details can be tested as well, but that logic is on the running side.
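As a minimal sketch (not copied from the actual test data), a file on the test data side could declare its expectations along these lines:

    *** Test Cases ***
    Passing By Default
        Log    Expected to pass because the documentation has no markers.

    Expected Failure
        [Documentation]    FAIL This exact error message is expected.
        Fail    This exact error message is expected.

    Expected Failure With Prefix
        [Documentation]    FAIL STARTS: Something went wrong
        Fail    Something went wrong: the rest of the message may vary.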
Test tags

The tests on the running side (atest/robot) contain tags that are used to include or exclude them based on the platform and required dependencies. Selecting tests based on the platform is done automatically by the run.py script, but additional selection can be done by the user to avoid running tests with preconditions that are not met.
- manual: Require manual interaction from the user. Used with Dialogs library tests.
- telnet: Require a telnet server with a test account running at localhost. See Telnet tests for details.
- no-ci: Tests that are not executed in continuous integration. Contains all tests with the manual or telnet tag.
- require-yaml, require-docutils, require-pygments, require-lxml, require-screenshot, require-tools.jar: Require the specified Python module or some other external tool to be installed. See Preconditions for details and exclude like --exclude require-lxml if needed.
- require-windows, require-jython, ...: Tests that require a certain operating system or Python interpreter. Excluded automatically outside these platforms.
- no-windows, no-osx, no-jython, no-ipy, ...: Tests to be excluded on certain operating systems or Python interpreters. Excluded automatically on these platforms.
    # Exclude tests requiring manual interaction or a running telnet server.
    atest/run.py python --exclude no-ci atest/robot

    # Same as the above but also exclude tests requiring docutils and lxml.
    atest/run.py python -e no-ci -e require-docutils -e require-lxml atest/robot

    # Run only tests related to Java integration. This is considerably faster
    # than running all tests on Jython.
    atest/run.py jython --include require-jython atest/robot
Preconditions

Certain Robot Framework features require optional external modules or tools to be installed, and naturally tests related to these features require the same modules or tools as well. This section lists the preconditions needed to run all tests successfully. See Test tags for instructions on how to avoid running certain tests if all preconditions are not met.
These Python modules need to be installed:
- docutils is needed with tests related to parsing test data in reStructuredText format and with Libdoc tests for documentation in reST format.
- Pygments is needed by Libdoc tests for syntax highlighting.
- PyYAML is required with tests related to YAML variable files.
- lxml is needed with XML library tests. Not compatible with Jython or IronPython.
It is possible to install the above modules individually using pip or by using the provided requirements.txt file:
    # Install individually
    pip install 'docutils>=0.9'
    pip install pygments
    pip install pyyaml
    pip install lxml

    # Install using requirements.txt
    pip install -r atest/requirements.txt
Notice that the lxml module may require compilation on Linux, which in turn may require installing development headers of lxml dependencies. Alternatively lxml can be installed using a system package manager, for example with sudo apt-get install python-lxml.
Because lxml is not compatible with Jython or IronPython, tests requiring it are excluded automatically when using these interpreters.
Screenshot library tests require a platform-dependent module or tool that can take screenshots. See the Screenshot library documentation for details.
Tests that read library documentation from Java source files require tools.jar, which is part of the standard JDK installation, to be in CLASSPATH. In addition to setting CLASSPATH explicitly, it is possible to copy tools.jar into the ext-lib directory in the project root, in which case CLASSPATH is set automatically.
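A rough sketch of the two approaches, assuming a typical JDK layout where tools.jar lives under $JAVA_HOME/lib:

    # Option 1: set CLASSPATH explicitly.
    export CLASSPATH=$JAVA_HOME/lib/tools.jar

    # Option 2: copy tools.jar into ext-lib in the project root.
    cp "$JAVA_HOME/lib/tools.jar" ext-lib/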
Telnet tests

Running telnet tests requires some extra setup. Instructions on how to run them can be found in testdata/standard_libraries/telnet/README.rst. If you do not want to run an unprotected telnet server on your machine, you can always skip these tests by excluding tests with the telnet tag.
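For example, assuming the telnet tag listed in the Test tags section above:

    atest/run.py python --exclude telnet atest/robot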
License and copyright

All content in the atest folder is under the following copyright:
    Copyright 2008-2015 Nokia Networks
    Copyright 2016-     Robot Framework Foundation

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.