Do you have a bunch of scripts that test your applications and services at runtime?
Do you wish to have a unified interface for running those tests instead of using the command line?
Do you require evidence of test runs?
TR enables you to do all of this by providing a unified way of running tests and storing results through a convenient web UI and REST API!
Every change to your tests and scripts is kept in a local Git repository. This provides full transparency, ensuring that for every test result it is clear exactly what was tested, creating a chain of evidence for your test results.
A test is a collection of one or more tasks.
      +------+
      | Test |
      +------+
          |
  +-------+-------+
  |       |       |
  v       v       v
Task1   Task2   Task3
A task defines the path to the executable (the actual test). TR runs all tasks one after another while capturing their output.
A task has two states, success or failure. However, there are different reasons why a task can fail:
- The task ran longer than the defined timeout and was stopped
- The task exited with a non-zero exit code
If one task fails, the whole test will be marked as failed.
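The task semantics above can be sketched as follows. This is a hypothetical illustration in Python, not TR's actual implementation (TR itself is a Java web application):

```python
import subprocess

def run_task(command, timeout_seconds):
    """Run one task (a command list), capturing its output.
    Returns (success, output)."""
    try:
        proc = subprocess.run(
            command, capture_output=True, text=True, timeout=timeout_seconds
        )
    except subprocess.TimeoutExpired as exc:
        # Failure reason 1: the task ran longer than the defined timeout
        return False, (exc.stdout or "")
    # Failure reason 2: the task exited with a non-zero exit code
    return proc.returncode == 0, proc.stdout + proc.stderr

def run_test(tasks, timeout_seconds=60):
    """Tasks run one after another; the test fails if any task fails."""
    for command in tasks:
        success, _output = run_task(command, timeout_seconds)
        if not success:
            return False
    return True
```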
A test in which all tasks succeeded will look like this:
Every task may have a success and/or a failure hook. A hook is a path to a file that will be executed when the task succeeds or fails, respectively.
Note: When using groups, the last hook of each kind (in test order) is used. For example, say a group consists of three tests: "1", "2", "3".
"1" defines a success hook
"2" defines a success hook and a failure hook
"3" defines a failure hook
This means the group's success hook comes from "2" and its failure hook from "3".
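This "last hook in order wins" rule can be illustrated with a small sketch (hypothetical Python, not TR's code):

```python
def resolve_group_hooks(tests):
    """tests: ordered list of dicts, each optionally containing a
    'success_hook' and/or 'failure_hook' path. For each hook kind,
    the last hook defined in test order wins."""
    group_hooks = {"success_hook": None, "failure_hook": None}
    for test in tests:
        for kind in group_hooks:
            if test.get(kind) is not None:
                group_hooks[kind] = test[kind]
    return group_hooks
```

Applied to the example above, test "2" supplies the success hook and test "3" the failure hook.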
For every task you can provide a path pointing to an arbitrary file. When the task finishes running, the file at said path is copied and linked into the result and thereby archived by TestRunner. This is helpful if your task does not print all results to stdout/stderr but instead writes them, for example, to an HTML report file.
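The archiving behavior amounts to copying the file into the run's result storage. A minimal sketch (the path layout here is an assumption, not TR's actual one):

```python
import pathlib
import shutil

def archive_result_file(task_file_path, results_dir, run_id):
    """Copy a task's report file into the archived results of a run."""
    source = pathlib.Path(task_file_path)
    destination_dir = pathlib.Path(results_dir) / str(run_id)
    destination_dir.mkdir(parents=True, exist_ok=True)
    destination = destination_dir / source.name
    shutil.copy2(source, destination)  # copy2 preserves timestamps
    return destination
```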
Test Groups allow you to run multiple tests in one go, resulting in a combined result.
                +-------+
                | Group |
                +---+---+
                    |
         +----------+----------+
         |                     |
   +-----+-----+         +-----+-----+
   | Mail Test |         | Auth Test |
   +-----+-----+         +-----+-----+
         |                     |
  +------+------+       +------+------+
  |      |      |       |      |      |
  v      v      v       v      v      v
Task1  Task2  Task3   Auth1  Auth2  Auth3
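Conceptually, running a group just runs each member test and aggregates the outcomes. A minimal sketch, assuming each task is modeled as a callable returning True on success:

```python
def run_group(tests):
    """tests: dict mapping test name -> list of task callables.
    Returns (group_success, per_test_results)."""
    results = {}
    for name, tasks in tests.items():
        # all() short-circuits: a test fails as soon as one task fails
        results[name] = all(task() for task in tasks)
    # The combined group result succeeds only if every test succeeded
    return all(results.values()), results
```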
In order to structure your tests, you may create categories that will visually group them in your web UI.
Different roles exist in TR:
- "r" (READ) can only view results
- "rx" (READEXECUTE) can additionally run the defined tests
- "rwx" (READWRITEEXECUTE) can additionally edit (write) tests and test groups
- "a" (ADMIN) can additionally administer users
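The roles are cumulative, so a permission check could be sketched like this (role IDs from above; the permission names are chosen for illustration):

```python
# Role IDs as defined by TR; the permission sets mirror the cumulative
# model described above (permission names are illustrative).
ROLE_PERMISSIONS = {
    "r":   {"read"},
    "rx":  {"read", "execute"},
    "rwx": {"read", "write", "execute"},
    "a":   {"read", "write", "execute", "admin"},
}

def is_allowed(role_id, action):
    """Return True if the given role may perform the action."""
    return action in ROLE_PERMISSIONS.get(role_id, set())
```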
Under Windows, TR creates a folder called "TR" in your %APPDATA% folder. Under Linux, TR creates the folder "/var/lib/TR". This is later referenced as the "base path". All configuration is persisted as JSON files. Manual editing of these files is discouraged; please use the web UI or the API.
You can build your own WAR file using Maven, or you can download the latest binary from the project's GitHub page under the "Releases" tab. Then deploy the WAR into your Tomcat (v10+) webapps folder. TR should now be up and running at localhost:8080/TR/frontend/index.html - When running TR for the first time, a user "admin" with the password "letmein" is created for you - please change the password as soon as possible.
I recommend adding a web server acting as a reverse proxy in front of your Tomcat. This is an example configuration for lighttpd:
$HTTP["host"] == "testrunner.your.domain.com" {
    proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => "8080" ) ) )
    setenv.add-environment = ( "fqdn" => "true" )
    url.redirect = ( "^/$" => "/TR/frontend/index.html" )

    $HTTP["scheme"] == "http" {
        $HTTP["host"] =~ ".*" {
            url.redirect = ( ".*" => "https://%0$0" )
        }
    }

    $SERVER["socket"] == ":443" {
        ssl.engine             = "enable"
        ssl.pemfile            = "/etc/lighttpd/certs/tr.pem"
        ssl.honor-cipher-order = "enable"
        ssl.cipher-list        = "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"
        ssl.use-compression    = "disable"
        ssl.use-sslv2          = "disable"
        ssl.use-sslv3          = "disable"
    }
}
Extensive logs are saved in the basePath/logs folder. The latest logs are also visible in the web UI to all administrators:
You can find the Swagger documentation at https://github.com/ozzi-/TestRunner/blob/master/WebContent/swagger.json or, at runtime, at http://127.0.0.1:8080/testrunner/api/swagger.json
Authentication is performed by providing a sessionIdentifier. To generate the identifier, perform a login by sending your username and password as a POST request to /user/login:
{
"username": "string",
"password": "string"
}
The response will contain a sessionIdentifier as well as further metadata:
{
"username": "string",
"sessionIdentifier": "string",
"created": 0,
"valid": true,
"roleID": "string"
}
Now you can perform requests by providing the X-TR-Session-ID header with the value of the sessionIdentifier.
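As a sketch, a client-side login and authenticated request could look like this in Python. The base URL is an assumption (adjust it to your deployment path); the endpoint, body, and header names are taken from the description above:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8080/TR/api"  # assumed deployment path

def build_login_request(username, password):
    # POST /user/login with a JSON body, as described above
    body = json.dumps({"username": username, "password": password}).encode()
    return urllib.request.Request(
        BASE_URL + "/user/login",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def authed_headers(session_identifier):
    # Subsequent requests carry the session in the X-TR-Session-ID header
    return {"X-TR-Session-ID": session_identifier}

# To actually log in (requires a running TR instance):
# with urllib.request.urlopen(build_login_request("admin", "letmein")) as resp:
#     session_id = json.load(resp)["sessionIdentifier"]
```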
After deploying TestRunner, you may follow this guide to understand basic usage. First of all, log in with the default admin user (admin:letmein).
Let's start by changing the admin password and creating a second user.
Navigate to the admin settings by clicking on the cogwheel icon in the upper right corner:
The admin settings allow you to change your own password, administer users, and view the TestRunner log.
When editing a user, you may change their password or role, delete their open session, or delete the user entirely.
In order to create a test, you will first need a script. If your current user has write privileges, you may create a script as follows:
There you may either upload a file:
Or you may use the integrated editor to type in your script:
In order to better organize scripts, you can create folder structures:
After creating the first script, it is time to create a test case.
In the test create form, provide all required fields as well as add one or more tasks (= executing a specific script with parameters):
You may always edit the test later:
Now you are ready to run your tests:
After the test has completed, you will see its results:
Furthermore, when navigating to a test, you will be able to see all previous test runs as well as their results.
Custom runs allow you to add further command line arguments and/or tag the test run.
In the test overview, you will then find the tag labels:
Categories help you to organize your tests.
Click on the cogwheel next to the tests:
First, create a new category:
Now add a test to the newly created category:
The settings will now show you the current category groupings:
On the dashboard, all tests are now grouped by their categories:
Groups allow you to combine multiple tests into one test group. This not only runs multiple tests with one click, but also aggregates their results.
For this, click on the cogwheel next to the groups:
First create a new group:
Now you may start adding tests to the group:
You can find the group on the dashboard:
TestRunner keeps track of everything that happens; click on "Change History":
Here you will see all changes (called commits, as this feature is based on Git):
Clicking on a commit will show you the diff:
When navigating to a specific test, you may see only the changes of the test itself:
You are able to revert any change to a test or script at any time, using the "Revert to this commit" button.