Galasa is an open source deep integration test framework for teams looking to give more power to their testers. What makes Galasa a deep integration test framework is its ability to support tests that cross system boundaries and reach into remote layers inaccessible to other test tools.
Galasa has been architected to ensure that the routine tasks of writing and executing tests are straightforward. The more complex parts of tests (such as provisioning) are abstracted into other components that can be written by experts and easily distributed to the team.
Which of these problems do you recognize?
If you've ever struggled to implement automated testing across a complex technology stack, you might recognize some of the same symptoms we identified during our design process. The following sections describe these challenges and how Galasa solves them:
Little or no automated testing
Many organizations have little or no automated testing. Where automated tests do exist, they are often considered unreliable because of poor data, unstable test environments, or timing clashes with other people's work.
Galasa provides the capability to run reliable, repeatable tests and minimizes conflicts over the availability of test environments. When run in containers, Galasa tests offer horizontal scalability and resilience. Multiple logically isolated tests can run in parallel, so improvements in rigor and quality accumulate as your test catalog grows.
Too much manual intervention
Running and re-running manual tests is laborious, time-consuming, and not the best use of a tester's skills.
With Galasa you can automate and schedule these repetitive regression tests, freeing testers to spend their time designing test cases that are more likely to find important defects.
Once written, a Galasa test is available 24x7 for reuse.
Lack of a unified picture
Manual tests are often split across teams and reported separately, with no single, consistent view of the test plan.
With Galasa you can store related tests within a shared test catalog, from which tests can be automatically selected to run for any given change set. Automated regression test suites can be created for new software versions so you can run a specified set of tests for automated baselining of a new environment installation, such as a hardware migration.
Unreliable test data
Test data is often in a state of flux, which breaks existing tests and makes data difficult to snapshot and to keep consistent.
Galasa enables you to provision your own test data from scratch or find valid test data within an existing data lake. Test data is locked within the Galasa framework whilst in use, so that it cannot be corrupted by other test runs.
You can integrate Galasa tests with your existing tooling, allowing you to share data between tools within the same test.
Scattered test artifacts
Test artifacts are stored in many different repositories, making it time-consuming and difficult to locate the information you need to root out the cause of a failure.
Galasa automatically stores all test artifacts in a single, central repository, making diagnostics quicker and easier. You can also debug tests using a local instance of Galasa, so you can examine every line of code.
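As a rough sketch of a local debug run (assuming the galasactl CLI is installed and configured; the OBR coordinates and test-class name below are illustrative placeholders, not artifacts that exist in your environment), a single test class can be launched in a JVM on your own machine:

```shell
# Run one test class locally, streaming the run log to the console (--log -).
# The Maven OBR coordinates and the bundle/class pair are placeholders.
galasactl runs submit local --log - \
    --obr mvn:dev.galasa.example.banking/dev.galasa.example.banking.obr/0.0.1-SNAPSHOT/obr \
    --class dev.galasa.example.banking.payee/dev.galasa.example.banking.payee.TestPayee
```

Because the test runs locally, you can attach a debugger and step through every line of code.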
Tests take too long to write
Writing tests by hand involves a significant amount of repetitive effort, which means tests can take too long to write and are hard to understand and maintain.
Galasa makes tests quicker to write and easier to maintain by extracting the boilerplate code out of the tests. Import the components you need within your test code to access the abstracted functionality, so you benefit from the expertise of the person who wrote them and the productivity of their simple interface.
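As a minimal sketch (the class name, method name, and log message are illustrative, not from any real test catalog), a Galasa test is a plain Java class that declares the managers it needs as annotated fields; the framework injects them before the test methods run, so no provisioning boilerplate appears in the test itself:

```java
package dev.galasa.example;

import org.apache.commons.logging.Log;

import dev.galasa.Test;
import dev.galasa.core.manager.CoreManager;
import dev.galasa.core.manager.ICoreManager;
import dev.galasa.core.manager.Logger;

@Test // marks this class as a Galasa test class
public class AccountTransferTest {

    // Injected by the framework before any test method runs.
    @CoreManager
    public ICoreManager coreManager;

    @Logger
    public Log logger;

    @Test // each annotated method is an individual test
    public void transferIsRecorded() throws Exception {
        logger.info("Starting run " + coreManager.getRunName());
        // ... drive the application under test via other managers ...
    }
}
```

The imported managers carry the environment-specific expertise, so the test body stays focused on the scenario being verified.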
Sign-off delays
Test results are often stored in spreadsheets and manually approved by product owners before changes are promoted. This makes it difficult to see which tests have been run, and the manual intervention required as part of the sign-off process can delay delivery.
Galasa's dashboard will integrate all of your test results in one place, making reporting and reviewing between test phases easy and consistent.
Slow delivery
Manual testing and unreliable data add up to a development cycle that can't be sped up without losing confidence and increasing risk. Application changes can take many months to reach production, and emergency fixes are often promoted with only a subset of suitable tests exercised.
Galasa tests are not limited to testing mainframe-specific applications, and can extend to encompass entire families of hybrid cloud applications. Most manual tests can be automated with Galasa, helping you to deliver changes to your core applications quickly and successfully.