
How to do Catrobat language tests

Oskar W edited this page Feb 7, 2022 · 1 revision

Catrobat Language Tests (CLTs)

Since Pocket Code lets the user create programs in our own programming language, we also want to test the different bricks and functions for correct behavior. For this we use the so-called Catrobat Language Tests: these tests are written and executed within the app and are an important part of our quality assurance. CLTs are files with the ".catrobat" ending and can only be read by the Pocket Code application.

Creating, Running and debugging CLTs

In order to create CLTs, we need to tick the Testing extension in the settings of the app. This activates a dedicated Testing brick category with different bricks such as the asserts or "Finish tests", which are explained later on.

(Screenshot: the Testing brick category)


For each test we agreed on a few requirements you have to follow:

* Tests must have a "Finish stage" brick at the end (in most cases right after the assert brick).
* Test names have to start with "test", e.g. "testSinusFunction.catrobat".
* The project name must be the same as the file name, e.g. "testSinusFunction" & "testSinusFunction.catrobat".
* Tests in the repository reside in Catroid/catroid/src/androidTest/assets/catrobatTests/.
* The test runner is Catroid/catroid/src/androidTest/java/org/catrobat/catroid/catrobattestrunner/CatrobatTestRunner.kt.
* CatrobatTestRunnerTest.kt verifies the correct behavior of the testing bricks and of the test runner itself; those tests are located in assets/catrobatTestRunnerTests.
* The other testing bricks, "Single tap at" and "Wait until other scripts have stopped", do pretty much what their names suggest and have no side effects.
* If you use multiple "Wait until other scripts have stopped" bricks in different scripts, you will create a deadlock (since both will wait for the other to end).
* "Wait" bricks should be used very sparingly, since they slow down test execution and can create race conditions when used in multiple scripts. Most of the time, broadcasts and the "Wait until other scripts have stopped" brick are the more suitable way to solve such problems.
* Create a new CLT for each test case instead of using multiple asserts.
* Depending on the use case, use different asserts.
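As a rough illustration of the naming rules above in plain Kotlin (this is a hypothetical helper for this page, not code from the Catroid repository):

```kotlin
// Hypothetical sketch of the CLT naming rules; not part of the actual test runner.
fun isValidCltName(fileName: String, projectName: String): Boolean {
    if (!fileName.endsWith(".catrobat")) return false        // CLTs are ".catrobat" files
    if (!fileName.startsWith("test")) return false           // names have to start with "test"
    return projectName == fileName.removeSuffix(".catrobat") // project name matches file name
}

fun main() {
    check(isValidCltName("testSinusFunction.catrobat", "testSinusFunction"))
    check(!isValidCltName("sinusFunction.catrobat", "sinusFunction"))   // missing "test" prefix
    check(!isValidCltName("testSinusFunction.catrobat", "testSinus"))   // name mismatch
    println("naming rules hold")
}
```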

Clean Code!

We at Catrobat follow the clean code principles, which also apply to the CLTs, so remember:

"Single Concept per Test: [...] we want to test a single concept in each test function. We don't want long test functions that go testing one miscellaneous thing after another." (p. 131, "Clean Code: A Handbook of Agile Software Craftsmanship", Robert C. Martin)

Assert Bricks

All assert bricks may stop the CLT. If the test does not succeed, a toast visualizes the mismatch (see "On fail" in the brick descriptions below).

Assert:

(Screenshot: the Assert brick)


The basic assert compares two parameters that are passed into it.

Fails if: the provided parameters mismatch
On fail: prints the provided and the expected value

Assert lists

(Screenshot: the Assert lists brick)


The list assert compares two lists that are passed into it.

Fails if: the provided lists mismatch or differ in size
On fail: prints all failing positions in the list

Tip: comparing multiple elements at once is better than chaining multiple comparisons inside a single assert (e.g. test1 = 1 and test2 = 1 ... → [test1, test2] == [1, 1])
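The tip above can be illustrated in plain Kotlin (not Catrobat bricks): a chained boolean collapses everything to true/false, while comparing whole lists can report each failing position, like the "Assert lists" brick does.

```kotlin
// Plain-Kotlin illustration, not the brick implementation.
fun main() {
    val actual = listOf(1.0, 2.0, 3.0)
    val expected = listOf(1.0, 5.0, 3.0)

    // Chained comparison: only tells you "something is wrong".
    val chained = actual[0] == expected[0] &&
                  actual[1] == expected[1] &&
                  actual[2] == expected[2]
    println("chained result: $chained")

    // List comparison: can report every failing position.
    val failing = actual.indices.filter { actual[it] != expected[it] }
    println("failing positions: $failing")
}
```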

For each item of ... stored in variable with the same name

The parameterized assert stores, for every iteration, the current element of each provided list in a newly created variable with the same name as that list, and compares the provided parameter with the current element of the expected list.

(Screenshot: the "For each item of ... stored in variable with the same name" brick)


Fails if: there is an iteration where the comparison fails
On fail: prints the failing and the succeeding test cases together with the values of each iteration
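The behavior described above can be sketched in plain Kotlin (a hedged sketch of the semantics, not the real brick implementation): iterate over the paired lists, evaluate a formula on each element, and collect every failing iteration instead of stopping at the first one.

```kotlin
// Hypothetical sketch of the parameterized assert's semantics.
fun parameterizedAssert(
    inputs: List<Double>,
    expected: List<Double>,
    formula: (Double) -> Double
): List<Int> {
    require(inputs.size == expected.size) { "lists differ in size" }
    // Collect all failing iterations, matching
    // "prints the failing and the succeeding test cases".
    return inputs.indices.filter { i -> formula(inputs[i]) != expected[i] }
}

fun main() {
    val failures = parameterizedAssert(
        inputs = listOf(0.0, 1.0, 2.0),
        expected = listOf(0.0, 1.0, 5.0),  // last expectation is wrong on purpose
        formula = { x -> x * x }           // "square" as the formula under test
    )
    println("failing iterations: $failures")
}
```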

Tip: this simplifies repetitive test cases and provides more information than the other two asserts

Tip: it can also be used to assert lists

Tip: define a CSV file and use it to load each list; this makes it easier to add new test cases and to verify the CLT itself
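One way the CSV tip could look in plain Kotlin (illustrative helper names, not part of the Catroid code base): keep the test cases in one CSV and split it into an input list and an expected list.

```kotlin
// Sketch of the CSV tip; "loadCases" is a hypothetical helper.
fun loadCases(csv: String): Pair<List<Double>, List<Double>> {
    val rows = csv.trim().lines().drop(1)  // drop the "input,expected" header row
    val inputs = rows.map { it.split(",")[0].trim().toDouble() }
    val expected = rows.map { it.split(",")[1].trim().toDouble() }
    return inputs to expected
}

fun main() {
    val csv = """
        input,expected
        0,0
        1,1
        2,4
    """.trimIndent()
    val (inputs, expected) = loadCases(csv)
    check(inputs == listOf(0.0, 1.0, 2.0))
    check(expected == listOf(0.0, 1.0, 4.0))
    println("loaded ${inputs.size} test cases")
}
```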

Creating a small example test:

  1. Create a new project in Pocket Code (code style guideline: all test project names start with "test", e.g. "testSinusFunction.catrobat").
  2. Go to the script view of the background sprite of the new project.
  3. Add an "Assert equals" brick to the "When scene starts" script (from the "Testing" category in the add-brick fragment), as well as a "Finish tests" brick.

screenshot


  4. Use the formula editor to set the "actual" and "expected" fields to the values that are to be compared.

screenshot


  5. Run the project. If we get a little toast with "Success" as feedback, we now know that the square root function actually works on our local device.

screenshot


Exporting the test to run in Android studio

On the device in Pocket Code

  1. Enter the Overflow menu of the project
  2. Choose Project options
  3. Scroll down and press on Export project
  4. Your file explorer will open automatically
  5. Choose your desired save location and press Save.

In Android Studio

  1. Move the ".catrobat" file into a suitable package in catroid/src/androidTest/assets/catrobatTests/.

Tip: If you are using an emulator, you can find the exported project in the Device File Explorer at /storage/0/*file location*.

  2. Make sure your test is named correctly directly in Pocket Code, since renaming the ".catrobat" file manually after exporting may result in failing test cases.

Running tests in Android Studio

To run all ".catrobat" files in Catroid/catroid/src/androidTest/assets/catrobatTests/ in Android Studio you can run Catroid/catroid/src/androidTest/java/org/catrobat/catroid/catrobattestrunner/CatrobatTestRunner.kt.

Loading existing tests into PocketCode

  1. Move the ".catrobat" file to anywhere on your phone's SD card or internal storage.
  2. Open Pocket Code.
  3. Go to the project list activity.
  4. Click on the overflow menu and choose Import project.
  5. Select the catrobat file you want to import.
  6. The imported project can be executed normally.