
All of our automated tests use the ScalaTest test framework.

The full suite of tests takes a long time to run. You can see the full list in the `main.yml` file, which defines the GitHub Actions workflow that tests branches when they are pushed.

Most of the unit tests are stored in `netlogo-core/src/test`, `netlogo-gui/src/test`, and `parser-core/src/test`, and they are named after the class or feature that they test. The main tests are all tagged as fast, medium, or slow, and can be run like so: `netlogo/Test/fast`. You can also run a specific test suite: `netlogo/testOnly org.nlogo.workspace.AbstractWorkspaceTests`. Or you can run a specific test within a suite by name filter: `netlogo/testOnly org.nlogo.workspace.AbstractWorkspaceTests -- -z AttachModelDir1`.
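For example, at the sbt shell (the same commands can be quoted and passed to `sbt` on the command line):

```
> netlogo/Test/fast                                                 # everything tagged as fast
> netlogo/testOnly org.nlogo.workspace.AbstractWorkspaceTests       # a single suite
> netlogo/testOnly org.nlogo.workspace.AbstractWorkspaceTests -- -z AttachModelDir1   # a single test, by name filter
```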

## Which tests do I run?

How can you be confident that your changes haven't broken anything? Our first line of defense is the compiler. A build that fails to compile is a broken build. Our last line of defense is GitHub Actions, which does some fairly exhaustive checking of the NetLogo internals to ensure that the build is good in corner cases that your tests may not cover. However, you can save a lot of time by testing the appropriate components before pushing to GitHub.

Ideally, the way to gain confidence in your changes is to write a test that fails before you write any code, then write or change the code so that the test passes. This ensures that the test is correct and that any future change which breaks your code will result in a failing test. In general, when cleaning up or making small changes to existing code, you should run `netlogo/Test/fast` locally, along with the appropriate `netlogo/testOnly` tests for the particular unit of code you changed.
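As an illustration, a test-first ScalaTest suite might look like the sketch below. Everything here is hypothetical: the `VersionNumber` object and the suite name are invented for the example, and the existing suites in the codebase may extend different ScalaTest base classes. The shape is the same, though: state the expected behavior as a failing test, then implement until it passes.

```scala
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical code under test, defined inline so the example is self-contained.
object VersionNumber {
  def isValid(s: String): Boolean = s.matches("""\d+\.\d+\.\d+""")
}

// Write these tests first; they fail until isValid is implemented correctly.
class VersionNumberTests extends AnyFunSuite {
  test("isValid accepts three dot-separated numeric components") {
    assert(VersionNumber.isValid("6.4.0"))
  }
  test("isValid rejects a missing component") {
    assert(!VersionNumber.isValid("6.4"))
  }
}
```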

The exceptions to this rule are changes which alter NetLogo primitives or the operation of the NetLogo engine. These have a bit more testing around them to ensure they work correctly. They are tested in three primary ways:

  1. They are tested by a suite of Language tests (`tc`, `tr`, etc.). Language tests greatly ease the task of implementing primitives, even for NetLogo extensions.
  2. They are tested by model checksums, which ensure that each model produces the same output. These can be updated by running `netlogo/dump all` (see the command summary after this list). Note that, unless the models library has been updated, changes to the checksums usually mean that you have broken something. There are exceptions, but you should understand why the checksums are changing and be confident that the change is intended.
  3. They are tested to make sure the generated source for them is suitable for use in automatically generated primitives. This is done by dumping the JVM-generated source of a few benchmark files into the `test/benchdumps` folder. These benchdumps can be updated with `netlogo/dump bench` for the GUI and `headless/dump bench` for headless. You can use `netlogo/testOnly *TestCompileBenchmarks` to check the GUI results and `headless/testOnly *TestCompileBenchmarks` to check headless. Only changes to primitives or the code generator should result in changes to the benchdump files, and such changes should be reviewed carefully to ensure correctness and performance.
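For reference, the checksum and benchdump commands above, as entered at the sbt shell:

```
> netlogo/dump all                            # regenerate the model checksums
> netlogo/dump bench                          # regenerate the GUI benchdumps
> headless/dump bench                         # regenerate the headless benchdumps
> netlogo/testOnly *TestCompileBenchmarks     # check the GUI benchdump results
> headless/testOnly *TestCompileBenchmarks    # check the headless benchdump results
```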