This repository is a resource for testing system dynamics software and translation tools. It provides a standard set of simple test cases in various formats, along with a proposed canonical output for each test.

Folders within the `tests` directory contain models that exercise a minimal amount of functionality (such as lookup tables) for doing unit-style testing on translation and simulation pathways.

Folders within the `samples` directory contain complete models that can be used for integration tests, benchmarking, and demos.
Each model folder contains:

- a single model concept, with its canonical output (named `output.csv` or `output.tab`) containing (at least) the stock values over the standard timeseries in the model files
- Model files that produce said output (`.mdl`, `.xmile`, `.stmx` (Stella), PySD, etc.)
- A text file entitled `README.md` containing:
  - The purpose of the test model (what functionality it exercises)
  - The version of software that the canonical output was originally prepared by
  - The author of the test model and contact info
  - Submission date
  - Screenshots of model construction programs (optional)

For a demonstration, see the teacup example.
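As a rough sketch, a test-case folder might look like the following (file names other than `README.md` and `output.csv` are illustrative):

```
tests/teacup/
├── README.md        # purpose, software version, author, submission date
├── output.csv       # canonical output: stock values at each timestep
├── teacup.mdl       # Vensim model file (illustrative name)
├── teacup.xmile     # XMILE model file (illustrative name)
└── teacup.stmx      # Stella v10 model file (illustrative name)
```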
All members of the SD community are invited to contribute to this repository. To do so, create a fork, add your contribution using one of the following methods, add yourself to the AUTHORS file, then submit a pull request.
To request that a specific test be added, create an issue on the issues page of this repository.
Many of these cases have model files for some modeling formats but not others. To add a model file in another format, check that your model output replicates the 'canonical example' to reasonable fidelity, preferably using identical variable names, and add an entry to the contributions table in the directory's `README.md` file.
To add a new case, in your local clone add a folder in either the `tests` or `samples` directory as appropriate, copy an example `README.md` file from another folder, and edit it to suit your needs.
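As a rough sketch of that workflow (folder and branch names here are illustrative, and it assumes you have already forked the repository on GitHub):

```
$ git clone https://github.com/<your-username>/test-models.git
$ cd test-models
$ git checkout -b add-my-new-case
$ mkdir tests/my-new-case
$ cp tests/teacup/README.md tests/my-new-case/
$ # add model files and output.csv, edit the README, add yourself to AUTHORS
$ git add tests/my-new-case AUTHORS
$ git commit -m "Add my-new-case test model"
$ git push origin add-my-new-case   # then open a pull request on GitHub
```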
To simplify tools and scripts around model validation, canonical output files should be UTF-8 encoded tab-separated or comma-separated files. Each row represents model results at a single timestep (rather than each row representing a single variable's results for every timestep).
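For example, a minimal `output.csv` might look like the following (variable names and values are illustrative, loosely based on the teacup example):

```
Time,Teacup Temperature,Room Temperature
0,180,70
0.125,178.625,70
0.25,177.267,70
```

Note that constants (here, the room temperature) are repeated at every timestep rather than appearing only once.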
The following processes (for STELLA/iThink and for Vensim, respectively) ensure that output files end up in the format expected by tools that interact with this repository.
- Open the model in STELLA or iThink
- Run the model (choose Run from the Run menu)
- From the Edit menu, choose Export Data
- In the `Export Data` modal dialog, choose `One Time` as the Export Type
- In the `Export Data Source` section, make sure both the `Export all model variables` and `Every DT - Export every intermediate value during the run` options are selected
- For `Export Destination`, choose Browse and name the file `output.csv`, and make sure the left-most checkbox below Browse is selected. You may have to create an empty file named `output.csv` manually beforehand in your operating system's file browser. Ensure that of the two `Data` styles (columnar on the left, horizontal on the right) the left-most (columnar results) is selected; this is the default.
- Click `OK` at the bottom right to perform the export
- Open the model in Vensim
- Run the model
- From the `Model` menu, choose `Export Dataset...`
- Choose the run you just performed (by default it is the file named `Current.vdf`)
- At the top of the dialog, in the text-box next to the `Export To` button, change the name of the export file from `Current` (or whatever your run name was) to `output`
- Under `Export As`, choose `tab`
- Under `Time Running`, choose `down`
- Click `OK` at the bottom left to perform the export
- Open the resulting `output.tab` in Excel or another spreadsheet program, and make sure that the values of constant terms are propagated down the column for each timestep.
There are two scripts in the top level of this repo to aid in debugging: `compare.py` and `regression-test.py`.

`compare.py` expects paths to two CSV or TSV files, and compares the results in the two files, with some amount of fuzziness around floating-point comparisons.
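A typical invocation is `./compare.py path/to/output.csv path/to/your_output.tab`. Conceptually, the fuzzy comparison works something like the following sketch (this illustrates the idea, not the actual implementation in `compare.py`):

```python
import csv
import math
import sys

def read_rows(path):
    """Read a CSV or TSV results file into a list of dicts keyed by column name."""
    delimiter = "\t" if path.endswith((".tab", ".tsv")) else ","
    with open(path, newline="") as f:
        return list(csv.DictReader(f, delimiter=delimiter))

def compare(path_a, path_b, rel_tol=1e-3):
    """Return True if every shared (numeric) column matches row-by-row within rel_tol."""
    rows_a, rows_b = read_rows(path_a), read_rows(path_b)
    ok = True
    for i, (a, b) in enumerate(zip(rows_a, rows_b)):
        for name in set(a) & set(b):
            va, vb = float(a[name]), float(b[name])
            if not math.isclose(va, vb, rel_tol=rel_tol, abs_tol=1e-6):
                print(f"row {i}, column {name!r}: {va} != {vb}")
                ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if compare(sys.argv[1], sys.argv[2]) else 1)
```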
`regression-test.py` can be used to compare a specific modeling tool's output against the accepted, canonical output for a given model (which is stored in `output.csv` in all the subdirectories of this repository). It can be run against external tools, with the current working directory as the root of this project:
$ ./regression-test.py ~/src/libsd/mdl .
$ ./regression-test.py ~/src/sd.js/mdl.js .
And it can also be run from outside of this project, for example when this `test-models` repo is included as a git submodule in another project:
$ test/test-models/regression-test.py ./mdl test/test-models
The main requirement is that the given command (`mdl` and `mdl.js` above) accept the path to a model as an argument, and output model results to stdout in either TSV or CSV format (a minimal sketch of such a driver appears after the examples below). If your tool requires additional command-line args, you can specify them with quoting:
$ ./regression-test.py "~/path/to/tool --arg1 --arg2" .
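To illustrate the expected interface, here is a hypothetical driver script built on PySD (assuming PySD is installed; it is not part of this repository) that takes a model path as its only argument and writes CSV results to stdout:

```python
#!/usr/bin/env python3
"""Hypothetical driver showing the interface regression-test.py expects:
take a model path as the only argument and print CSV results to stdout."""
import sys

import pysd  # assumes PySD is installed; not part of this repository

def main():
    model_path = sys.argv[1]
    model = pysd.read_xmile(model_path)  # use pysd.read_vensim() for .mdl files
    results = model.run()                # pandas DataFrame indexed by time
    results.to_csv(sys.stdout, index_label="Time")

if __name__ == "__main__":
    main()
```

Such a script could then be passed to `regression-test.py` in place of `mdl` or `mdl.js` above.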
And if you have a tool that simulates Vensim models or Stella v10 models rather than XMILE, you can change the model-file suffix:
# test Vensim model files
$ ./regression-test.py --ext mdl ~/path/to/tool .
# test Stella v10 XMILE-variant model files
$ ./regression-test.py --ext stmx ~/path/to/tool .