The goal of this project is to collect benchmark problems for different models of automata, transducers, and related logics, providing standardized benchmark sets for research in these areas.
The file format for each model is described in the README file of the corresponding folder. We also provide parsers for LTL and tree automata, and we are working on building more.
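As an illustration of how a benchmark file might be consumed, here is a minimal Python sketch that reads a file listing one regular expression per line (for example, one derived from the RegexLib benchmarks) and reports which entries Python's `re` module can compile. The one-regex-per-line layout and the comment convention are assumptions made for this example, not the project's actual formats; consult the README in each folder for the real format.

```python
# Hypothetical sketch: validate a file of regular expressions, one per line.
# The file layout is an assumption for illustration only.
import re
import sys

def load_regexes(path):
    """Yield (line_number, pattern) pairs, skipping blank lines and '#' comments."""
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            pattern = line.strip()
            if pattern and not pattern.startswith("#"):
                yield lineno, pattern

def main(path):
    ok, failed = 0, 0
    for lineno, pattern in load_regexes(path):
        try:
            re.compile(pattern)
            ok += 1
        except re.error as err:
            failed += 1
            print(f"line {lineno}: could not compile: {err}", file=sys.stderr)
    print(f"{ok} patterns compiled, {failed} failed")

if __name__ == "__main__":
    main(sys.argv[1])
```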
AutomataArk contains benchmarks from the following sources.
- Alaska TACAS08 experiments [http://lit2.ulb.ac.be/alaska/experiments.html]
- Lukas Holik's page [http://www.fit.vutbr.cz/~holik/]
- Limi CAV15 experiments [https://github.com/thorstent/Limi]
- LibVata library [http://www.fit.vutbr.cz/research/groups/verifit/tools/libvata/]
- RegexLib [http://www.regexlib.com/]
- Snort [https://www.snort.org/]
- Becchi et al. "A workload for evaluating deep packet inspection architectures" [http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4636093&tag=1]
- Strand [http://www.fit.vutbr.cz/research/groups/verifit/tools/dWiNA/eval/strand.html]
- Regsy [http://lara.epfl.ch/w/regsy]
- dWiNA [http://www.fit.vutbr.cz/research/groups/verifit/tools/dWiNA/index.html]
You are encouraged to contribute more benchmark problems or parsers, or to help in any other way. Please contact me at loris@cs.wisc.edu or open a pull request. If you own some of these benchmarks and do not want me to share them, please contact me.
Many smaller DFAs, NFAs, and regular expressions can be found at [https://github.com/AutomataTutor/automatatutor-data]. We do not include them in this project.