This repository provides an auditable and comprehensive analysis of the data from the Model Checking Contest (MCC), an annual competition held as part of the Petri Nets conference. The goal of this project is to offer an accessible and transparent analysis of the published results, enabling researchers, practitioners, and developers to better understand the performance and capabilities of various model checking tools and techniques.
By analyzing the data from the MCC, we aim to highlight the strengths and weaknesses of different model checking tools and identify areas where further research and development are needed. This in-depth analysis can help drive the development of new tools and techniques, ultimately advancing the state-of-the-art in model checking.
The repository contains code, data, and analysis results, all available through an easy-to-navigate website. It presents the analysis in the form of tables, charts, and Venn diagrams, offering a clear and concise view of the performance of participating model checking tools across various categories, such as State Space, Global Properties, Reachability, CTL, LTL, and Upper Bounds.
This project is an independent effort by an MCC participant who is not involved in the contest's organization; it aims to contribute to the community's understanding and to promote transparency in the evaluation of model checking tools.
To view the analysis results, visit the published website at https://yanntm.github.io/MCC-analysis/index.html.
If you want to run the analysis locally, a GitHub Actions workflow is provided in the `.github` folder. The workflow ensures reproducibility, and you can browse its `build.yml` file to see the dependencies and setup steps for a standard Ubuntu machine.
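As a rough guide, a minimal dependency setup on Ubuntu might look like the sketch below. The package choices are assumptions inferred from the R and Python scripts and Jinja2 templates listed further down; `build.yml` remains the authoritative reference.

```sh
# Hypothetical setup; check .github/workflows/build.yml for the exact dependencies.
sudo apt-get update
sudo apt-get install -y r-base python3 python3-pip
pip3 install jinja2   # the templates/ directory uses Jinja2 templates
```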
With the dependencies available, you can run the `runAnalysis.sh` script to build the final static HTML pages in the `website/` folder, which will contain the results of the analysis.
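Assuming the repository URL matches the published GitHub Pages site above, a typical local run could look like this sketch:

```sh
# Repository URL inferred from the GitHub Pages address; adjust if it differs.
git clone https://github.com/yanntm/MCC-analysis.git
cd MCC-analysis
./runAnalysis.sh
# Open the generated site locally (Linux):
xdg-open website/index.html
```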
The repository contains the following files and directories:
- `analyzeAnswers.R`: R script for processing and analyzing the raw data from the Model Checking Contest.
- `buildFinalPages.py`: Python script for generating the final HTML pages with the analysis results and Venn diagrams.
- `buildPages.py`: Python script for converting raw CSV files to nice-looking HTML tables.
- `runAnalysis.sh`: Shell script for running the entire analysis pipeline (sketched after this list).
- `templates/`: Directory containing the Jinja2 HTML templates and CSS file used for generating the final website.
  - `category.html`: Template for the individual category pages.
  - `index.html`: Template for the main index page.
  - `styles.css`: CSS file for styling the website.
- `website/`: Directory where the final generated website files will be stored after running the analysis.
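Based on these descriptions, the pipeline driven by `runAnalysis.sh` plausibly chains the scripts as sketched below; consult the script itself for the exact steps:

```sh
# Hypothetical ordering; see runAnalysis.sh for the actual pipeline.
Rscript analyzeAnswers.R      # process the raw MCC results into CSV summaries
python3 buildPages.py         # convert the raw CSV files into HTML tables
python3 buildFinalPages.py    # assemble final pages (index, categories, Venn diagrams)
# Output lands in website/
```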
Feel free to open an issue if you have any suggestions, questions, or problems related to the project.
This project is licensed under the GNU General Public License v3.0 or later.
- Model Checking Contest (MCC)
- Yann Thierry-Mieg, LIP6, Sorbonne Université & CNRS
- ChatGPT, an AI language model by OpenAI