Modeling Assistant


Overview

This repository contains the source code and documentation for an interactive domain modeling assistant developed at McGill University. The Modeling Assistant teaches students the fundamentals of domain modeling by asking them to model the concepts of a problem domain and their relationships given a natural language problem statement. It can be extended to support other model types in the future, e.g., state machines, as described in one of our papers.

This system compares a student solution to one provided by the instructor, detects mistakes using the Mistake Detection System, and returns feedback to the student with the Feedback Mechanism.

A UI mockup of the Modeling Assistant can be found below.

Modeling Assistant UI Mockup

System Architecture and Repository Structure

The Modeling Assistant application has the following structure.

Modeling Assistant Architecture

The concepts of the application are defined using Ecore metamodels (models of models) found in the modelingassistant/model folder. For more information, see the README in that folder.

Learning Corpus

The feedback shown to the student is obtained from a Learning Corpus, which consists of a mistake type hierarchy with multiple feedback levels for each mistake type, ranging from highlighting a UI element to parametrized responses and responses with learning resources such as quizzes.

The overall structure of the Learning Corpus is defined using its metamodel. Its contents are defined using an internal Python/PyEcore DSL in corpusdefinition.py. The createcorpus.py script uses this DSL to generate the default Learning Corpus instance as well as transformations to source code (Python, Java) and human-readable output (Markdown, LaTeX).
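To give a feel for the internal DSL style described above, here is a minimal Python sketch of how a mistake type with progressive feedback levels might be declared and transformed to Markdown. All class names, helper functions, and feedback texts below are illustrative assumptions, not the actual API of corpusdefinition.py or createcorpus.py.

```python
# Hypothetical sketch of an internal Python DSL for corpus entries.
# Names and structure are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Feedback:
    level: int
    text: str = ""          # empty for pure highlighting feedback
    highlight: bool = False

@dataclass
class MistakeType:
    name: str
    feedbacks: list = field(default_factory=list)

def mistake_type(name, *feedbacks):
    """DSL helper: declare a mistake type with ordered feedback levels."""
    return MistakeType(name, list(feedbacks))

missing_class = mistake_type(
    "Missing class",
    Feedback(level=1, highlight=True),  # level 1: highlight a UI element
    Feedback(level=2, text="Check the problem statement for a missing concept."),
    Feedback(level=3, text="A class named ${className} is missing."),  # parametrized
)

def to_markdown(mt: MistakeType) -> str:
    """Tiny stand-in for the Markdown transformation done by createcorpus.py."""
    lines = [f"## {mt.name}"]
    for fb in mt.feedbacks:
        lines.append(f"{fb.level}. {'(highlight)' if fb.highlight else fb.text}")
    return "\n".join(lines)
```

The real script additionally generates Python and Java source as well as LaTeX output from the same corpus definition.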

Learning Corpus Model Transformation

Mistake Detection System

The Mistake Detection System (MDS) maps domain model elements between the student and instructor solutions according to features such as type, name, and structural similarities. It then compares these elements and detects new mistakes in them depending on their context. It also updates the properties of existing mistakes which remain unresolved in the student solution.
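The mapping step can be illustrated with a simplified sketch: pair each student element with the most similar instructor element of the same type, by name similarity. The real MDS is written in Java and also considers structural similarity; this Python fragment, including its threshold and greedy strategy, is an assumption for illustration only.

```python
# Simplified illustration of the MDS mapping step (not the actual algorithm).
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Similarity ratio between two element names, case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def map_elements(student, instructor, threshold=0.6):
    """Greedily map (type, name) student elements to instructor elements."""
    mapping = {}
    unused = list(instructor)
    for s_type, s_name in student:
        candidates = [(t, n) for (t, n) in unused if t == s_type]
        if not candidates:
            continue
        best = max(candidates, key=lambda e: name_similarity(s_name, e[1]))
        if name_similarity(s_name, best[1]) >= threshold:
            mapping[(s_type, s_name)] = best
            unused.remove(best)
    return mapping

# A misspelled "Pasenger" still maps to the instructor's "Passenger".
student_sol = [("class", "Pasenger"), ("class", "Flight")]
instructor_sol = [("class", "Passenger"), ("class", "Flight"), ("class", "Airport")]
print(map_elements(student_sol, instructor_sol))
```

Once elements are mapped, unmapped or mismatched elements become candidates for mistakes such as missing classes or bad naming.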

The MDS supports some legitimate variation in student solutions by allowing for synonyms (e.g., "Booking" is considered equivalent to "Reservation") and design pattern alternatives (e.g., using a 2-item enumeration instead of a boolean attribute, or any of four variations of the Player-Role pattern).
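The synonym handling can be sketched as a simple equivalence check over a synonym table. The "Booking"/"Reservation" pair comes from the text above; the second pair and the lookup structure are hypothetical, and the real MDS may source its synonyms differently.

```python
# Illustrative synonym equivalence check (not the actual MDS implementation).
SYNONYMS = {
    frozenset({"booking", "reservation"}),  # pair mentioned in the README
    frozenset({"customer", "client"}),      # hypothetical extra pair
}

def equivalent_names(a: str, b: str) -> bool:
    """Two names match directly or via the synonym table, case-insensitively."""
    a, b = a.lower(), b.lower()
    return a == b or frozenset({a, b}) in SYNONYMS
```

With such a check in place, a student who names a class "Booking" where the instructor solution has "Reservation" is not flagged for a naming mistake.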

The MDS is implemented in Java and can be run as a Spring Boot REST application using the helper script, which is written in Python to be cross-platform. To build the MDS, first perform the setup instructions described here.

Mistake Detection System

Feedback Mechanism

The Feedback Mechanism is used by the application to provide students with meaningful progressive feedback on their mistakes. This feedback is generated from the Learning Corpus as described above. For a conceptual overview, see the relevant material below.
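The progressive aspect can be sketched as follows: each time the same mistake remains unresolved, the mechanism escalates to the next feedback level, capped at the highest one. The level texts and the counting scheme below are illustrative assumptions, not taken from the actual Learning Corpus.

```python
# Hedged sketch of progressive feedback escalation (illustrative levels).
FEEDBACK_LEVELS = [
    "Highlight the relevant diagram element.",   # level 1: UI highlight
    "Hint: something is missing here.",          # level 2: textual hint
    "A class for this concept is missing from your model.",  # level 3: direct
]

def next_feedback(times_detected: int) -> str:
    """Return the feedback for a mistake seen `times_detected` times (1-based)."""
    level = min(times_detected, len(FEEDBACK_LEVELS)) - 1
    return FEEDBACK_LEVELS[level]
```

A first detection thus yields only a highlight, while a student who repeats the mistake eventually receives an explicit explanation.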

The Feedback Mechanism is implemented in Python and can be run as a Flask REST application using the following command within an activated virtual environment:

python modelingassistant/pythonapp/flaskapp.py

Before running this for the first time, be sure to perform the setup instructions described here.

Other Components

The Modeling UI Frontend (Unity, JavaScript), WebCORE, and TouchCORE are all developed in separate repositories.

People

Project Supervisor: Prof. Gunter Mussbacher (@gmussbacher)

Main Developers: Younes Boubekeur (@YounesB-McGill) and Prabhsimran Singh (@Prabhsimran-Singh)

Contributors: Fatma Alfalahi (@fatmaalfalahi), Jasneet Kaur (@Jasneet-Kaur-heer), Rijul Saini (@Rijul5), Thomas Woodfine-MacPherson (@r2d2117)

Publications and Awards

Theses

Papers

  • Younes Boubekeur, Prabhsimran Singh, and Gunter Mussbacher. 2022. A DSL and Model Transformations to Specify Learning Corpora for Modeling Assistants. In ACM/IEEE 25th International Conference on Model Driven Engineering Languages and Systems (MODELS '22 Companion), October 23–28, 2022, Montréal, QC, Canada. ACM, 8 pages. https://doi.org/10.1145/3550356.3556502
  • Prabhsimran Singh, Younes Boubekeur, and Gunter Mussbacher. 2022. Detecting Mistakes in a Domain Model. In ACM/IEEE 25th International Conference on Model Driven Engineering Languages and Systems (MODELS '22 Companion), October 23–28, 2022, Montréal, QC, Canada. ACM, 10 pages. https://doi.org/10.1145/3550356.3561583
  • Younes Boubekeur and Gunter Mussbacher. 2020. Towards a Better Understanding of Interactions with a Domain Modeling Assistant. In ACM/IEEE 23rd International Conference on Model Driven Engineering Languages and Systems (MODELS '20 Companion), October 18–23, 2020, Virtual Event, Canada. ACM, 10 pages. https://doi.org/10.1145/3417990.3418742. Video presentation

Awards

  • 🏆 Best 3 Minute Thesis (3MT) Presentation: Younes Boubekeur and Gunter Mussbacher. A Learning Corpus for a Domain Modeling Assistant to Teach Requirements Modeling. Poster, Graduate Student Event, 27th International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ '21), Essen, Germany, April 2021.

Related Projects


Notes:

  • This document reuses material from the publications listed above with the permission of the authors.
  • In December 2022, the repo history was rewritten to remove sensitive data before making it public, which affects commit hashes, pull request diffs, and repo tags. If you require this information, please contact the Project Supervisor to determine whether it can be made available to you.
