# Scalability Evaluation
The scalability evaluation of the Model Compiler is performed using the following workflow.
You can run `com.incquerylabs.emdw.cpp.performance.test` as a JUnit Plug-in test.
You only need to create a properties file and the models in the project root (or anywhere else) and set the attribute values correctly in the tests. You can use the sample properties file in the samples folder (with a corrected UML model path).
You can run the product from Eclipse, but you need to add the following program arguments: `<config.file.path> <run.index> <target.folder.path> <relative.root.path>`
Help for the arguments:

- `<config.file.path>`: the path of the properties file that contains the necessary parameters for the run
- `<run.index>`: an index used to differentiate individual runs
- `<target.folder.path>`: the path of the folder where the log and the result JSON will be generated
- `<relative.root.path>` (optional): the root to use if the config file refers to the UML model with a relative path whose root differs from the current folder
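
As an illustration, a run configuration might pass arguments like the following (all paths below are hypothetical placeholders, not files from the project):

```
samples/performance.properties 1 results .
```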
First of all, you need to export the RCP app. To do this, open the performance_test.product file and use the Eclipse Product export wizard under Exporting on the Overview page.
After exporting, you can run the RCP app.
On Windows you can start the app from the command window with the following command (you need the full path of eclipse.exe if you are not in its folder):

```
eclipse.exe <config.file.path> <run.index> <target.folder.path> <relative.root.path>
```

The meaning of the arguments is the same as in the previous chapter. If you want to see the console output of the running program, append an extra `-console` argument at the end of the command.
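
For example, a full invocation with console output might look like this (the paths are illustrative placeholders):

```
eclipse.exe C:\emdw\performance.properties 1 C:\emdw\results C:\emdw -console
```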
For the result diagrams of the EATF and PingPong executions, see the results page.
You can configure the performance test through a properties file, which should contain the path of the UML model used for the tests, the parameters of the model multiplication, and the parameters of the model modifications performed during the test.
For details see the properties file wiki page.
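
A minimal sketch of what such a file might contain, using hypothetical key names (the actual keys are documented on the properties file wiki page):

```properties
# Illustrative keys only -- consult the properties file wiki page for the real ones
uml.model.path=models/pingpong.uml
# model multiplication: copies of top-level components and of their contained elements
multiply.component.count=4
multiply.element.count=2
# model modification: how many low- and high-level modifications to apply per run
modification.low.count=10
modification.high.count=2
```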
The performance test uses scenarios for scalability evaluation, which consist of multiple phases. The execution time and memory usage of all phases are measured during the test. The results can be viewed with the MONDO-SAM Shiny application.
For details of the scenarios and test phases, see the benchmark description wiki page.
During the initialization phase of the performance test, the specified UML model is loaded and multiplied according to the provided properties file.
The multiplication is executed on two levels:
- The top-level components are copied to create more components
- The elements contained by the top-level components are copied to create larger components
Note: the multiplication of model elements can lead to errors; for details see the model multiplicator wiki page.
In the modification phase of the performance test, model elements are added, removed, or modified according to the configuration provided in the properties file.
There are low-level modifications affecting only sub-component elements (e.g. attributes or transitions) and high-level modifications affecting container elements like components and packages. For details see the [model modification wiki page](https://github.com/IncQueryLabs/EMDW-MC/wiki/Scalability-Evaluation---Model-modification).
Use the following command from the cmd/terminal:

```
python full_path_of_mondo-sam\reporting\report.py --source full_path_of_results_csv --output full_path_of_output_folder --config full_path_of_config_json
```
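
For example, assuming mondo-sam has been cloned to C:\mondo-sam and the results CSV and config JSON live under C:\emdw (all paths and file names here are illustrative):

```
python C:\mondo-sam\reporting\report.py --source C:\emdw\results\results.csv --output C:\emdw\report --config C:\emdw\config.json
```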
You can find more information about mondo-sam, the config JSON, and Shiny here.
The following animation shows the evaluation workflow step-by-step to better represent the difference between the initial compilation and the incremental part: