
ArgoUML SPL Benchmark

A feature location benchmark for single systems and for families of systems. We include the ground-truth, different scenarios and a program to calculate the feature location metrics.

Read the description of the benchmark in: J. Martinez, N. Ordoñez, X. Tërnava, T. Ziadi, J. Aponte, E. Figueiredo and M. T. Valente. Feature Location Benchmark with ArgoUML SPL. 22nd International Systems and Software Product Line Conference (SPLC 2018), Challenges Track. Gothenburg, Sweden, 10-14 Sept 2018.

Video explaining the benchmark:

SPLC Variability challenges website with current solutions:

Abstract: Feature location is a traceability recovery activity to identify the implementation elements associated with a characteristic of a system. Besides its relevance for software maintenance of a single system, feature location in a collection of systems has received a lot of attention as a first step to re-engineer system variants (created through clone-and-own) into a Software Product Line (SPL). In this context, the objective is to unambiguously identify the boundaries of a feature inside a family of systems, to later create reusable assets from these implementation elements. Among all the case studies in the SPL literature, variants derived from ArgoUML SPL stand out as the most used ones. However, the use of different settings, or the omission of relevant information (e.g., the exact configurations of the variants or the way the metrics are calculated), makes it difficult to reproduce or benchmark the different feature location techniques even if the same ArgoUML SPL is used. With the objective to foster research on feature location, we provide a set of common scenarios using ArgoUML SPL and a set of utils to obtain metrics based on the results of existing and novel feature location techniques.


  1. Download this repository and unzip it somewhere on your computer.

  2. We will call the benchmark functionality directly from the Java source code of the benchmark, so first you need an Integrated Development Environment (IDE) where you can run Java source code and Apache Ant scripts. Do not worry if you are not an expert with them: you will not need to modify anything, only launch programs, and we will show you how. We will explain the steps using Eclipse, but you can use any IDE supporting Java and Ant if you know how to do it.

Download Eclipse (we tested with Eclipse Oxygen and Eclipse Neon):

Then select the Java Developers package. This package has everything you need; if you select another package, or use an Eclipse installation you already have on your computer, you might run into problems.

You will also need to have Java installed on your computer (at least Java 1.6). You can check this by opening the Command Prompt (cmd) and entering “java -version” (as shown in the next image). You need to have a Java Development Kit (JDK) installed, and not just the Java Runtime Environment (JRE); otherwise you will get errors during the build (for example "tools.jar not found"). Also, check that an environment variable called JAVA_HOME is defined in your system and points to the JDK path. For example, JAVA_HOME=D:\jdk1.8
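If you prefer to verify the Java requirement programmatically, the check can be sketched in Java. This is a minimal illustration; the class and method names are ours and are not part of the benchmark:

```java
// Minimal sketch: check the running Java version and the JAVA_HOME variable.
public class EnvCheck {

    // Returns true if the version string ("1.6.0_45", "1.8.0_292", "11.0.2", ...)
    // denotes at least Java 1.6. Handles both the old "1.x" and the new scheme.
    static boolean isAtLeastJava6(String version) {
        String[] parts = version.split("\\.");
        int major = Integer.parseInt(parts[0]);
        if (major == 1) {              // old "1.x" numbering scheme
            return Integer.parseInt(parts[1]) >= 6;
        }
        return major >= 6;             // modern numbering (9, 10, 11, ...)
    }

    public static void main(String[] args) {
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("JAVA_HOME    = " + System.getenv("JAVA_HOME"));
        System.out.println("Java >= 1.6  = "
                + isAtLeastJava6(System.getProperty("java.version")));
    }
}
```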

  1. Run Eclipse and import the projects. In the main menu: File -> Import -> General -> Existing Projects into Workspace.

In "Select root directory", click "Browse" and select the folder containing the unzipped content of this repository.

From the list, select exactly these projects (selecting others can cause problems; do not import other nested projects that Eclipse might suggest):

  • argouml-app
  • argouml-build
  • argouml-core-diagrams-sequence2
  • argouml-core-infra
  • argouml-core-model
  • argouml-core-model-euml
  • argouml-core-model-mdr
  • argouml-core-tools
  • ArgoUMLSPLBenchmark
  • org.splevo.casestudy.argoumlspl.generator

Select the option "Copy projects into workspace".

The eight projects starting with argouml- form the ArgoUML SPL code base. The project org.splevo.casestudy.argoumlspl.generator is a helper to create variants using SPLEvo. The project ArgoUMLSPLBenchmark is the benchmark itself, which you will need to use.

At the end of these steps, your Eclipse workspace should look like this:

Do not worry about the errors in the argouml- projects; the benchmark will work with them.

Getting ready

Generating the scenarios

In the ArgoUMLSPLBenchmark project, there is a folder called “scenarios” containing the predefined scenarios of the benchmark. This step will create the variants associated with each of these scenarios. Each scenario has a “configs” folder with config files, each listing the features of one configuration. To start, open the “ScenarioOriginalVariant” folder, then right click the build.xml file and click Run As -> Ant Build

The console will show the progress of the variant generation and will tell you when it has finished.

Once the build has finished, refresh the folder of the scenario (right click the folder and select Refresh, or select the folder and press F5). You will now have a folder called "variants" with a set of folders, each containing one variant.

In the case of this scenario, there is only one variant.

Repeat this process for each scenario in the “scenarios” folder. Notice that some scenarios take a while; for example, ScenarioTraditionalVariants, with 10 variants, might take around half an hour. You only need to do this once per scenario. At a minimum, create ScenarioTraditionalVariants, as it is needed for the example presented in this document.

Some scenarios have more than one build file. For example, the scenario that generates all possible variants contains 6 parts, and you will need to launch all 6. We separated it into parts to avoid memory problems.

Troubleshooting (Out of memory error): You might get an out of memory error after generating several variants. In our tests on a laptop, it happened after more than 170 generated variants.

[...] java.lang.OutOfMemoryError: Compressed class space
Total time: 503 minutes 42 seconds

We suggest launching the build.xml of one scenario and then restarting Eclipse. Also, if a build fails, remove the "variants" folder of that scenario to make sure no incomplete variants remain from the failed build.

Basic information

This section explains what you need to do to use your feature location technique in this benchmark. There are some important folders in the ArgoUMLSPLBenchmark project that you need to know:

  • featuresInfo: It contains a features.txt file with the feature ids, the feature names (separated by commas, as there are synonyms or alternative namings) and the feature descriptions. Id, names and description are separated by the symbol “;”. You might want to use this information to create queries for feature location techniques based on information retrieval. This folder also contains featureModel.txt, a simple textual representation of the feature model of ArgoUML SPL using the feature ids.

  • scenarios: The benchmark's predefined scenarios; you should provide results for each of them. In each scenario you have the “variants” folder with the source code of each variant (now that you have created the scenarios) and a “configs” folder with information about the features present in each variant. You might want to use the information in the configs folder for intersection-based techniques.

  • groundTruth: A set of 24 txt files containing the traces of the features, feature combinations and feature negations of ArgoUML SPL. Obviously, you cannot use this ground-truth information inside your feature location technique.

  • yourResults: The folder where you need to put your results (either manually or automatically, as you prefer). The results must be in the same format as the ground truth.

  • yourResultsMetrics: Once you put your results in the “yourResults” folder, you can launch the metrics calculation program to get a csv file in this “yourResultsMetrics” folder. We will show how in the next sections.
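The features.txt format described above (id, then comma-separated names, then description, with “;” as the field separator) can be parsed with a few lines of Java. A minimal sketch; the class is ours and the example line is illustrative, not copied from the actual file:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of parsing one line of featuresInfo/features.txt, assuming the
// layout described above: id;name1,name2,...;description
public class FeatureLineParser {

    static String id(String line) {
        return line.split(";")[0].trim();
    }

    static List<String> names(String line) {
        List<String> result = new ArrayList<>();
        for (String name : line.split(";")[1].split(",")) {
            result.add(name.trim());   // synonyms / alternative namings
        }
        return result;
    }

    static String description(String line) {
        return line.split(";")[2].trim();
    }

    public static void main(String[] args) {
        // Illustrative line, not taken verbatim from the benchmark.
        String line = "COGNITIVE;Cognitive Support,Critics;Support for design critics";
        System.out.println(id(line) + " -> " + names(line));
    }
}
```

The feature names and descriptions extracted this way can serve directly as queries for information-retrieval-based techniques.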

A complete example showing the whole process

We have prepared an example of a feature location technique to show you the process. Before starting the example, create the scenario ScenarioTraditionalVariants as described in the instructions to build the scenarios. The very basic technique that we provide as an example will output its results in the “yourResults” folder, as the benchmark expects. Remember that you can do this automatically, or you can just put the results there manually. The technique is in the ArgoUMLSPLBenchmark project, in the src/techniqueExample package. Right click the Java class there -> Run As -> Java Application

The console will show the progress. The example technique also calculates and reports its execution time; remember to measure the time of your own feature location technique too.

Then refresh “yourResults” folder (select the folder and press F5).

For your own feature location technique, the results will be the ones it produces. In fact, your feature location technique does not need to be written in Java; you can use whatever you want and then put the results there.

Then, we launch the program to calculate the metrics. It is in the src/metricsCalculation package. Right click the Java file -> Run As -> Java Application.

You can see the progress in the console.

Then, refresh “yourResultsMetrics” and you will have this csv file with all the metrics.
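Conceptually, the reported metrics compare the trace ids in your results against those in the ground truth as sets, which boils down to precision and recall. The sketch below illustrates that idea only; it is not the benchmark's actual metricsCalculation code:

```java
import java.util.HashSet;
import java.util.Set;

// Conceptual sketch of feature-location metrics: precision and recall of a
// retrieved set of trace ids against the ground-truth set for one feature.
// NOT the benchmark's own implementation, just the underlying idea.
public class MetricsSketch {

    // Fraction of retrieved traces that are correct (true positives / retrieved).
    static double precision(Set<String> retrieved, Set<String> groundTruth) {
        if (retrieved.isEmpty()) return 0.0;
        Set<String> truePositives = new HashSet<>(retrieved);
        truePositives.retainAll(groundTruth);
        return (double) truePositives.size() / retrieved.size();
    }

    // Fraction of ground-truth traces that were found (true positives / ground truth).
    static double recall(Set<String> retrieved, Set<String> groundTruth) {
        if (groundTruth.isEmpty()) return 0.0;
        Set<String> truePositives = new HashSet<>(retrieved);
        truePositives.retainAll(groundTruth);
        return (double) truePositives.size() / groundTruth.size();
    }

    public static void main(String[] args) {
        Set<String> groundTruth = new HashSet<>(Set.of("A", "B", "C", "D"));
        Set<String> retrieved = new HashSet<>(Set.of("A", "B", "X"));
        System.out.println("precision = " + precision(retrieved, groundTruth));
        System.out.println("recall    = " + recall(retrieved, groundTruth));
    }
}
```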

In the console output you also have a gnuplot script that you can copy and paste into gnuplot (tested with gnuplot 5.2).

Press Enter and you will get the graph below. You can use it as an example of how to graphically report the metrics of a given scenario.

Utils for feature location techniques' developers

In the src/utils package of the ArgoUMLSPLBenchmark project, there are some Util classes that might be useful if you are using Java to develop your feature location technique. You can still use the benchmark without them; we present them just in case you want to use them.

FeatureUtils is helpful to get the information about features and configurations that you can use for each scenario.

TraceIdUtils can be helpful to create the ids of the traces needed for the “yourResults” files. If you want to use it, note that this class expects you to use the JDT Java parser, as the parameter types belong to JDT.

Finally, FileUtils has standard helper methods to manipulate files, write to files, etc.

Launching an ArgoUML variant (if you want to do it for some reason)

If for some reason you want to launch a specific variant: in Eclipse, File -> Import -> Existing Projects into Workspace and select the folder of the generated variant. You will now have this variant as an Eclipse project. Then, right click the file ArgoUML.launch that exists in the variant and click Run As -> ArgoUML. The ArgoUML tool will be executed. Alternatively, you can right click the Main class of ArgoUML, located in src/org/argouml/application/, and click Run As -> Java Application

Troubleshooting (ArgoUML variant does not launch): If it does not run and you see an error like the following in the Eclipse console

[...] java.lang.Error: Unresolved compilation problem.

you should set the Java compiler compatibility to Java 1.6. For this, right click the imported project -> Properties -> Java Compiler and set the Compiler compliance level to 1.6.

Technical documentation about the benchmark (not needed for feature location techniques’ developers)

The Ground-Truth extractor has a main method used to create the txt files in the groundTruth folder by parsing the ArgoUML SPL source code. It launches some JUnit tests that are in the extractor.tests package.

Utils for creating the scenarios: one class has a main method used to define the random scenarios; another has a main method used to create the build files of each scenario based on the content of the configs folder of that scenario.

Ground-truth clarifications

The format of the ground truth is explained in the challenge case description; however, there is a special case that is important to mention. The involved class is org.argouml.profile.UserDefinedProfile.UserDefinedProfile. To illustrate this case, you can find below the code of a class with a constructor where a parameter is only present in the case of FEATUREA.

package myPackage;

public class MethodParameters {

    public MethodParameters(String a, String b,
            //#if defined(FEATUREA)
            String c,
            //#endif
            String d) {
        // do something
    }
}

The trace in the ground-truth will be the method with all the parameters and the refinement tag:

myPackage.MethodParameters MethodParameters(String,String,String,String) Refinement

This choice can be considered arbitrary. It would also be correct to use "myPackage.MethodParameters MethodParameters(String,String,String) Refinement" for not_FEATUREA, or even to consider two separate methods (one for FEATUREA and another for not_FEATUREA), each with its corresponding parameters and without the refinement tag. For the moment, please use the method with all the parameters and the refinement tag in these cases.
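For method-level traces like the one above, a trace line is assembled from the qualified class name, the method name with its parameter types, and an optional Refinement tag. A minimal sketch of that composition; the class and method names here are ours (the benchmark's own TraceIdUtils works on JDT AST nodes instead):

```java
import java.util.List;

// Sketch: assemble a ground-truth style trace line from its parts.
// Illustrative only; not the benchmark's TraceIdUtils implementation.
public class TraceLineSketch {

    static String methodTrace(String qualifiedClass, String method,
                              List<String> paramTypes, boolean refinement) {
        // Parameter types are comma-separated, with no spaces.
        String params = String.join(",", paramTypes);
        String trace = qualifiedClass + " " + method + "(" + params + ")";
        return refinement ? trace + " Refinement" : trace;
    }

    public static void main(String[] args) {
        // Reproduces the example trace from the clarification above.
        System.out.println(methodTrace("myPackage.MethodParameters",
                "MethodParameters",
                List.of("String", "String", "String", "String"), true));
    }
}
```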

