
Datasets for the ICPE 2024 Data Challenge track

Information about the track: https://icpe2024.spec.org/tracks-and-submissions/data-challenge-track

The dataset comprises 560 CSV files containing pre-processed execution traces. These traces originate from two open-source microservices systems: Train-Ticket and E-Shopper.

The entire dataset is organized within the data folder. Each CSV file corresponds to a unique scenario, in which specific performance anomalies are injected into one or more Remote Procedure Calls (RPCs).

The rows of each CSV file represent individual end-to-end requests, while the columns report the cumulative response time (in milliseconds) of particular RPCs within those requests. When an RPC is invoked multiple times within a single request, the response times of all invocations are summed. The Latency column reports the response time of the root RPC, i.e., the end-to-end response time of the request.

The anomaly column marks performance anomalies: 0 indicates no performance issue, while 1 and 2 denote two distinct performance issue types. Each scenario features both issue types, and each type affects a different subset of RPCs.

For a detailed explanation of the dataset creation process, refer to Section 5.3 of (Traini and Cortellessa, 2023). The CSV files in this repository were used for the paper's evaluation in the context of RQ1, RQ2, and RQ3.
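As a minimal sketch of the data layout (the file name data/scenario.csv is illustrative; substitute any CSV from the data folder), a scenario can be loaded and inspected with pandas:

import pandas as pd

# Load one scenario (illustrative file name; use any CSV in data/).
df = pd.read_csv("data/scenario.csv")

# Rows are end-to-end requests; RPC columns hold cumulative
# response times in milliseconds.
print(df.columns.tolist())

# End-to-end response time of each request (root RPC).
print(df["Latency"].describe())

# Anomaly marker: 0 = no issue, 1 and 2 = the two injected issue types.
print(df["anomaly"].value_counts())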

Questions about the dataset can be asked by opening issues on this repository, or by sending an e-mail to icpe2024-data@easychair.org.

Usage example

The Jupyter notebooks Example-Supervised.ipynb and Example-Unsupervised.ipynb provide illustrative examples of how to use the dataset. Example-Supervised.ipynb employs a Decision Tree to predict the presence of performance issues in requests, while Example-Unsupervised.ipynb uses the KMeans algorithm to cluster requests and assesses how these clusters align with the original classification. To run the notebooks, ensure you have pandas, scikit-learn, seaborn, and notebook installed. You can set up the environment and install these packages with the following commands (a sketch of both approaches follows below):

python3 -m venv .env
source .env/bin/activate
pip install -r requirements.txt
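Once the packages are installed, the notebooks can be opened with jupyter notebook. For orientation, the sketch below mirrors the two notebooks in spirit rather than reproducing their exact code: a scikit-learn DecisionTreeClassifier predicts the anomaly label from the RPC columns, and KMeans clusters the requests for comparison against those labels. The file name is again illustrative.

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Illustrative file name; use any scenario CSV from data/.
df = pd.read_csv("data/scenario.csv")

# Features: cumulative RPC response times; target: anomaly marker.
X = df.drop(columns=["anomaly"])
y = df["anomaly"]

# Supervised: a decision tree predicting the anomaly label.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)
clf = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# Unsupervised: cluster requests (one cluster per label value)
# and cross-tabulate clusters against the original labels.
km = KMeans(n_clusters=3, random_state=42, n_init=10).fit(X)
print(pd.crosstab(y, km.labels_))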

The dataset was originally created for the paper:

L. Traini and V. Cortellessa, "DeLag: Using Multi-Objective Optimization to Enhance the Detection of Latency Degradation Patterns in Service-Based Systems," IEEE Transactions on Software Engineering, vol. 49, no. 6, pp. 3554-3580, June 2023, doi: 10.1109/TSE.2023.3266041.

@ARTICLE{delag,
  author={Traini, Luca and Cortellessa, Vittorio},
  journal={IEEE Transactions on Software Engineering}, 
  title={DeLag: Using Multi-Objective Optimization to Enhance the Detection of Latency Degradation Patterns in Service-Based Systems}, 
  year={2023},
  volume={49},
  number={6},
  pages={3554-3580},
  doi={10.1109/TSE.2023.3266041}}
