Clustering-Glossary-Terms-Extracted-from-Large-Sized-Software-Requirements-using-FastText

Overview

This repository contains the results of automatic glossary term extraction and clustering based on two important qualitative attributes, feature and benefit, of the original CrowdRE requirement specifications dataset. In the original CrowdRE dataset, each entry has six attributes: role, feature, benefit, domain, tags, and date-time of creation. Since we are interested in extracting domain-specific terms from this dataset, we focus only on the feature and benefit attributes. The dataset used in our experiments, containing only the feature and benefit attributes of the original CrowdRE dataset, is provided in the file "CrowdRE Requirements Dataset.csv". The original CrowdRE dataset was developed by P. K. Murukannaiah et al. and can be accessed as "The smarthome crowd requirements dataset", https://crowdre.github.io/murukannaiah-smarthome-requirements-dataset/, April 2017.
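As a minimal illustration of how this reduced dataset can be loaded, the sketch below reads the CSV with pandas and keeps the two attributes of interest. The column names feature and benefit are assumptions based on the description above; adjust them to match the actual CSV header.

```python
import pandas as pd

# Load the reduced CrowdRE dataset shipped with this repository.
# Column names are assumed from the description; check the CSV header.
df = pd.read_csv("CrowdRE Requirements Dataset.csv")

# Keep only the two qualitative attributes used for glossary extraction.
requirements = df[["feature", "benefit"]]
print(requirements.head())
```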

Approach

We have computed and reported a ground truth set for a random subset of 100 requirement specifications of the CrowdRE dataset. In total, we manually identified 120 ground truth glossary terms grouped into 30 overlapping clusters. Because no benchmark or gold standard exists for ground truth extraction and clustering on the CrowdRE dataset, the ground truth glossary terms were derived, in an unbiased manner, from the best judgement of the people involved in this project. The file "Ground Truth Clusters.docx" lists the ground truth glossary terms together with the manually formulated, semantically similar clusters. Note: the clusters are separated by the (######) symbol in the file. The 120 manually identified ground truth glossary terms are also shown in the third column of the file "Extracted Glossary Terms (With and Without WordNet Removal) and Ground Truth Glossary Terms.csv".
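A small sketch, assuming the file stores one term per paragraph and uses a (######) paragraph as the cluster separator, for reading those ground truth clusters with the python-docx library (not part of this repository):

```python
from docx import Document  # pip install python-docx

def load_ground_truth_clusters(path="Ground Truth Clusters.docx"):
    """Split the document into clusters at the (######) separator lines."""
    clusters, current = [], []
    for paragraph in Document(path).paragraphs:
        text = paragraph.text.strip()
        if not text:
            continue
        if "######" in text:          # cluster separator used in the file
            if current:
                clusters.append(current)
            current = []
        else:
            current.append(text)
    if current:
        clusters.append(current)
    return clusters

clusters = load_ground_truth_clusters()
print(len(clusters), "clusters,", sum(len(c) for c in clusters), "terms")
```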

Using a mature text chunking approach, we extracted 143 and 292 glossary terms from the CrowdRE dataset with and without, respectively, removing words that appear in the WordNet lexical database (https://wordnet.princeton.edu/). The results are shown in the first and second columns of the file "Extracted Glossary Terms (With and Without WordNet Removal) and Ground Truth Glossary Terms.csv". FastText word embedding vectors (https://fasttext.cc/docs/en/english-vectors.html) for the extracted glossary terms were obtained in two ways: by training on a domain-specific corpus closely related to the CrowdRE dataset (the Wikipedia Home Automation category up to a maximum depth of two, https://en.wikipedia.org/wiki/Category:Home_automation), and by using pre-trained word vectors trained with subword information on Wikipedia 2017, the UMBC webbase corpus and the statmt.org news dataset (T. Mikolov, E. Grave, P. Bojanowski, C. Puhrsch, A. Joulin. Advances in Pre-Training Distributed Word Representations).

The main purpose of this training is to deduce clusters by forming a similarity matrix over the extracted glossary terms. The similarity matrix is built from the semantic similarity scores (cosine similarity) computed between the word vectors of the FastText model. On top of this matrix, we applied two clustering algorithms, viz. K-Means and EM clustering. The resulting automatically formulated clusters for the random subset of 100 requirement specifications of the CrowdRE dataset, for which the ground truth glossary terms were calculated, are shown in the files "Automated Ideal (Ground Truth) Clusters.docx" and "Automated Extraction and Clustering.docx", respectively. Note: there exist at most n/2 clusters for n glossary terms.
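The pipeline below is a hedged sketch of this clustering step, not the exact code used in the paper: it loads the pre-trained fastText vectors linked above, builds a cosine-similarity matrix over the extracted glossary terms (multi-word terms are represented by the average of their word vectors), and clusters the rows of that matrix with K-Means using at most n/2 clusters. The vector file name and the term list are placeholders.

```python
import numpy as np
from gensim.models import KeyedVectors          # pip install gensim
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Pre-trained fastText vectors (e.g. wiki-news-300d-1M-subword.vec) from
# https://fasttext.cc/docs/en/english-vectors.html -- path is a placeholder.
vectors = KeyedVectors.load_word2vec_format("wiki-news-300d-1M-subword.vec")

# Extracted glossary terms (illustrative subset only).
terms = ["smart home", "motion sensor", "thermostat", "energy usage"]

def term_vector(term):
    """Average the word vectors of a (possibly multi-word) glossary term.
    Assumes every word is in-vocabulary; a real run needs an OOV fallback."""
    words = [w for w in term.split() if w in vectors]
    return np.mean([vectors[w] for w in words], axis=0)

X = np.vstack([term_vector(t) for t in terms])

# Semantic similarity matrix: cosine similarity between term vectors.
similarity = cosine_similarity(X)

# Cluster the terms, representing each term by its row of similarity scores;
# the README notes at most n/2 clusters for n glossary terms.
n_clusters = max(1, len(terms) // 2)
labels = KMeans(n_clusters=n_clusters, random_state=0).fit_predict(similarity)

for cluster_id in range(n_clusters):
    print(cluster_id, [t for t, l in zip(terms, labels) if l == cluster_id])
```

An EM-style variant can be sketched analogously by replacing KMeans with sklearn.mixture.GaussianMixture; the original implementation may differ in how terms are vectorized and how the similarity matrix feeds the clustering.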

Evaluation

To evaluate the efficacy of the clustering algorithms, we used commonly used performance evaluation metrics: precision, recall and F-score. The evaluation graphs, which plot the area under the curve (AUC) and report normalized AUC scores for all the clustering algorithms trained on the two different datasets, are shown in the files "Cluster Plots.docx" and "Extraction +Clustering Plots.docx", respectively.
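As an illustration of the set-based scores, precision, recall and F-score can be computed by comparing extracted glossary terms against the manual ground truth. The term lists below are placeholders, not results from the paper.

```python
def precision_recall_f1(extracted, ground_truth):
    """Set-based precision, recall and F-score for extracted glossary terms."""
    extracted, ground_truth = set(extracted), set(ground_truth)
    true_positives = len(extracted & ground_truth)
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return precision, recall, f1

# Placeholder term lists for illustration only.
extracted = ["motion sensor", "thermostat", "energy usage", "window"]
ground_truth = ["motion sensor", "thermostat", "smart lock"]
print(precision_recall_f1(extracted, ground_truth))
```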

Publication Details

If you find the above-mentioned details useful for your research, please cite the following paper.

Kushagra Bhatia, Siba Mishra, Arpit Sharma. Clustering Glossary Terms Extracted from Large-Sized Software Requirements using FastText. In Proceedings of the 13th Innovations in Software Engineering Conference (ISEC'2020) (Formerly known as India Software Engineering Conference), Jabalpur, Madhya Pradesh, India. Article 5, 1–11. DOI: https://doi.org/10.1145/3385032.3385039
