This research investigates the efficacy of Small Language Models (SLMs) for code in generating high-quality docstrings, assessing accuracy, conciseness, and clarity. We benchmark the performance of leading CodeSLMs on docstring generation quantitatively through mathematical formulas and qualitatively through human evaluation on a Likert scale. We also release DocuMint, a large-scale supervised fine-tuning dataset with 100,000 samples. Lastly, we use this dataset to fine-tune the CodeGemma 2B model with LoRA. The dataset and the fine-tuned model are available on HuggingFace.
In quantitative experiments, Llama3 8B achieved the best performance across all metrics, with conciseness and clarity scores of 0.605 and 64.88, respectively.
Under qualitative human evaluation, CodeGemma 7B achieved the highest overall score with an average of 8.3 out of 10 across all metrics.
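As a minimal sketch of how the released dataset might be loaded (the HuggingFace repository id and field names below are hypothetical; substitute the actual paths linked on this page):

```python
from datasets import load_dataset

# Hypothetical repository id -- replace with the actual DocuMint dataset path on HuggingFace.
dataset = load_dataset("documint/documint")

# Each sample pairs a Python function with its reference docstring
# (field names are assumptions; check the dataset card for the real schema).
example = dataset["train"][0]
print(example)
```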
| Hyperparameter | Value |
|---|---|
| Fine-tuning Method | LoRA |
| Epochs | 4 |
| Batch Size | 8 |
| Gradient Accumulation Steps | 16 |
| Initial Learning Rate | 2e-4 |
| LoRA Parameters | 78,446,592 |
| Training Tokens | 185,040,896 |

Fine-tuning hyperparameters.
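A minimal sketch of a LoRA fine-tuning setup mirroring the table above, using the `peft` and `transformers` libraries. The LoRA rank, alpha, and target modules are assumptions (the table only reports the resulting trainable parameter count); the epochs, batch size, gradient accumulation steps, and learning rate come directly from the table.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Base model (gated on HuggingFace; requires accepting the CodeGemma license).
model = AutoModelForCausalLM.from_pretrained("google/codegemma-2b")

# Rank, alpha, and target modules are assumptions, not values reported in the table.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # should report the trainable LoRA parameter count

# Hyperparameters taken from the table above.
training_args = TrainingArguments(
    output_dir="codegemma-2b-documint",
    num_train_epochs=4,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=16,
    learning_rate=2e-4,
)
```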
Loss curve during fine-tuning of the CodeGemma 2B base model.
Fine-tuning the CodeGemma 2B model using the DocuMint dataset led to significant improvements in performance across all metrics, with gains of up to 22.5% in conciseness.
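A sketch of generating a docstring with the fine-tuned model. The model id and prompt format are assumptions; substitute the released checkpoint and its documented prompt template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model id -- replace with the released fine-tuned checkpoint.
model_id = "documint/codegemma-2b-documint"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt format is an assumption: provide the function and ask for a docstring.
prompt = (
    "Generate a docstring for the following Python function:\n"
    "def add(a, b):\n    return a + b\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```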
@article{poudel2024documint,
title={DocuMint: Docstring Generation for Python using Small Language Models},
author={Poudel, Bibek and Cook, Adam and Traore, Sekou and Ameli, Shelah},
  journal={arXiv preprint arXiv:2405.10243},
year={2024}
}
We would like to thank Dr. Audris Mockus for his guidance on the project and help with World of Code. We would also like to thank the Fluidic City Lab for providing the compute resources.