LLM4SE_SLR

This repository extends our recent work, "Large Language Models for Software Engineering: A Systematic Literature Review". It contains the supporting material for our study and a curated collection of LLM4SE papers and other resources (datasets, tutorials, etc.). The focus is primarily on papers that apply Large Language Models (LLMs) to Software Engineering (SE) research.

Large Language Models (LLMs) have significantly impacted numerous domains, including Software Engineering (SE). Many recent publications have explored LLMs applied to various SE tasks. Nevertheless, a comprehensive understanding of the application, effects, and possible limitations of LLMs in SE is still in its early stages. To bridge this gap, we conducted a systematic literature review (SLR) on LLM4SE, with a particular focus on understanding how LLMs can be exploited to optimize processes and outcomes. We collected and analyzed 395 research papers from 2017 to January 2024 to answer four key research questions (RQs). In RQ1, we categorize the different LLMs that have been employed in SE tasks, characterizing their distinctive features and uses. In RQ2, we analyze the methods used in data collection, preprocessing, and application, highlighting the role of well-curated datasets in successful LLM4SE implementations. RQ3 investigates the strategies employed to optimize and evaluate the performance of LLMs in SE. Finally, RQ4 examines the specific SE tasks where LLMs have shown success to date, illustrating their practical contributions to the field. From the answers to these RQs, we discuss the current state of the art and trends, identify gaps in existing research, and flag promising areas for future study.

Please feel free to send a pull request to add any papers or other relevant content not listed here. We have uploaded our comprehensive lists of papers and the detailed information related to the four research questions (RQs) to Google Drive, which also includes specifics about the open-source status of the code and tools associated with these papers.
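
Each entry in the lists below follows the pattern "Title (Year), Venue, Authors". If you want to process the list programmatically, for example to count papers per venue or per year, the minimal Python sketch below parses entries in that format. It is an illustrative sketch only: the README.md file name, the bullet characters, and the regular expression are our assumptions about a local checkout of this repository, not part of the SLR tooling.

```python
import re
from collections import Counter

# Matches entries of the form: "Title (Year), Venue, Authors."
# The bullet characters and the overall pattern are assumptions based on
# the entry format used throughout this README.
ENTRY = re.compile(
    r"^\s*[•*-]\s*(?P<title>.+?)\s*\((?P<year>\d{4})\),\s*"
    r"(?P<venue>[^,]+),\s*(?P<authors>.+?)\.?\s*$"
)

def parse_entries(path="README.md"):
    """Yield (title, year, venue, authors) for each line matching ENTRY."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            m = ENTRY.match(line)
            if m:
                yield (
                    m.group("title"),
                    int(m.group("year")),
                    m.group("venue").strip(),
                    m.group("authors").strip(),
                )

if __name__ == "__main__":
    # Example usage: count parsed papers per venue.
    venues = Counter(venue for _, _, venue, _ in parse_entries())
    for venue, count in venues.most_common():
        print(f"{venue}: {count}")
```

Lines that deviate from the pattern are simply skipped, so headings and prose are ignored.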

Contents

Papers

Requirements engineering

  • A Deep Context-wise Method for Coreference Detection in Natural Language Requirements (detecting coreferent entities in natural language requirements) (2020), RE, Wang, Yawen; Shi, Lin; Li, Mingyang; Wang, Qing; Yang, Yun.
  • Automated Handling of Anaphoric Ambiguity in Requirements: A Multi-Solution Study (2022), ICSE, Ezzini, S.; Abualhaija, S.; Arora, C.; Sabetzadeh, M.
  • ChatGPT as a tool for User Story Quality Evaluation: Trustworthy Out of the Box? (2023), arXiv, Ronanki, Krishna and Cabrero-Daniel, Beatriz and Berger, Christian.
  • ChatGPT Prompt Patterns for Improving Code Quality, Refactoring, Requirements Elicitation, and Software Design (2023), arXiv, White, Jules; Hays, Sam; Fu, Quchen; Spencer-Smith, Jesse; Schmidt, Douglas C.
  • ChatGPT: A Study on its Utility for Ubiquitous Software Engineering Tasks (2023), arXiv, Sridhara, Giriprasad; G., Ranjani H.; Mazumdar, Sourav.
  • Experimenting a New Programming Practice with LLMs (2024), arXiv, Zhang, Simiao; Wang, Jiaping; Dong, Guoliang; Sun, Jun; Zhang, Yueling; Pu, Geguang.
  • Few-shot learning for sentence pair classification and its applications in software engineering (2023), arXiv, Helmeczi, Robert Kraig; Cevik, Mucahit; Yıldırım, Savas.
  • Formalizing Natural Language Intent into Program Specifications via Large Language Models (2023), arXiv, Endres, Madeline; Fakhoury, Sarah; Chakraborty, Saikat; Lahiri, Shuvendu K.
  • Identification of intra-domain ambiguity using transformer-based machine learning (2022), ICSE, Moharil, Ambarish; Sharma, Arpit.
  • Impact of Large Language Models on Generating Software Specifications (2023), arXiv, Xie, Danning; Yoo, Byungwoo; Jiang, Nan; Kim, Mijung; Tan, Lin; Zhang, Xiangyu; Lee, Judy S.
  • Leveraging Transformer-based Language Models to Automate Requirements Satisfaction Assessment (2023), arXiv, Poudel, Amrit; Lin, Jinfeng; Cleland-Huang, Jane.
  • NoRBERT: Transfer Learning for Requirements Classification (2020), RE, Hey, Tobias; Keim, Jan; Koziolek, Anne; Tichy, Walter F.
  • PRCBERT: Prompt Learning for Requirement Classification using BERT-based Pretrained Language Models (2022), ASE, Luo, Xianchang; Xue, Yinxing; Xing, Zhenchang; Sun, Jiamou.
  • SpecGen: Automated Generation of Formal Program Specifications via Large Language Models (2024), arXiv, Ma, Lezhi; Liu, Shangqing; Li, Yi; Xie, Xiaofei; Bu, Lei.
  • TABASCO: A transformer based contextualization toolkit (2023), SCP, Moharil, Ambarish; Sharma, Arpit.
  • Traceability transformed: Generating more accurate links with pre-trained bert models (2021), ICSE, Lin, Jinfeng; Liu, Yalin; Zeng, Qingkai; Jiang, Meng; Cleland-Huang, Jane.
  • Which AI Technique Is Better to Classify Requirements? An Experiment with SVM, LSTM, and ChatGPT (2023), arXiv, El-Hajjami, Abdelkarim; Fafin, Nicolas; Salinesi, Camille.

Software design

  • ChatGPT Prompt Patterns for Improving Code Quality, Refactoring, Requirements Elicitation, and Software Design (2023), arXiv, White, Jules; Hays, Sam; Fu, Quchen; Spencer-Smith, Jesse; Schmidt, Douglas C.
  • Data-driven prototyping via natural-language-based gui retrieval (2023), ASE_J, Kolthoff, Kristian; Bartelt, Christian; Ponzetto, Simone Paolo.
  • Experimenting a New Programming Practice with LLMs (2024), arXiv, Zhang, Simiao; Wang, Jiaping; Dong, Guoliang; Sun, Jun; Zhang, Yueling; Pu, Geguang.
  • Large Language Models Based Automatic Synthesis of Software Specifications (2023), arXiv, Mandal, Shantanu; Chethan, Adhrik; Janfaza, Vahid; Mahmud, S. M. Farabi; Anderson, Todd A.; Turek, Javier; Tithi, Jesmin Jahan; Muzahid, Abdullah.

Software development

  • A Closer Look at Different Difficulty Levels Code Generation Abilities of ChatGPT (2023), ASE, Yan, Dapeng; Gao, Zhipeng; Liu, Zhiming.
  • A Comparative Study of Code Generation using ChatGPT 3.5 across 10 Programming Languages (2023), arXiv, Buscemi, Alessio.
  • A Lightweight Framework for High-Quality Code Generation (2023), arXiv, Siddiq, Mohammed Latif; Casey, Beatrice; Santos, Joanna C. S.
  • A Novel Approach for Rapid Development Based on ChatGPT and Prompt Engineering (2023), arXiv, Li, Youjia; Shi, Jianjun; Zhang, Zheng.
  • A Prompt Learning Framework for Source Code Summarization (2023), TOSEM, Sun, Weisong; Fang, Chunrong; You, Yudu; Chen, Yuchen; Liu, Yi; Wang, Chong; Zhang, Jian; Zhang, Quanjun; Qian, Hanwei; Zhao, Wei; Liu, Yang; Chen, Zhenyu.
  • A Static Evaluation of Code Completion by Large Language Models (2023), arXiv, Ding, Hantian; Kumar, Varun; Tian, Yuchen; Wang, Zijian; Kwiatkowski, Rob; Li, Xiaopeng; Ramanathan, Murali Krishna; Ray, Baishakhi; Bhatia, Parminder; Sengupta, Sudipta; et al.
  • A Syntax-Guided Multi-Task Learning Approach for Turducken-Style Code Generation (2023), arXiv, Yang, Guang; Zhou, Yu; Chen, Xiang; Zhang, Xiangyu; Xu, Yiran; Han, Tingting; Chen, Taolue.
  • A systematic evaluation of large language models of code (2022), PLDI, Xu, Frank F.; Alon, Uri; Neubig, Graham; Hellendoorn, Vincent Josua.
  • A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark Datasets (2023), ACL, Laskar, Md Tahmid Rahman; Bari, M. Saiful; Rahman, Mizanur; Bhuiyan, Md Amran Hossen; Joty, Shafiq; Huang, Jimmy Xiangji.
  • Adaptive Intellect Unleashed: The Feasibility of Knowledge Transfer in Large Language Models (2023), arXiv, Huang, Qing; Wu, Yishun; Xing, Zhenchang; Jiang, He; Cheng, Yu; Jin, Huan.
  • AgentCoder: Multi-Agent-based Code Generation with Iterative Testing and Optimisation (2023), arXiv, Huang, Dong; Bu, Qingwen; Zhang, Jie M.; Luck, Michael; Cui, Heming.
  • AI Chain on Large Language Model for Unsupervised Control Flow Graph Generation for Statically-Typed Partial Code (2023), arXiv, Huang, Qing; Zou, Zhou; Xing, Zhenchang; Zuo, Zhenkang; Xu, Xiwei; Lu, Qinghua.
  • AI for Low-Code for AI (2023), arXiv, Rao, Nikitha; Tsay, Jason; Kate, Kiran; Hellendoorn, Vincent J.; Hirzel, Martin.
  • Aligning Offline Metrics and Human Judgments of Value of AI-Pair Programmers (2022), ACL, Dibia, Victor; Fourney, Adam; Bansal, Gagan; Poursabzi-Sangdeh, Forough; Liu, Han; Amershi, Saleema.
  • An empirical study on code comment completion (2021), ICSME, Mastropaolo, Antonio; Aghajani, Emad; Pascarella, Luca; Bavota, Gabriele.
  • An empirical study on the usage of transformer models for code completion (2021), TSE, Ciniselli, Matteo; Cooper, Nathan; Pascarella, Luca; Mastropaolo, Antonio; Aghajani, Emad; Poshyvanyk, Denys; Di Penta, Massimiliano; Bavota, Gabriele.
  • An extensive study on pre-trained models for program understanding and generation (2022), ISSTA, Zeng, Zhengran; Tan, Hanzhuo; Zhang, Haotian; Li, Jing; Zhang, Yuqun; Zhang, Lingming.
  • Analysis of ChatGPT on Source Code (2023), arXiv, Sadik, Ahmed R; Ceravola, Antonello; Joublin, Frank; Patra, Jibesh.
  • ANPL: Compiling Natural Programs with Interactive Decomposition (2023), arXiv, Huang, Di; Nan, Ziyuan; Hu, Xing; Jin, Pengwei; Peng, Shaohui; Wen, Yuanbo; Zhang, Rui; Du, Zidong; Guo, Qi; Pu, Yewen; Chen, Yunji.
  • API Entity and Relation Joint Extraction from Text via Dynamic Prompt-tuned Language Model (2023), arXiv, Huang, Qing; Sun, Yanbang; Xing, Zhenchang; Yu, Min; Xu, Xiwei; Lu, Qinghua.
  • APIDocBooster: An Extract-Then-Abstract Framework Leveraging Large Language Models for Augmenting API Documentation (2023), arXiv, Yang, Chengran; Liu, Jiakun; Xu, Bowen; Treude, Christoph; Lyu, Yunbo; He, Junda; Li, Ming; Lo, David.
  • APIGen: Generative API Method Recommendation (2024), SANER, Chen, Yujia; Gao, Cuiyun; Zhu, Muyijie; Liao, Qing; Wang, Yong; Xu, Guoai.
  • ART: Automatic multi-step reasoning and tool-use for large language models (2023), arXiv, Paranjape, Bhargavi; Lundberg, Scott; Singh, Sameer; Hajishirzi, Hannaneh; Zettlemoyer, Luke; Ribeiro, Marco Tulio.
  • Assemble foundation models for automatic code summarization (2022), SANER, Gu, Jian; Salza, Pasquale; Gall, Harald C.
  • Assessing and Improving Syntactic Adversarial Robustness of Pre-trained Models for Code Translation (2023), arXiv, Yang, Guang; Zhou, Yu; Zhang, Xiangyu; Chen, Xiang; Han, Tingting; Chen, Taolue.
  • Assessing the Promise and Pitfalls of ChatGPT for Automated Code Generation (2023), arXiv, Khan, Muhammad Fawad Akbar; Ramsdell, Max; Falor, Erik; Karimi, Hamid.
  • Attention, Compilation, and Solver-based Symbolic Analysis are All You Need (2023), arXiv, Jana, Prithwish; Jha, Piyush; Ju, Haoyang; Kishore, Gautham; Mahajan, Aryan; Ganesh, Vijay.
  • Automatic Code Summarization via ChatGPT: How Far Are We? (2023), arXiv, Sun, Weisong; Fang, Chunrong; You, Yudu; Miao, Yun; Liu, Yi; Li, Yuekang; Deng, Gelei; Huang, Shenghan; Chen, Yuchen; Zhang, Quanjun; Qian, Hanwei; Liu, Yang; Chen, Zhenyu.
  • Automatic detection and analysis of technical debts in peer-review documentation of r packages (2022), SANER, Khan, Junaed Younus; Uddin, Gias.
  • Automatic Detection of Five API Documentation Smells: Practitioners’ Perspectives (2021), SANER, Khan, Junaed Younus; Tawkat Islam Khondaker, Md.; Uddin, Gias; Iqbal, Anindya.
  • Automatic Model Selection with Large Language Models for Reasoning (2023), arXiv, Zhao, Xu; Xie, Yuxi; Kawaguchi, Kenji; He, Junxian; Xie, Qizhe.
  • Automatic recognizing relevant fragments of APIs using API references (2024), ASE_J, Wu, Di; Feng, Yang; Zhang, Hongyu; Xu, Baowen.
  • Automatic Semantic Augmentation of Language Model Prompts (for Code Summarization) (2024), ICSE, Ahmed, Toufique; Pai, Kunal Suresh; Devanbu, Premkumar; Barr, Earl T.
  • Automating Method Naming with Context-Aware Prompt-Tuning (2023), ICPC, Zhu, Jie; Li, Lingwei; Yang, Li; Ma, Xiaoxiao; Zuo, Chun.
  • Benchmarking and Explaining Large Language Model-based Code Generation: A Causality-Centric Approach (2023), arXiv, Ji, Zhenlan; Ma, Pingchuan; Li, Zongjie; Wang, Shuai.
  • Benchmarking Language Models for Code Syntax Understanding (2022), EMNLP, Shen, Da; Chen, Xinyun; Wang, Chenguang; Sen, Koushik; Song, Dawn.
  • BEQAIN: An Effective and Efficient Identifier Normalization Approach with BERT and the Question Answering System (2022), TSE, Zhang, J.; Liu, S.; Gong, L.; Zhang, H.; Huang, Z.; Jiang, H.
  • Binary Code Summarization: Benchmarking ChatGPT/GPT-4 and Other Large Language Models (2023), arXiv, Jin, Xin; Larson, Jonathan; Yang, Weiwei; Lin, Zhiqiang.
  • BLOOM: A 176B-Parameter Open-Access Multilingual Language Model (2022), arXiv, BigScience Workshop; Scao, Teven Le; Fan, Angela; Akiki, Christopher; Pavlick, Ellie; et al.
  • Can ChatGPT replace StackOverflow? A Study on Robustness and Reliability of Large Language Model Code Generation (2023), arXiv, Zhong, Li; Wang, Zilong.
  • Can Large Language Models Write Parallel Code? (2024), arXiv, Nichols, Daniel; Davis, Joshua H.; Xie, Zhaojun; Rajaram, Arjun; Bhatele, Abhinav.
  • Capturing failures of large language models via human cognitive biases (2022), NeurIPS, Jones, Erik; Steinhardt, Jacob.
  • Cctest: Testing and repairing code completion systems (2023), ICSE, Li, Zongjie; Wang, Chaozheng; Liu, Zhibo; Wang, Haoxuan; Chen, Dong; Wang, Shuai; Gao, Cuiyun.
  • CERT: Continual Pre-training on Sketches for Library-oriented Code Generation (2022), IJCAI, Zan, Daoguang; Chen, Bei; Yang, Dejian; Lin, Zeqi; Kim, Minsu; Guan, Bei; Wang, Yongji; Chen, Weizhu; Lou, Jian-Guang.
  • ChatCoder: Chat-based Refine Requirement Improves LLMs' Code Generation (2023), arXiv, Wang, Zejun; Li, Jia; Li, Ge; Jin, Zhi.
  • ChatGPT is a Remarkable Tool—For Experts (2023), arXiv, Azaria, Amos; Azoulay, Rina; Reches, Shulamit.
  • ChatGPT: A Study on its Utility for Ubiquitous Software Engineering Tasks (2023), arXiv, Sridhara, Giriprasad; G., Ranjani H.; Mazumdar, Sourav.
  • ClarifyGPT: Empowering LLM-based Code Generation with Intention Clarification (2023), arXiv, Mu, Fangwen; Shi, Lin; Wang, Song; Yu, Zhuohao; Zhang, Binquan; Wang, Chenxue; Liu, Shichao; Wang, Qing.
  • ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation (2023), arXiv, Du, Xueying; Liu, Mingwei; Wang, Kaixin; Wang, Hanlin; Liu, Junwei; Chen, Yixuan; Feng, Jiayi; Sha, Chaofeng; Peng, Xin; Lou, Yiling.
  • CLEAR: COntrastive LeArning for API REcommendation (2022), ICSE, Wei, M.; Harzevili, N. S.; Huang, Y.; Wang, J.; Wang, S.
  • Clover: Closed-Loop Verifiable Code Generation (2023), arXiv, Sun, Chuyue; Sheng, Ying; Padon, Oded; Barrett, Clark.
  • Coarse-Tuning Models of Code with Reinforcement Learning Feedback (2023), arXiv, Jain, Abhinav; Adiole, Chima; Chaudhuri, Swarat; Reps, Thomas; Jermaine, Chris.
  • Code generation tools (almost) for free? a study of few-shot, pre-trained language models on code (2022), arXiv, Bareiß, Patrick; Souza, Beatriz; d'Amorim, Marcelo; Pradel, Michael.
  • Code Generation with AlphaCodium: From Prompt Engineering to Flow Engineering (2024), arXiv, Ridnik, Tal; Kredo, Dedy; Friedman, Itamar.
  • CodeAgent: Enhancing Code Generation with Tool-Integrated Agent Systems for Real-World Repo-level Coding Challenges (2024), arXiv, Zhang, Kechi; Li, Jia; Li, Ge; Shi, Xianjie; Jin, Zhi.
  • CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code (2023), EMNLP, Zhou, Shuyan; Alon, Uri; Agarwal, Sumit; Neubig, Graham.
  • CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules (2023), arXiv, Le, Hung; Chen, Hailin; Saha, Amrita; Gokul, Akash; Sahoo, Doyen; Joty, Shafiq.
  • CodeCompose: A Large-Scale Industrial Deployment of AI-assisted Code Authoring (2023), arXiv, Murali, Vijayaraghavan; Maddila, Chandra; Ahmad, Imad; Bolin, Michael; Cheng, Daniel; Ghorbani, Negar; Fernandez, Renuka; Nagappan, Nachiappan.
  • CodeCoT and Beyond: Learning to Program and Test like a Developer (2023), arXiv, Huang, Dong; Bu, Qingwen; Cui, Heming.
  • CodeEditor: Learning to Edit Source Code with Pre-trained Models (2023), TOSEM, Li, Jia; Li, Ge; Li, Zhuo; Jin, Zhi; Hu, Xing; Zhang, Kechi; Fu, Zhiyi.
  • Codefill: Multi-token code completion by jointly learning from structure and naming sequences (2022), ICSE, Izadi, Maliheh; Gismondi, Roberta; Gousios, Georgios.
  • CodeGeeX: A pre-trained model for code generation with multilingual evaluations on HumanEval-X (2023), arXiv, Zheng, Qinkai; Xia, Xiao; Zou, Xu; Dong, Yuxiao; Wang, Shan; Xue, Yufei; Wang, Zihan; Shen, Lei; Wang, Andi; Li, Yang; Su, Teng; Yang, Zhilin; Tang, Jie.
  • CodeGen2: Lessons for Training LLMs on Programming and Natural Languages (2023), arXiv, Nijkamp, Erik; Hayashi, Hiroaki; Xiong, Caiming; Savarese, Silvio; Zhou, Yingbo.
  • CodeIE: Large Code Generation Models are Better Few-Shot Information Extractors (2023), ACL, Li, Peng; Sun, Tianxiang; Tang, Qiong; Yan, Hang; Wu, Yuanbin; Huang, Xuanjing; Qiu, Xipeng.
  • CodePlan: Repository-level Coding using LLMs and Planning (2023), arXiv, Bairi, Ramakrishna; Sonwane, Atharv; Kanade, Aditya; C, Vageesh D.; Iyer, Arun; Parthasarathy, Suresh; Rajamani, Sriram; Ashok, B.; Shet, Shashank.
  • Coder reviewer reranking for code generation (2023), ICML, Zhang, Tianyi; Yu, Tao; Hashimoto, Tatsunori B.; Lewis, Mike; Yih, Wen-tau; Fried, Daniel; Wang, Sida I.
  • CoderEval: A Benchmark of Pragmatic Code Generation with Generative Pre-trained Models (2023), arXiv, Yu, Hao; Shen, Bo; Ran, Dezhi; Zhang, Jiaxin; Zhang, Qi; Ma, Yuchi; Liang, Guangtai; Li, Ying; Xie, Tao; Wang, Qianxiang.
  • CodeScore: Evaluating Code Generation by Learning Code Execution (2023), arXiv, Dong, Yihong; Ding, Jiazheng; Jiang, Xue; Li, Ge; Li, Zhuo; Jin, Zhi.
  • Codet5+: Open code large language models for code understanding and generation (2023), arXiv, Wang, Yue; Le, Hung; Gotmare, Akhilesh Deepak; Bui, Nghi D. Q.; Li, Junnan; Hoi, Steven C. H.
  • CodeTF: One-stop Transformer Library for State-of-the-art Code LLM (2023), arXiv, Bui, Nghi D. Q.; Le, Hung; Wang, Yue; Li, Junnan; Gotmare, Akhilesh Deepak; Hoi, Steven C. H.
  • CodeTransOcean: A Comprehensive Multilingual Benchmark for Code Translation (2023), EMNLP, Yan, Weixiang; Tian, Yuchen; Li, Yunzhe; Chen, Qian; Wang, Wen.
  • Coffee: Boost Your Code LLMs by Fixing Bugs with Feedback (2023), arXiv, Moon, Seungjun; Song, Yongho; Chae, Hyungjoo; Kang, Dongjin; Kwon, Taeyoon; Ong, Kai Tzu-iunn; Hwang, Seung-won; Yeo, Jinyoung.
  • CoLadder: Supporting Programmers with Hierarchical Code Generation in Multi-Level Abstraction (2023), arXiv, Yen, Ryan; Zhu, Jiawen; Suh, Sangho; Xia, Haijun; Zhao, Jian.
  • Communicative Agents for Software Development (2023), arXiv, Qian, Chen; Cong, Xin; Yang, Cheng; Chen, Weize; Su, Yusheng; Xu, Juyuan; Liu, Zhiyuan; Sun, Maosong.
  • Comparing Software Developers with ChatGPT: An Empirical Investigation (2023), arXiv, Nascimento, Nathalia; Alencar, Paulo; Cowan, Donald.
  • Constructing Effective In-Context Demonstration for Code Intelligence Tasks: An Empirical Study (2023), arXiv, Gao, Shuzheng; Wen, Xin-Cheng; Gao, Cuiyun; Wang, Wenxuan; Lyu, Michael R.
  • ContraBERT: Enhancing Code Pre-Trained Models via Contrastive Learning (2023), ICSE, Liu, Shangqing; Wu, Bozhi; Xie, Xiaofei; Meng, Guozhu; Liu, Yang.
  • Copilot for Xcode: Exploring AI-Assisted Programming by Prompting Cloud-based Large Language Models (2023), arXiv, Tan, Chee Wei; Guo, Shangxin; Wong, Man Fai; Hang, Ching Nam.
  • Cross-Modal Contrastive Learning for Code Search (2022), ICSME, Shi, Zejian; Xiong, Yun; Zhang, Xiaolong; Zhang, Yao; Li, Shanshan; Zhu, Yangyong.
  • DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions (2023), arXiv, Wu, Fangzhou; Liu, Xiaogeng; Xiao, Chaowei.
  • DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence (2024), arXiv, Guo, Daya; Zhu, Qihao; Yang, Dejian; Xie, Zhenda; Dong, Kai; Zhang, Wentao; Chen, Guanting; Bi, Xiao; Wu, Y.; Li, Y. K.; Luo, Fuli; Xiong, Yingfei; Liang, Wenfeng.
  • De-Hallucinator: Iterative Grounding for LLM-Based Code Completion (2024), arXiv, Eghbali, Aryaz; Pradel, Michael.
  • Demystifying GPT Self-Repair for Code Generation (2023), arXiv, Olausson, Theo X.; Inala, Jeevana Priya; Wang, Chenglong; Gao, Jianfeng; Solar-Lezama, Armando.
  • DevEval: Evaluating Code Generation in Practical Software Projects (2024), arXiv, Li, Jia; Li, Ge; Zhao, Yunfei; Li, Yongmin; Jin, Zhi; Zhu, Hao; Liu, Huanyu; Liu, Kaibo; Wang, Lecheng; Fang, Zheng; Wang, Lanshen; Ding, Jiazheng; Zhang, Xuanming; Dong, Yihong; Zhu, Yuqi; Gu, Bin; Yang, Mengfei.
  • Discriminating Human-authored from ChatGPT-Generated Code Via Discernable Feature Analysis (2023), arXiv, Li, Ke; Hong, Sheng; Fu, Cai; Zhang, Yunhe; Liu, Ming.
  • Do Pre-trained Language Models Indeed Understand Software Engineering Tasks? (2022), arXiv, Li, Yao; Zhang, Tao; Luo, Xiapu; Cai, Haipeng; Fang, Sen; Yuan, Dawei.
  • Domain Adaptive Code Completion via Language Models and Decoupled Domain Databases (2023), ASE, Tang, Ze; Ge, Jidong; Liu, Shangqing; Zhu, Tingwei; Xu, Tongtong; Huang, Liguo; Luo, Bin.
  • DS-1000: A natural and reliable benchmark for data science code generation (2023), arXiv, Lai, Yuhang; Li, Chengxi; Wang, Yiming; Zhang, Tianyi; Zhong, Ruiqi; Zettlemoyer, Luke; Yih, Scott Wen-tau; Fried, Daniel; Wang, Sida; Yu, Tao.
  • Enabling Programming Thinking in Large Language Models Toward Code Generation (2023), arXiv, Li, Jia; Li, Ge; Li, Yongmin; Jin, Zhi.
  • Enhancing Code Intelligence Tasks with ChatGPT (An Empirical Study on Distilling ChatGPT for Advancing Code Intelligence Tasks) (2023), arXiv, Yang, Kang; Mao, Xinjun; Wang, Shangwen; Zhang, Tanghaoran; Lin, Bo; Wang, Yanlin; Qin, Yihao; Zhang, Zhang; Mao, Xiaoguang.
  • Entity-Augmented Code Generation (2023), arXiv, Shapkin, Anton; Litvinov, Denis; Zharov, Yaroslav; Bogomolov, Egor; Galimzyanov, Timur; Bryksin, Timofey.
  • Evaluating AIGC Detectors on Code Content (2023), arXiv, Wang, Jian; Liu, Shangqing; Xie, Xiaofei; Li, Yi.
  • Evaluating and improving transformers pre-trained on asts for code completion (2023), SANER, Ochs, Marcel; Narasimhan, Krishna; Mezini, Mira.
  • Evaluating ChatGPT and GPT-4 for Visual Programming (2023), arXiv, Singla, Adish.
  • Evaluating In-Context Learning of Libraries for Code Generation (2023), arXiv, Patel, Arkil; Reddy, Siva; Bahdanau, Dzmitry; Dasigi, Pradeep.
  • Evaluating Large Language Models Trained on Code (2021), arXiv, Chen, Mark; Tworek, Jerry; Jun, Heewoo; Yuan, Qiming; Pinto, Henrique Ponde de Oliveira; et al.
  • Evaluating the Code Quality of AI-Assisted Code Generation Tools: An Empirical Study on GitHub Copilot, Amazon CodeWhisperer, and ChatGPT (2023), arXiv, Yetiştiren, Burak; Özsoy, Işık; Ayerdem, Miray; Tüzün, Eray.
  • Examining zero-shot vulnerability repair with large language models (2021), S&P, Pearce, Hammond; Tan, Benjamin; Ahmad, Baleegh; Karri, Ramesh; Dolan-Gavitt, Brendan.
  • Experimenting a New Programming Practice with LLMs (2024), arXiv, Zhang, Simiao; Wang, Jiaping; Dong, Guoliang; Sun, Jun; Zhang, Yueling; Pu, Geguang.
  • Exploring Distributional Shifts in Large Language Models for Code Analysis (2023), arXiv, Arakelyan, Shushan; Das, Rocktim Jyoti; Mao, Yi; Ren, Xiang.
  • Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models (2023), arXiv, Weyssow, Martin; Zhou, Xin; Kim, Kisub; Lo, David; Sahraoui, Houari.
  • Exploring the Robustness of Large Language Models for Solving Programming Problems (2023), arXiv, Shirafuji, Atsushi; Watanobe, Yutaka; Ito, Takumi; Morishita, Makoto; Nakamura, Yuki; Oda, Yusuke; Suzuki, Jun.
  • Extending Source Code Pre-Trained Language Models to Summarise Decompiled Binaries (2023), SANER, Al-Kaswan, Ali; Ahmed, Toufique; Izadi, Maliheh; Sawant, Anand Ashok; Devanbu, Premkumar; van Deursen, Arie.
  • Extending the Frontier of ChatGPT: Code Generation and Debugging (2023), arXiv, Sakib, Fardin Ahsan; Khan, Saadat Hasan; Karim, A. H. M. Rezaul.
  • FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU (2023), arXiv, Sheng, Ying; Zheng, Lianmin; Yuan, Binhang; Li, Zhuohan; Ryabinin, Max; Fu, Daniel Y.; Xie, Zhiqiang; Chen, Beidi; Barrett, Clark; Gonzalez, Joseph E.; Liang, Percy; Ré, Christopher; Stoica, Ion; Zhang, Ce.
  • From Copilot to Pilot: Towards AI Supported Software Development (2023), arXiv, Pudari, Rohith; Ernst, Neil A.
  • From Misuse to Mastery: Enhancing Code Generation with Knowledge-Driven AI Chaining (2023), ASE, Ren, Xiaoxue; Ye, Xinyuan; Zhao, Dehai; Xing, Zhenchang; Yang, Xiaohu.
  • Function-constrained Program Synthesis (2023), NeurIPS, Hajali, Patrick; Budvytis, Ignas.
  • Generating Data for Symbolic Language with Large Language Models (2023), arXiv, Ye, Jiacheng; Li, Chengzu; Kong, Lingpeng; Yu, Tao.
  • Generation-Augmented Query Expansion for Code Retrieval (2022), arXiv, Li, Dong; Shen, Yelong; Jin, Ruoming; Mao, Yi; Wang, Kuan; Chen, Weizhu.
  • Gorilla: Large Language Model Connected with Massive APIs (2023), arXiv, Patil, Shishir G.; Zhang, Tianjun; Wang, Xin; Gonzalez, Joseph E.
  • GPT2SP: A Transformer-Based Agile Story Point Estimation Approach (2022), TSE, Fu, M.; Tantithamthavorn, C.
  • Grace: Language Models Meet Code Edits (2023), ESEC/FSE, Gupta, Priyanshu; Khare, Avishree; Bajpai, Yasharth; Chakraborty, Saikat; Gulwani, Sumit; Kanade, Aditya; Radhakrishna, Arjun; Soares, Gustavo; Tiwari, Ashish.
  • How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition (2023), arXiv, Dong, Guanting; Yuan, Hongyi; Lu, Keming; Li, Chengpeng; Xue, Mingfeng; Liu, Dayiheng; Wang, Wei; Yuan, Zheng; Zhou, Chang; Zhou, Jingren.
  • Improving ChatGPT Prompt for Code Generation (2023), arXiv, Liu, Chao; Bao, Xuanlin; Zhang, Hongyu; Zhang, Neng; Hu, Haibo; Zhang, Xiaohong; Yan, Meng.
  • Improving Code Example Recommendations on Informal Documentation Using BERT and Query-Aware LSH: A Comparative Study (2023), arXiv, Rahmani, Sajjad; Naghshzan, AmirHossein; Guerrouj, Latifa.
  • Improving Code Generation by Training with Natural Language Feedback (2023), arXiv, Chen, Angelica; Scheurer, Jérémy; Korbak, Tomasz; Campos, Jon Ander; Chan, Jun Shern; Bowman, Samuel R.; Cho, Kyunghyun; Perez, Ethan.
  • Improving Requirements Completeness: Automated Assistance through Large Language Models (2023), arXiv, Luitel, Dipeeka; Hassani, Shabnam; Sabetzadeh, Mehrdad.
  • In-IDE Generation-based Information Support with a Large Language Model (2023), ICSE, Nam, Daye; Macvean, Andrew; Hellendoorn, Vincent; Vasilescu, Bogdan; Myers, Brad.
  • Interactive Code Generation via Test-Driven User-Intent Formalization (2022), arXiv, Lahiri, Shuvendu K.; Fakhoury, Sarah; Naik, Aaditya; Sakkas, Georgios; Chakraborty, Saikat; Musuvathi, Madanlal; Choudhury, Piali; von Veh, Curtis; Inala, Jeevana Priya; Wang, Chenglong; Gao, Jianfeng.
  • Is AI the better programming partner? Human-Human Pair Programming vs. Human-AI pAIr Programming (2023), arXiv, Wu, Tongshuang; Koedinger, Kenneth; et al.
  • Is ChatGPT the Ultimate Programming Assistant -- How far is it? (2023), arXiv, Tian, Haoye; Lu, Weiqi; Li, Tsz On; Tang, Xunzhu; Cheung, Shing-Chi; Klein, Jacques; Bissyandé, Tegawendé F.
  • Is GPT-4 a Good Data Analyst? (2023), arXiv, Cheng, Liying; Li, Xingxuan; Bing, Lidong.
  • Is Model Attention Aligned with Human Attention? An Empirical Study on Large Language Models for Code Generation (2023), arXiv, Kou, Bonan; Chen, Shengmai; Wang, Zhijie; Ma, Lei; Zhang, Tianyi.
  • Is this Snippet Written by ChatGPT? An Empirical Study with a CodeBERT-Based Classifier (2023), arXiv, Nguyen, Phuong T.; Di Rocco, Juri; Di Sipio, Claudio; Rubei, Riccardo; Di Ruscio, Davide; Di Penta, Massimiliano.
  • Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation (2023), arXiv, Liu, Jiawei; Xia, Chunqiu Steven; Wang, Yuyao; Zhang, Lingming.
  • Jigsaw: Large language models meet program synthesis (2022), ICSE, Jain, Naman; Vaidyanath, Skanda; Iyer, Arun; Natarajan, Nagarajan; Parthasarathy, Suresh; Rajamani, Sriram; Sharma, Rahul.
  • L2CEval: Evaluating Language-to-Code Generation Capabilities of Large Language Models (2023), arXiv, Ni, Ansong; Yin, Pengcheng; Zhao, Yilun; Riddell, Martin; Feng, Troy; Shen, Rui; Yin, Stephen; Liu, Ye; Yavuz, Semih; Xiong, Caiming; Joty, Shafiq; Zhou, Yingbo; Radev, Dragomir; Cohan, Arman.
  • Language models of code are few-shot commonsense learners (2022), EMNLP, Madaan, Aman; Zhou, Shuyan; Alon, Uri; Yang, Yiming; Neubig, Graham.
  • Large Language Model Programs (2023), arXiv, Schlag, Imanol; Sukhbaatar, Sainbayar; Celikyilmaz, Asli; Yih, Wen-tau; Weston, Jason; Schmidhuber, Jürgen; Li, Xian.
  • Large Language Models are Few-Shot Summarizers: Multi-Intent Comment Generation via In-Context Learning (2024), ICSE, Geng, Mingyang; Wang, Shangwen; Dong, Dezun; Wang, Haotian; Li, Ge; Jin, Zhi; Mao, Xiaoguang; Liao, Xiangke.
  • Large Language Models Are Human-Level Prompt Engineers (2023), arXiv, Zhou, Yongchao; Muresanu, Andrei Ioan; Han, Ziwen; Paster, Keiran; Pitis, Silviu; Chan, Harris; Ba, Jimmy.
  • Large Language Models Are State-of-the-Art Evaluators of Code Generation (ICE-Score: Instructing Large Language Models to Evaluate Code) (2023), arXiv, Zhuo, Terry Yue.
  • Large Language Models of Code Fail at Completing Code with Potential Bugs (2024), NeurIPS, Dinh, Tuan; Zhao, Jinman; Tan, Samson; Negrinho, Renato; Lausen, Leonard; Zha, Sheng; Karypis, George.
  • Learning and evaluating contextual embedding of source code (2020), ICML, Kanade, Aditya; Maniatis, Petros; Balakrishnan, Gogul; Shi, Kensen.
  • Learning in the Wild: Towards Leveraging Unlabeled Data for Effectively Tuning Pre-trained Code Models (2024), ICSE, Gao, Shuzheng; Mao, Wenxin; Gao, Cuiyun; Li, Li; Hu, Xing; Xia, Xin; Lyu, Michael R.
  • Learning Performance-Improving Code Edits (2023), arXiv, Shypula, Alexander; Madaan, Aman; Zeng, Yimeng; Alon, Uri; Gardner, Jacob; Hashemi, Milad; Neubig, Graham; Ranganathan, Parthasarathy; Bastani, Osbert; Yazdanbakhsh, Amir.
  • Learning to Predict User-Defined Types (2022), TSE, Jesse, Kevin; Devanbu, Premkumar T.; Sawant, Anand.
  • Less is More: Summary of Long Instructions is Better for Program Synthesis (2022), EMNLP, Kuznia, Kirby; Mishra, Swaroop; Parmar, Mihir; Baral, Chitta.
  • LeTI: Learning to Generate from Textual Interactions (2023), arXiv, Wang, Xingyao; Peng, Hao; Jabbarvand, Reyhaneh; Ji, Heng.
  • Leveraging Print Debugging to Improve Code Generation in Large Language Models (2024), arXiv, Hu, Xueyu; Kuang, Kun; Sun, Jiankai; Yang, Hongxia; Wu, Fei.
  • LLM is Like a Box of Chocolates: the Non-determinism of ChatGPT in Code Generation (2023), arXiv, Ouyang, Shuyin; Zhang, Jie M.; Harman, Mark; Wang, Meng.
  • LLM4TDD: Best Practices for Test Driven Development Using Large Language Models (2023), arXiv, Piya, Sanyogita; Sullivan, Allison.
  • LLM-Assisted Code Cleaning For Training Accurate Code Generators (2023), arXiv, Jain, Naman; Zhang, Tianjun; Chiang, Wei-Lin; Gonzalez, Joseph E.; Sen, Koushik; Stoica, Ion.
  • LLMatic: Neural Architecture Search via Large Language Models and Quality-Diversity Optimization (2023), arXiv, Nasir, Muhammad U.; Earle, Sam; Togelius, Julian; James, Steven; Cleghorn, Christopher.
  • LongCoder: A Long-Range Pre-trained Language Model for Code Completion (2023), ICML, Guo, Daya; Xu, Canwen; Duan, Nan; Yin, Jian; McAuley, Julian.
  • Magicoder: Source Code Is All You Need (2023), arXiv, Wei, Yuxiang; Wang, Zhe; Liu, Jiawei; Ding, Yifeng; Zhang, Lingming.
  • Making the most of small Software Engineering datasets with modern machine learning (2021), TSE, Prenner, Julian Aron Aron; Robbes, Romain.
  • Measuring and Mitigating Constraint Violations of In-Context Learning for Utterance-to-API Semantic Parsing (2023), arXiv, Wang, Shufan; Jean, Sebastien; Sengupta, Sailik; Gung, James; Pappas, Nikolaos; Zhang, Yi.
  • Measuring coding challenge competence with apps (2021), NeurIPS, Hendrycks, Dan; Basart, Steven; Kadavath, Saurav; Mazeika, Mantas; Arora, Akul; Guo, Ethan; Burns, Collin; Puranik, Samir; He, Horace; Song, Dawn; Steinhardt, Jacob.
  • MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework (2023), arXiv, Hong, Sirui; Zhuge, Mingchen; Chen, Jonathan; Zheng, Xiawu; Cheng, Yuheng; Zhang, Ceyao; Wang, Jinlin; Wang, Zili; Yau, Steven Ka Shing; Lin, Zijuan; Zhou, Liyang; Ran, Chenyu; Xiao, Lingfeng; Wu, Chenglin; Schmidhuber, Jürgen.
  • MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tasks (2023), arXiv, Li, Jingyao; Chen, Pengguang; Jia, Jiaya.
  • Multilingual Adapter-based Knowledge Aggregation on Code Summarization for Low-Resource Languages (2023), arXiv, Saberi, Iman; Fard, Fatemeh; Chen, Fuxiang.
  • Multilingual Code Co-Evolution Using Large Language Models (2023), arXiv, Zhang, Jiyang; Nie, Pengyu; Li, Junyi Jessy; Gligoric, Milos.
  • MultiPL-E: A Scalable and Polyglot Approach to Benchmarking Neural Code Generation (2023), TSE, Cassano, Federico; Gouwar, John; Nguyen, Daniel; Nguyen, Sydney; Phipps-Costin, Luna; Pinckney, Donald; Yee, Ming-Ho; Zi, Yangtian; Anderson, Carolyn Jane; Feldman, Molly Q; Guha, Arjun; Greenberg, Michael; Jangda, Abhinav.
  • Multi-task learning based pre-trained language model for code completion (2020), ASE, Liu, Fang; Li, Ge; Zhao, Yunfei; Jin, Zhi.
  • Natural Language Commanding via Program Synthesis (2023), arXiv, Gandhi, Apurva; Nguyen, Thong Q.; Jiao, Huitian; Steen, Robert; Bhatawdekar, Ameya.
  • Natural Language Generation and Understanding of Big Code for AI-Assisted Programming: A Review (2023), arXiv, Wong, Man Fai; Guo, Shangxin; Hang, Ching Nam; Ho, Siu Wai; Tan, Chee Wei.
  • Natural Language to Code: How Far Are We? (2023), ESEC/FSE, Wang, Shangwen; Geng, Mingyang; Lin, Bo; Sun, Zhensu; Wen, Ming; Liu, Yepang; Li, Li; Bissyandé, Tegawendé F.; Mao, Xiaoguang.
  • No Need to Lift a Finger Anymore? Assessing the Quality of Code Generation by ChatGPT (2023), arXiv, Liu, Zhijie; Tang, Yutian; Luo, Xiapu; Zhou, Yuming; Zhang, Liang Feng.
  • Nova+: Generative Language Models for Binaries (2023), arXiv, Jiang, Nan; Wang, Chengxiao; Liu, Kevin; Xu, Xiangzhe; Tan, Lin; Zhang, Xiangyu.
  • Novel Preprocessing Technique for Data Embedding in Engineering Code Generation Using Large Language Model (Applications of Large Language Models in Data Processing: Innovative Approaches to Segmenting and Renewing Information) (2023), arXiv, Lin, Yu-Chen; Kumar, Akhilesh; Chang, Norman; Zhang, Wenliang; Zakir, Muhammad; Apte, Rucha; He, Haiyang; Wang, Chao; Jang, Jyh-Shing Roger.
  • On Contrastive Learning of Semantic Similarity for Code to Code Search (2023), arXiv, Saieva, Anthony; Chakraborty, Saikat; Kaiser, Gail.
  • On the Effectiveness of Large Language Models in Domain-Specific Code Generation (2023), TOSEM, Chen, Meng; Zhang, Hongyu; Wan, Chengcheng; Wei, Zhao; Xu, Yong; Wang, Juhong; Gu, Xiaodong.
  • On the effectiveness of transfer learning for code search (2022), TSE, Salza, Pasquale; Schwizer, Christoph; Gu, Jian; Gall, Harald C.
  • On the Reliability and Explainability of Language Models for Program Generation (2024), TOSEM, Liu, Yue; Tantithamthavorn, Chakkrit; Liu, Yonghui; Li, Li.
  • On the Robustness of Code Generation Techniques: An Empirical Study on GitHub Copilot (2023), ICSE, Mastropaolo, Antonio; Pascarella, Luca; Guglielmi, Emanuela; Ciniselli, Matteo; Scalabrino, Simone; Oliveto, Rocco; Bavota, Gabriele.
  • On the transferability of pre-trained language models for low-resource programming languages (2022), ICPC, Chen, Fuxiang; Fard, Fatemeh H.; Lo, David; Bryksin, Timofey.
  • On the Usage of Continual Learning for Out-of-Distribution Generalization in Pre-trained Language Models of Code (2023), ESEC/FSE, Weyssow, Martin; Zhou, Xin; Kim, Kisub; Lo, David; Sahraoui, Houari.
  • One Adapter for All Programming Languages? Adapter Tuning for Code Search and Summarization (2023), ICSE, Wang, Deze; Chen, Boxing; Li, Shanshan; Luo, Wei; Peng, Shaoliang; Dong, Wei; Liao, Xiangke.
  • OOP: Object-Oriented Programming Evaluation Benchmark for Large Language Models (2024), arXiv, Wang, Shuai; Ding, Liang; Shen, Li; Luo, Yong; Du, Bo; Tao, Dacheng.
  • Outline, then details: Syntactically guided coarse-to-fine code generation (2023), ICML, Zheng, Wenqing; Sharan, S. P.; Jaiswal, Ajay Kumar; Wang, Kevin; Xi, Yihan; Xu, Dejia; Wang, Zhangyang.
  • PAC Prediction Sets for Large Language Models of Code (2023), arXiv, Khakhar, Adam; Mell, Stephen; Bastani, Osbert.
  • Piloting Copilot and Codex: Hot Temperature, Cold Prompts, or Black Magic? (2022), arXiv, Döderlein, Jean-Baptiste; Acher, Mathieu; Khelladi, Djamel Eddine; Combemale, Benoit.
  • Pop Quiz! Do Pre-trained Code Models Possess Knowledge of Correct API Names? (2023), arXiv, Zhuo, Terry Yue; Du, Xiaoning; Xing, Zhenchang; Sun, Jiamou; Quan, Haowei; Li, Li; Zhu, Liming.
  • Private-Library-Oriented Code Generation with Large Language Models (2023), arXiv, Zan, Daoguang; Chen, Bei; Gong, Yongshun; Cao, Junzhi; Zhang, Fengji; Wu, Bingchao; Guan, Bei; Yin, Yilong; Wang, Yongji.
  • Prompt Engineering or Fine Tuning: An Empirical Assessment of Large Language Models in Automated Software Engineering Tasks (2023), arXiv, Shin, Jiho; Tang, Clark; Mohati, Tahmineh; Nayebi, Maleknaz; Wang, Song; Hemmati, Hadi.
  • PTM-APIRec: Leveraging Pre-trained Models of Source Code in API Recommendation (2023), TOSEM, Li, Zhihao; Li, Chuanyi; Tang, Ze; Huang, Wanhong; Ge, Jidong; Luo, Bin; Ng, Vincent; Wang, Ting; Hu, Yucheng; Zhang, Xiaopeng.
  • QIGen: Generating Efficient Kernels for Quantized Inference on Large Language Models (2023), arXiv, Pegolotti, Tommaso; Frantar, Elias; Alistarh, Dan; Püschel, Markus.
  • Rapid: Zero-shot Domain Adaptation for Code Search with Pre-trained Models (2024), TOSEM, Fan, Guodong; Chen, Shizhan; Gao, Cuiyun; Xiao, Jianmao; Zhang, Tao; Feng, Zhiyong.
  • ReCode: Robustness Evaluation of Code Generation Models (2022), arXiv, Wang, Shiqi; Li, Zheng; Qian, Haifeng; Yang, Chenghao; Wang, Zijian; Shang, Mingyue; Kumar, Varun; Tan, Samson; Ray, Baishakhi; Bhatia, Parminder; Nallapati, Ramesh; Ramanathan, Murali Krishna; Roth, Dan; Xiang, Bing.
  • Refining ChatGPT-Generated Code: Characterizing and Mitigating Code Quality Issues (2023), arXiv, Liu, Yue; Le-Cong, Thanh; Widyasari, Ratnadira; Tantithamthavorn, Chakkrit; Li, Li; Le, Xuan-Bach D.; Lo, David.
  • RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems (2023), arXiv, Liu, Tianyang; Xu, Canwen; McAuley, Julian.
  • Representation Learning for Stack Overflow Posts: How Far are We? (2023), arXiv, He, Junda; Xin, Zhou; Xu, Bowen; Zhang, Ting; Kim, Kisub; Yang, Zhou; Thung, Ferdian; Irsan, Ivana; Lo, David.
  • Rewriting the Code: A Simple Method for Large Language Model Augmented Code Search (2024), arXiv, Li, Haochen; Zhou, Xin; Shen, Zhiqi.
  • Selective annotation makes language models better few-shot learners (2022), arXiv, Su, Hongjin; Kasai, Jungo; Wu, Chen Henry; Shi, Weijia; Wang, Tianlu; Xin, Jiayi; Zhang, Rui; Ostendorf, Mari; Zettlemoyer, Luke; Smith, Noah A.; Yu, Tao.
  • Self-collaboration Code Generation via ChatGPT (2023), arXiv, Dong, Yihong; Jiang, Xue; Jin, Zhi; Li, Ge.
  • Self-Edit: Fault-Aware Code Editor for Code Generation (2023), arXiv, Zhang, Kechi; Li, Zhuo; Li, Jia; Li, Ge; Jin, Zhi.
  • SelfEvolve: A Code Evolution Framework via Large Language Models (2023), arXiv, Jiang, Shuyang; Wang, Yuhao; Wang, Yu.
  • Self-planning code generation with large language model (2023), arXiv, Jiang, Xue; Dong, Yihong; Wang, Lecheng; Shang, Qiwei; Li, Ge.
  • Self-Supervised Query Reformulation for Code Search (2023), ESEC/FSE, Mao, Yuetian; Wan, Chengcheng; Jiang, Yuze; Gu, Xiaodong.
  • Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation (2023), arXiv, Zelikman, Eric; Lorch, Eliana; Mackey, Lester; Kalai, Adam Tauman.
  • Semantic Compression With Large Language Models (2023), arXiv, Gilbert, Henry; Sandborn, Michael; Schmidt, Douglas C.; Spencer-Smith, Jesse; White, Jules.
  • SoTaNa: The Open-Source Software Development Assistant (2023), arXiv, Shi, Ensheng; Zhang, Fengji; Wang, Yanlin; Chen, Bei; Du, Lun; Zhang, Hongyu; Han, Shi; Zhang, Dongmei; Sun, Hongbin.
  • SparseCoder: Identifier-Aware Sparse Transformer for File-Level Code Summarization (2024), SANER, Wang, Yanlin; Huang, Yanxian; Guo, Daya; Zhang, Hongyu; Zheng, Zibin.
  • Spt-code: Sequence-to-sequence pre-training for learning source code representations (2022), ICSE, Niu, Changan; Li, Chuanyi; Ng, Vincent; Ge, Jidong; Huang, Liguo; Luo, Bin.
  • Stack Over-Flowing with Results: The Case for Domain-Specific Pre-Training Over One-Size-Fits-All Models (2023), arXiv, Mukherjee, Manisha; Hellendoorn, Vincent J.
  • SteloCoder: a Decoder-Only LLM for Multi-Language to Python Code Translation (2023), arXiv, Pan, Jialing; Sadé, Adrien; Kim, Jin; Soriano, Eric; Sole, Guillem; Flamant, Sylvain.
  • Structured Chain-of-Thought Prompting for Code Generation (2023), arXiv, Li, Jia; Li, Ge; Li, Yongmin; Jin, Zhi.
  • Structured Code Representations Enable Data-Efficient Adaptation of Code Language Models (2024), arXiv, Agarwal, Mayank; Shen, Yikang; Wang, Bailin; Kim, Yoon; Chen, Jie.
  • Studying the usage of text-to-text transfer transformer to support code-related tasks (2021), ICSE, Mastropaolo, Antonio; Scalabrino, Simone; Cooper, Nathan; Nader Palacio, David; Poshyvanyk, Denys; Oliveto, Rocco; Bavota, Gabriele.
  • SUT: Active Defects Probing for Transcompiler Models (2023), EMNLP, Qi, Mengnan; Huang, Yufan; Wang, Maoquan; Yao, Yongqiang; Liu, Zihan; Gu, Bin; Clement, Colin; Sundaresan, Neel.
  • SWE-bench: Can Language Models Resolve Real-World GitHub Issues? (2023), arXiv, Jimenez, Carlos E.; Yang, John; Wettig, Alexander; Yao, Shunyu; Pei, Kexin; Press, Ofir; Narasimhan, Karthik.
  • Teaching Code LLMs to Use Autocompletion Tools in Repository-Level Code Generation (2024), arXiv, Wang, Chong; Zhang, Jian; Feng, Yebo; Li, Tianlin; Sun, Weisong; Liu, Yang; Peng, Xin.
  • Teaching Large Language Models to Self-Debug (2023), arXiv, Chen, Xinyun; Lin, Maxwell; Schärli, Nathanael; Zhou, Denny.
  • Test-Case-Driven Programming Understanding in Large Language Models for Better Code Generation (2023), arXiv, Tian, Zhao; Chen, Junjie.
  • The FormAI Dataset: Generative AI in Software Security Through the Lens of Formal Verification (2023), arXiv, Tihanyi, Norbert; Bisztray, Tamas; Jain, Ridhi; Ferrag, Mohamed Amine; Cordeiro, Lucas C.; Mavroeidis, Vasileios.
  • The potential of LLMs for coding with low-resource and domain-specific programming languages (2023), arXiv, Tarassow, Artur.
  • The Scope of ChatGPT in Software Engineering: A Thorough Investigation (2023), arXiv, Ma, Wei; Liu, Shangqing; Wang, Wenhan; Hu, Qiang; Liu, Ye; Zhang, Cen; Nie, Liming; Liu, Yang.
  • The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation (2023), arXiv, Manh, Dung Nguyen; Hai, Nam Le; Dau, Anh T. V.; Nguyen, Anh Minh; Nghiem, Khanh; Guo, Jin; Bui, Nghi D. Q.
  • Think Outside the Code: Brainstorming Boosts Large Language Models in Code Generation (2023), arXiv, Li, Xin-Ye; Xue, Jiang-Tian; Xie, Zheng; Li, Ming.
  • ToolCoder: Teach Code Generation Models to use API search tools (2023), arXiv, Zhang, Kechi; Zhang, Huangzhao; Li, Ge; Li, Jia; Li, Zhuo; Jin, Zhi.
  • Toward less hidden cost of code completion with acceptance and ranking models (2021), ICSME, Li, Jingxuan; Huang, Rui; Li, Wei; Yao, Kai; Tan, Weiguo.
  • Towards Efficient Fine-tuning of Pre-trained Code Models: An Experimental Study and Beyond (2023), ISSTA, Shi, Ensheng; Wang, Yanlin; Zhang, Hongyu; Du, Lun; Han, Shi; Zhang, Dongmei; Sun, Hongbin.
  • Towards Generating Functionally Correct Code Edits from Natural Language Issue Descriptions (2023), arXiv, Fakhoury, Sarah; Chakraborty, Saikat; Musuvathi, Madan; Lahiri, Shuvendu K.
  • Understanding Large Language Model Based Fuzz Driver Generation (2023), arXiv, Zhang, Cen; Bai, Mingqiang; Zheng, Yaowen; Li, Yeting; Xie, Xiaofei; Li, Yuekang; Ma, Wei; Sun, Limin; Liu, Yang.
  • Understanding Programs by Exploiting (Fuzzing) Test Cases (2023), arXiv, Zhao, Jianyu; Rong, Yuyang; Guo, Yiwen; He, Yifeng; Chen, Hao.
  • Understanding the effectiveness of large language models in code translation (2023), ICSE, Pan, Rangeet; Ibrahimzada, Ali Reza; Krishna, Rahul; Sankar, Divya; Wassi, Lambert Pouguem; Merler, Michele; Sobolev, Boris; Pavuluri, Raju; Sinha, Saurabh; Jabbarvand, Reyhaneh.
  • UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition (2023), arXiv, Zhou, Wenxuan; Zhang, Sheng; Gu, Yu; Chen, Muhao; Poon, Hoifung.
  • Using Transfer Learning for Code-Related Tasks (2022), TSE, Mastropaolo, Antonio; Cooper, Nathan; Palacio, David Nader; Scalabrino, Simone; Poshyvanyk, Denys; Oliveto, Rocco; Bavota, Gabriele.
  • VeriGen: A Large Language Model for Verilog Code Generation (2023), arXiv, Thakur, Shailja; Ahmad, Baleegh; Pearce, Hammond; Tan, Benjamin; Dolan-Gavitt, Brendan; Karri, Ramesh; Garg, Siddharth.
  • What do they capture?: a structural analysis of pre-trained language models for source code (2022), ICSE, Wan, Yao; Zhao, Wei; Zhang, Hongyu; Sui, Yulei; Xu, Guandong; Jin, Hai.
  • What Is the Intended Usage Context of This Model? An Exploratory Study of Pre-Trained Models on Various Model Repositories (2023), TOSEM, Gong, Lina; Zhang, Jingxuan; Wei, Mingqiang; Zhang, Haoxiang; Huang, Zhiqiu.
  • When Language Model Meets Private Library (2022), EMNLP, Zan, Daoguang; Chen, Bei; Lin, Zeqi; Guan, Bei; Wang, Yongji; Lou, Jian-Guang.
  • When Neural Code Completion Models Size up the Situation: Attaining Cheaper and Faster Completion through Dynamic Model Inference (2024), ICSE, Sun, Zhensu; Du, Xiaoning; Song, Fu; Wang, Shangwen; Li, Li.
  • Which is a better programming assistant? A comparative study between chatgpt and stack overflow (2023), arXiv, Liu, Jinrun; Tang, Xinyu; Li, Linlin; Chen, Panpan; Liu, Yepang.
  • WizardCoder: Empowering Code Large Language Models with Evol-Instruct (2023), arXiv, Luo, Ziyang; Xu, Can; Zhao, Pu; Sun, Qingfeng; Geng, Xiubo; Hu, Wenxiang; Tao, Chongyang; Ma, Jing; Lin, Qingwei; Jiang, Daxin.
  • xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval (2023), arXiv, Khan, Mohammad Abdullah Matin; Bari, M. Saiful; Do, Xuan Long; Wang, Weishi; Parvez, Md Rizwan; Joty, Shafiq.
  • ZS4C: Zero-Shot Synthesis of Compilable Code for Incomplete Code Snippets using ChatGPT (2024), arXiv, Kabir, Azmain; Wang, Shaowei; Tian, Yuan; Chen, Tse-Hsun; Asaduzzaman, Muhammad; Zhang, Wenbin.

Software quality assurance

  • A New Era in Software Security: Towards Self-Healing Software via Large Language Models and Formal Verification (2023), arXiv, Charalambous, Yiannis; Tihanyi, Norbert; Jain, Ridhi; Sun, Youcheng; Ferrag, Mohamed Amine; Cordeiro, Lucas C.
  • A Preliminary Evaluation of LLM-Based Fault Localization (2023), arXiv, Kang, Sungmin; An, Gabin; Yoo, Shin.
  • Adaptive test generation using a large language model (2023), arXiv, Schäfer, Max; Nadi, Sarah; Eghbali, Aryaz; Tip, Frank.
  • Algo: Synthesizing algorithmic programs with generated oracle verifiers (2023), NeurIPS, Zhang, Kexun; Wang, Danqing; Xia, Jingtao; Wang, William Yang; Li, Lei.
  • An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation (2023), TSE, Schäfer, Max; Nadi, Sarah; Eghbali, Aryaz; Tip, Frank.
  • Augmenting Greybox Fuzzing with Generative AI (2023), arXiv, Hu, Jie; Zhang, Qian; Yin, Heng.
  • Automatic Generation of Test Cases based on Bug Reports: a Feasibility Study with Large Language Models (2023), arXiv, Plein, Laura; Ouédraogo, Wendkûuni C.; Klein, Jacques; Bissyandé, Tegawendé F.
  • Autonomous Large Language Model Agents Enabling Intent-Driven Mobile GUI Testing (2023), arXiv, Yoon, Juyeon; Feldt, Robert; Yoo, Shin.
  • Baldur: Whole-Proof Generation and Repair with Large Language Models (2023), ESEC/FSE, First, Emily; Rabe, Markus N.; Ringer, Talia; Brun, Yuriy.
  • Can large language models find and fix vulnerable software? (2023), arXiv, Noever, David.
  • Can Large Language Models Reason about Program Invariants? (2023), ICML, Pei, Kexin; Bieber, David; Shi, Kensen; Sutton, Charles; Yin, Pengcheng.
  • Can Large Language Models Write Good Property-Based Tests? (2023), arXiv, Vikram, Vasudev; Lemieux, Caroline; Padhye, Rohan.
  • ChatGPT and Human Synergy in Black-Box Testing: A Comparative Analysis (2024), arXiv, Kirinuki, Hiroyuki; Tanno, Haruto.
  • ChatGPT is a Remarkable Tool—For Experts (2023), arXiv, Azaria, Amos; Azoulay, Rina; Reches, Shulamit.
  • ChatGPT vs SBST: A Comparative Assessment of Unit Test Suite Generation (2023), arXiv, Tang, Yutian; Liu, Zhijie; Zhou, Zhichao; Luo, Xiapu.
  • ChatUniTest: a ChatGPT-based automated unit test generation tool (2023), arXiv, Xie, Zhuokui; Chen, Yinghao; Zhi, Chen; Deng, Shuiguang; Yin, Jianwei.
  • ContraBERT: Enhancing Code Pre-Trained Models via Contrastive Learning (2023), ICSE, Liu, Shangqing; Wu, Bozhi; Xie, Xiaofei; Meng, Guozhu; Liu, Yang.
  • CSGVD: a deep learning approach combining sequence and graph embedding for source code vulnerability detection (2023), JSS, Tang, Wei; Tang, Mingwei; Ban, Minchao; Zhao, Ziguo; Feng, Mingjun.
  • Dataflow Analysis-Inspired Deep Learning for Efficient Vulnerability Detection (2024), ICSE, Steenhoek, Benjamin; Gao, Hongyang; Le, Wei.
  • Detecting Phishing Sites Using ChatGPT (2023), arXiv, Koide, Takashi; Fukushi, Naoki; Nakano, Hiroki; Chiba, Daiki.
  • DexBERT: Effective, Task-Agnostic and Fine-grained Representation Learning of Android Bytecode (2023), TSE, Sun, Tiezhu; Allix, Kevin; Kim, Kisub; Zhou, Xin; Kim, Dongsun; Lo, David; Bissyandé, Tegawendé F.; Klein, Jacques.
  • DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection (2023), arXiv, Chen, Yizheng; Ding, Zhoujie; Chen, Xinyun; Wagner, David.
  • Domain Adaptation for Deep Unit Test Case Generation (2023), arXiv, Shin, Jiho; Hashtroudi, Sepehr; Hemmati, Hadi; Wang, Song.
  • E&V: Prompting Large Language Models to Perform Static Analysis by Pseudo-code Execution and Verification (2023), arXiv, Hao, Yu; Chen, Weiteng; Zhou, Ziqiao; Cui, Weidong.
  • Effective Test Generation Using Pre-trained Large Language Models and Mutation Testing (2023), arXiv, Dakhel, Arghavan Moradi; Nikanjam, Amin; Majdinasab, Vahid; Khomh, Foutse; Desmarais, Michel C.
  • Efficient Mutation Testing via Pre-Trained Language Models (2023), arXiv, Khanfir, Ahmed; Degiovanni, Renzo; Papadakis, Mike; Traon, Yves Le.
  • Exploring the Effectiveness of Large Language Models in Generating Unit Tests (2023), arXiv, Siddiq, Mohammed Latif; Santos, Joanna C. S.; Tanvir, Ridwanul Hasan; Ulfat, Noshin; Rifat, Fahmid Al; Lopes, Vinicius Carvalho.
  • Fast changeset-based bug localization with BERT (2022), ICSE, Ciborowska, Agnieszka; Damevski, Kostadin.
  • Fill in the Blank: Context-aware Automated Text Input Generation for Mobile GUI Testing (2023), ICSE, Liu, Zhe; Chen, Chunyang; Wang, Junjie; Che, Xing; Huang, Yuekai; Hu, Jun; Wang, Qing.
  • Finetuning Large Language Models for Vulnerability Detection (2024), arXiv, Shestov, Alexey; Cheshkov, Anton; Levichev, Rodion; Mussabayev, Ravil; Zadorozhny, Pavel; Maslov, Evgeny; Vadim, Chibirev; Bulychev, Egor.
  • Flakify: a black-box, language model-based predictor for flaky tests (2022), TSE, Fatima, Sakina; Ghaleb, Taher A.; Briand, Lionel.
  • Fuzz4All: Universal Fuzzing with Large Language Models (2024), ICSE, Xia, Chunqiu Steven; Paltenghi, Matteo; Tian, Jia Le; Pradel, Michael; Zhang, Lingming.
  • Harnessing the Power of LLM to Support Binary Taint Analysis (2023), arXiv, Liu, Puzhuo; Sun, Chengnian; Zheng, Yaowen; Feng, Xuan; Qin, Chuan; Wang, Yuncheng; Li, Zhi; Sun, Limin.
  • How Far Have We Gone in Vulnerability Detection Using Large Language Models (2023), arXiv, Gao, Zeyu; Wang, Hao; Zhou, Yuchen; Zhu, Wenyu; Zhang, Chao.
  • Large language models are edge-case fuzzers: Testing deep learning libraries via fuzzgpt (2023), arXiv, Deng, Yinlin; Xia, Chunqiu Steven; Yang, Chenyuan; Zhang, Shizhuo Dylan; Yang, Shujing; Zhang, Lingming.
  • Large Language Models are Zero-Shot Fuzzers: Fuzzing Deep-Learning Libraries via Large Language Models (2023), ISSTA, Deng, Yinlin; Xia, Chunqiu Steven; Peng, Haoran; Yang, Chenyuan; Zhang, Lingming.
  • Large Language Models for Test-Free Fault Localization (2024), arXiv, Yang, Aidan Z. H.; Martins, Ruben; Goues, Claire Le; Hellendoorn, Vincent J.
  • Large Language Models in Fault Localisation (2023), arXiv, Wu, Yonghao; Li, Zheng; Zhang, Jie M.; Papadakis, Mike; Harman, Mark; Liu, Yong.
  • Learning in the Wild: Towards Leveraging Unlabeled Data for Effectively Tuning Pre-trained Code Models (2024), ICSE, Gao, Shuzheng; Mao, Wenxin; Gao, Cuiyun; Li, Li; Hu, Xing; Xia, Xin; Lyu, Michael R.
  • LEVER: Learning to Verify Language-to-Code Generation with Execution (2023), ICML, Ni, Ansong; Iyer, Srini; Radev, Dragomir; Stoyanov, Ves; Yih, Wen-tau; Wang, Sida I; Lin, Xi Victoria.
  • LLM4Vuln: A Unified Evaluation Framework for Decoupling and Enhancing LLMs' Vulnerability Reasoning (2024), arXiv, Sun, Yuqiang; Wu, Daoyuan; Xue, Yue; Liu, Han; Ma, Wei; Zhang, Lyuye; Shi, Miaolei; Liu, Yang.
  • LLM-based Resource-Oriented Intention Inference for Static Resource Leak Detection (Boosting Static Resource Leak Detection via LLM-based Resource-Oriented Intention Inference) (2023), arXiv, Wang, Chong; Liu, Jianan; Peng, Xin; Liu, Yang; Lou, Yiling.
  • LmPa: Improving Decompilation by Synergy of Large Language Model and Program Analysis (2023), arXiv, Xu, Xiangzhe; Zhang, Zhuo; Feng, Shiwei; Ye, Yapeng; Su, Zian; Jiang, Nan; Cheng, Siyuan; Tan, Lin; Zhang, Xiangyu.
  • Natural Language Generation and Understanding of Big Code for AI-Assisted Programming: A Review (2023), arXiv, Wong, Man Fai; Guo, Shangxin; Hang, Ching Nam; Ho, Siu Wai; Tan, Chee Wei.
  • No More Manual Tests? Evaluating and Improving ChatGPT for Unit Test Generation (2023), arXiv, Yuan, Zhiqiang; Lou, Yiling; Liu, Mingwei; Ding, Shiji; Wang, Kaixin; Chen, Yixuan; Peng, Xin.
  • Nuances are the Key: Unlocking ChatGPT to Find Failure-Inducing Tests with Differential Prompting (2023), ASE, Li, Tsz-On; Zong, Wenxi; Wang, Yibo; Tian, Haoye; Wang, Ying; Cheung, Shing-Chi; Kramer, Jeff.
  • Pre-training Code Representation with Semantic Flow Graph for Effective Bug Localization (2023), ESEC/FSE, Du, Yali; Yu, Zhongxing.
  • Prompt-Enhanced Software Vulnerability Detection Using ChatGPT (2023), arXiv, Zhang, Chenyuan; Liu, Hao; Zeng, Jiutian; Yang, Kejing; Li, Yuhong; Li, Hui.
  • Prompting Is All You Need: Automated Android Bug Replay with Large Language Models (2023), ICSE, Feng, Sidong; Chen, Chunyang.
  • Reinforcement Learning from Automatic Feedback for High-Quality Unit Test Generation (2023), arXiv, Steenhoek, Benjamin; Tufano, Michele; Sundaresan, Neel; Svyatkovskiy, Alexey.
  • Selene: Pioneering Automated Proof in Software Verification (2024), arXiv, Zhang, Lichen; Lu, Shuai; Duan, Nan.
  • Silent Vulnerable Dependency Alert Prediction with Vulnerability Key Aspect Explanation (2023), ICSE, Sun, Jiamou; Xing, Zhenchang; Lu, Qinghua; Xu, Xiwei; Zhu, Liming; Hoang, Thong; Zhao, Dehai.
  • SkipAnalyzer: An Embodied Agent for Code Analysis with Large Language Models (SkipAnalyzer: A Tool for Static Code Analysis with Large Language Models) (2023), arXiv, Mohajer, Mohammad Mahdi; Aleithan, Reem; Harzevili, Nima Shiri; Wei, Moshi; Belle, Alvine Boaye; Pham, Hung Viet; Wang, Song.
  • Testing the Limits: Unusual Text Inputs Generation for Mobile App Crash Detection with Large Language Model (2023), ICSE, Liu, Zhe; Chen, Chunyang; Wang, Junjie; Chen, Mengzhuo; Wu, Boyu; Che, Xing; Wang, Dandan; Wang, Qing.
  • The EarlyBIRD Catches the Bug: On Exploiting Early Layers of Encoder Models for More Efficient Code Classification (2023), ESEC/FSE, Grishina, Anastasiia; Hort, Max; Moonen, Leon.
  • The FormAI Dataset: Generative AI in Software Security Through the Lens of Formal Verification (2023), arXiv, Tihanyi, Norbert; Bisztray, Tamas; Jain, Ridhi; Ferrag, Mohamed Amine; Cordeiro, Lucas C.; Mavroeidis, Vasileios.
  • The Program Testing Ability of Large Language Models for Code (2023), arXiv, Xiong, Weimin; Guo, Yiwen; Chen, Hao.
  • Too Few Bug Reports? Exploring Data Augmentation for Improved Changeset-based Bug Localization (2023), arXiv, Ciborowska, Agnieszka; Damevski, Kostadin.
  • Transformer-Based Language Models for Software Vulnerability Detection (2022), ACSAC, Thapa, Chandra; Jang, Seung Ick; Ahmed, Muhammad Ejaz; Camtepe, Seyit; Pieprzyk, Josef; Nepal, Surya.
  • Transformer-Based Vulnerability Detection in Code at EditTime: Zero-Shot, Few-Shot, or Fine-Tuning? (2023), arXiv, Chan, Aaron; Kharkar, Anant; Moghaddam, Roshanak Zilouchian; Mohylevskyy, Yevhen; Helyar, Alec; Kamal, Eslam; Elkamhawy, Mohamed; Sundaresan, Neel.
  • Understanding the Effectiveness of Large Language Models in Detecting Security Vulnerabilities (2023), arXiv, Khare, Avishree; Dutta, Saikat; Li, Ziyang; Solko-Breslin, Alaia; Alur, Rajeev; Naik, Mayur.
  • When GPT Meets Program Analysis: Towards Intelligent Detection of Smart Contract Logic Vulnerabilities in GPTScan (2023), arXiv, Sun, Yuqiang; Wu, Daoyuan; Xue, Yue; Liu, Han; Wang, Haijun; Xu, Zhengzi; Xie, Xiaofei; Liu, Yang.
  • White-box Compiler Fuzzing Empowered by Large Language Models (2023), arXiv, Yang, Chenyuan; Deng, Yinlin; Lu, Runyu; Yao, Jiayi; Liu, Jiawei; Jabbarvand, Reyhaneh; Zhang, Lingming.
  • XGV-BERT: Leveraging Contextualized Language Model and Graph Neural Network for Efficient Software Vulnerability Detection (2023), arXiv, Quan, Vu Le Anh; Phat, Chau Thuan; Van Nguyen, Kiet; Duy, Phan The; Pham, Van-Hau.

Software maintenance

  • A Chain of AI-based Solutions for Resolving FQNs and Fixing Syntax Errors in Partial Code (2023), arXiv, Huang, Qing; Zhu, Jiahui; Xing, Zhenchang; Jin, Huan; Wang, Changjing; Xu, Xiwei.
  • A Light Bug Triage Framework for Applying Large Pre-trained Language Model (2022), ASE, Lee, Jaehyung; Han, Kisun; Yu, Hwanjo.
  • A Multi-Step Learning Approach to Assist Code Review (2023), SANER, Sghaier, Oussama Ben; Sahraoui, Houari.
  • A New Era in Software Security: Towards Self-Healing Software via Large Language Models and Formal Verification (2023), arXiv, Charalambous, Yiannis; Tihanyi, Norbert; Jain, Ridhi; Sun, Youcheng; Ferrag, Mohamed Amine; Cordeiro, Lucas C.
  • A Novel Approach for Automatic Program Repair using Round-Trip Translation with Large Language Models (2024), arXiv, Ruiz, Fernando Vallecillos; Grishina, Anastasiia; Hort, Max; Moonen, Leon.
  • A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT (2023), arXiv, White, Jules; Fu, Quchen; Hays, Sam; Sandborn, Michael; Olea, Carlos; Gilbert, Henry; Elnashar, Ashraf; Spencer-Smith, Jesse; Schmidt, Douglas C.
  • A Study on Prompt Design, Advantages and Limitations of ChatGPT for Deep Learning Program Repair (2023), arXiv, Cao, Jialun; Li, Meiziniu; Wen, Ming; Cheung, Shing-chi.
  • Achieving Reliable Sentiment Analysis in the Software Engineering Domain Using BERT (2020), ICSME, Biswas, Eeshita; Karabulut, Mehmet Efruz; Pollock, Lori; Vijay-Shanker, K.
  • Addressing Compiler Errors: Stack Overflow or Large Language Models? (2023), arXiv, Widjojo, Patricia; Treude, Christoph.
  • An Analysis of the Automatic Bug Fixing Performance of ChatGPT (2023), arXiv, Sobania, Dominik; Briesch, Martin; Hanna, Carol; Petke, Justyna.
  • An empirical study of ChatGPT-3.5 on question answering and code maintenance (2023), arXiv, Kabir, Md Mahir Asef; Hassan, Sk Adnan; Wang, Xiaoyin; Wang, Ying; Yu, Hai; Meng, Na.
  • An Exploratory Study on Code Attention in BERT (2022), ICPC, Sharma, Rishab; Chen, Fuxiang; Fard, Fatemeh; Lo, David.
  • APPT: Boosting Automated Patch Correctness Prediction via Fine-tuning Pre-trained Models (2024), TSE, Zhang, Quanjun; Fang, Chunrong; Sun, Weisong; Liu, Yan; He, Tieke; Hao, Xiaodong; Chen, Zhenyu.
  • Aspect-Based API Review Classification: How Far Can Pre-trained Transformer Model Go? (2022), SANER, Yang, Chengran; Xu, Bowen; Khan, Junaed Younus; Uddin, Gias; Han, Donggyun; Yang, Zhou; Lo, David.
  • Assess and Summarize: Improve Outage Understanding with Large Language Models (2023), ESEC/FSE, Jin, Pengxiang; Zhang, Shenglin; Ma, Minghua; Li, Haozhe; Kang, Yu; Li, Liqun; Liu, Yudong; Qiao, Bo; Zhang, Chaoyun; Zhao, Pu; He, Shilin; Sarro, Federica; Dang, Yingnong; Rajmohan, Saravan; Lin, Qingwei; Zhang, Dongmei.
  • AUGER: Automatically Generating Review Comments with Pre-training Models (2022), ESEC/FSE, Li, Lingwei; Yang, Li; Jiang, Huaxi; Yan, Jun; Luo, Tiejian; Hua, Zihan; Liang, Geng; Zuo, Chun.
  • Augmenting Commit Classification by Using Fine-Grained Source Code Changes and a Pre-trained Deep Neural Language Model (2021), IST, Ghadhab, Lobna; Jenhani, Ilyes; Mkaouer, Mohamed Wiem; Ben Messaoud, Montassar.
  • Automated Bug Generation in the era of Large Language Models (2023), arXiv, Ibrahimzada, Ali Reza; Chen, Yang; Rong, Ryan; Jabbarvand, Reyhaneh.
  • Automated Repair of Programs from Large Language Models (2022), ICSE, Fan, Zhiyu; Gao, Xiang; Mirchev, Martin; Roychoudhury, Abhik; Tan, Shin Hwei.
  • Automated Summarization of Stack Overflow Posts (2023), ICSE, Kou, Bonan; Chen, Muhao; Zhang, Tianyi.
  • AutoScrum: Automating Project Planning Using Large Language Models (2023), arXiv, Schroder, Martin.
  • BERT- and TF-IDF-based Feature Extraction for Long-lived Bug Prediction in FLOSS: A Comparative Study (2023), IST, Gomes, Luiz; Da Silva Torres, Ricardo; Côrtes, Mario Lúcio.
  • Boosting Automated Patch Correctness Prediction via Pre-trained Language Model (2023), arXiv, Zhang, Quanjun; Fang, Chunrong; Sun, Weisong; Liu, Yan; He, Tieke; Hao, Xiaodong; Chen, Zhenyu.
  • CIRCLE: Continual Repair across Programming Languages (2022), ISSTA, Yuan, Wei; Zhang, Quanjun; He, Tieke; Fang, Chunrong; Hung, Nguyen Quoc Viet; Hao, Xiaodong; Yin, Hongzhi.
  • Code Security Vulnerability Repair Using Reinforcement Learning with Large Language Models (2024), arXiv, Islam, Nafis Tanveer; Karkevandi, Mohammad Bahrami; Najafirad, Peyman.
  • CoditT5: Pretraining for Source Code and Natural Language Editing (2022), ASE, Zhang, Jiyang; Panthaplackel, Sheena; Nie, Pengyu; Li, Junyi Jessy; Gligoric, Milos.
  • Constructing Effective In-Context Demonstration for Code Intelligence Tasks: An Empirical Study (2023), arXiv, Gao, Shuzheng; Wen, Xin-Cheng; Gao, Cuiyun; Wang, Wenxuan; Lyu, Michael R.
  • ContraBERT: Enhancing Code Pre-Trained Models via Contrastive Learning (2023), ICSE, Liu, Shangqing; Wu, Bozhi; Xie, Xiaofei; Meng, Guozhu; Liu, Yang.
  • Conversational Automated Program Repair (2023), arXiv, Xia, Chunqiu Steven; Zhang, Lingming.
  • Copiloting the Copilots: Fusing Large Language Models with Completion Engines for Automated Program Repair (2023), ESEC/FSE, Wei, Yuxiang; Xia, Chunqiu Steven; Zhang, Lingming.
  • CrashTranslator: Automatically Reproducing Mobile Application Crashes Directly from Stack Trace (2024), ICSE, Huang, Yuchao; Wang, Junjie; Liu, Zhe; Wang, Yawen; Wang, Song; Chen, Chunyang; Hu, Yuanzhe; Wang, Qing.
  • Cupid: Leveraging ChatGPT for More Accurate Duplicate Bug Report Detection (2023), TOSEM, Zhang, Ting; Irsan, Ivana Clairine; Thung, Ferdian; Lo, David.
  • DebugBench: Evaluating Debugging Capability of Large Language Models (2024), arXiv, Tian, Runchu; Ye, Yining; Qin, Yujia; Cong, Xin; Lin, Yankai; Pan, Yinxu; Wu, Yesai; Liu, Zhiyuan; Sun, Maosong.
  • Domain Knowledge Matters: Improving Prompts with Fix Templates for Repairing Python Type Errors (2024), ICSE, Peng, Yun; Gao, Shuzheng; Gao, Cuiyun; Huo, Yintong; Lyu, Michael.
  • Duplicate Bug Report Detection by Using Sentence Embedding and Fine-tuning (2021), ICSME, Isotani, Haruna; Washizaki, Hironori; Fukazawa, Yoshiaki; Nomoto, Tsutomu; Ouji, Saori; Saito, Shinobu.
  • Enhancing Automated Program Repair through Fine-tuning and Prompt Engineering (2023), arXiv, Paul, Rishov; Hossain, Md Mohib; Siddiq, Mohammed Latif; Hasan, Masum; Iqbal, Anindya; Santos, Joanna C. S.
  • Enhancing Traceability Link Recovery with Unlabeled Data (2022), ISSRE, Zhu, Jianfei; Xiao, Guanping; Zheng, Zheng; Sui, Yulei.
  • Evaluating Diverse Large Language Models for Automatic and General Bug Reproduction (2023), arXiv, Kang, Sungmin; Yoon, Juyeon; Askarbekkyzy, Nargiz; Yoo, Shin.
  • Evaluating Pre-trained Language Models for Repairing API Misuses (2023), TOSEM, Zhang, Ting; Irsan, Ivana Clairine; Thung, Ferdian; Lo, David; Sharma, Asankhaya; Jiang, Lingxiao.
  • Evaluating Representation Learning of Code Changes for Predicting Patch Correctness in Program Repair (2020), ASE, Tian, Haoye; Liu, Kui; Kaboré, Abdoul Kader; Koyuncu, Anil; Li, Li; Klein, Jacques; Bissyandé, Tegawendé F.
  • Examining Zero-Shot Vulnerability Repair with Large Language Models (2021), S&P, Pearce, Hammond; Tan, Benjamin; Ahmad, Baleegh; Karri, Ramesh; Dolan-Gavitt, Brendan.
  • Explainable Automated Debugging via Large Language Model-driven Scientific Debugging (2023), arXiv, Kang, Sungmin; Chen, Bei; Yoo, Shin; Lou, Jian-Guang.
  • Explaining Explanation: An Empirical Study on Explanation in Code Reviews (2023), arXiv, Widyasari, Ratnadira; Zhang, Ting; Bouraffa, Abir; Lo, David.
  • Exploring the Effectiveness of LLMs in Automated Logging Generation: An Empirical Study (2023), arXiv, Li, Yichen; Huo, Yintong; Jiang, Zhihan; Zhong, Renyi; He, Pinjia; Su, Yuxin; Lyu, Michael R.
  • Exploring the Potential of ChatGPT in Automated Code Refinement: An Empirical Study (2024), ICSE, Guo, Qi; Cao, Junming; Xie, Xiaofei; Liu, Shangqing; Li, Xiaohong; Chen, Bihuan; Peng, Xin.
  • Extending the Frontier of ChatGPT: Code Generation and Debugging (2023), arXiv, Sakib, Fardin Ahsan; Khan, Saadat Hasan; Karim, A. H. M. Rezaul.
  • Few-shot Learning for Sentence Pair Classification and Its Applications in Software Engineering (2023), arXiv, Helmeczi, Robert Kraig; Cevik, Mucahit; Yıldırım, Savas.
  • Fixing Rust Compilation Errors using LLMs (2023), arXiv, Deligiannis, Pantazis; Lal, Akash; Mehrotra, Nikita; Rastogi, Aseem.
  • Frustrated with Code Quality Issues? LLMs can Help! (2023), arXiv, Wadhwa, Nalin; Pradhan, Jui; Sonwane, Atharv; Sahu, Surya Prakash; Natarajan, Nagarajan; Kanade, Aditya; Parthasarathy, Suresh; Rajamani, Sriram.
  • GAMMA: Revisiting Template-based Automated Program Repair via Mask Prediction (2023), ASE, Zhang, Quanjun; Fang, Chunrong; Zhang, Tongke; Yu, Bowen; Sun, Weisong; Chen, Zhenyu.
  • GPTCloneBench: A comprehensive benchmark of semantic clones and cross-language clones using GPT-3 model and SemanticCloneBench (2023), ICSME, Alam, Ajmain I.; Roy, Palash R.; Al-Omari, Farouq; Roy, Chanchal K.; Roy, Banani; Schneider, Kevin A.
  • Guiding ChatGPT to Fix Web UI Tests via Explanation-Consistency Checking (2023), arXiv, Xu, Zhuolin; Li, Qiushi; Tan, Shin Hwei.
  • How Effective Are Neural Networks for Fixing Security Vulnerabilities (2023), ISSTA, Wu, Yi; Jiang, Nan; Pham, Hung Viet; Lutellier, Thibaud; Davis, Jordan; Tan, Lin; Babkin, Petr; Shah, Sameena.
  • Impact of Code Language Models on Automated Program Repair (2023), ICSE, Jiang, Nan; Liu, Kevin; Lutellier, Thibaud; Tan, Lin.
  • Incivility Detection in Open Source Code Review and Issue Discussions (2024), JSS, Ferreira, Isabella; Rafiq, Ahlaam; Cheng, Jinghui.
  • InferFix: End-to-End Program Repair with LLMs Retrieval-Augmented Prompts (2023), arXiv, Jin, Matthew; Shahriar, Syed; Tufano, Michele; Shi, Xin; Lu, Shuai; Sundaresan, Neel; Svyatkovskiy, Alexey.
  • Interpretable Online Log Analysis Using Large Language Models with Prompt Strategies (2024), ICPC, Liu, Yilun; Tao, Shimin; Meng, Weibin; Wang, Jingyu; Ma, Wenbing; Zhao, Yanqing; Chen, Yuhang; Yang, Hao; Jiang, Yanfei; Chen, Xun.
  • Invalidator: Automated Patch Correctness Assessment via Semantic and Syntactic Reasoning (2023), TSE, Le-Cong, Thanh; Luong, Duc-Minh; Le, Xuan Bach D.; Lo, David; Tran, Nhat-Hoa; Quang-Huy, Bui; Huynh, Quyet-Thang.
  • Is ChatGPT the Ultimate Programming Assistant -- How far is it? (2023), arXiv, Tian, Haoye; Lu, Weiqi; Li, Tsz On; Tang, Xunzhu; Cheung, Shing-Chi; Klein, Jacques; Bissyandé, Tegawendé F.
  • Just-in-Time Security Patch Detection -- LLM At the Rescue for Data Augmentation (2023), arXiv, Tang, Xunzhu; Chen, Zhenghan; Kim, Kisub; Tian, Haoye; Ezzini, Saad; Klein, Jacques.
  • Keep the Conversation Going: Fixing 162 out of 337 Bugs for $0.42 Each Using ChatGPT (2023), arXiv, Xia, Chunqiu Steven; Zhang, Lingming.
  • KnowLog: Knowledge Enhanced Pre-trained Language Model for Log Understanding (2024), ICSE, Ma, Lipeng; Yang, Weidong; Xu, Bo; Jiang, Sihang; Fei, Ben; Liang, Jiaqing; Zhou, Mingjie; Xiao, Yanghua.
  • Large Language Models are Few-shot Testers: Exploring LLM-based General Bug Reproduction (2022), ICSE, Kang, Sungmin; Yoon, Juyeon; Yoo, Shin.
  • LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning (2023), ISSRE, Lu, Junyi; Yu, Lei; Li, Xiaojia; Yang, Li; Zuo, Chun.
  • LLM4CBI: Taming LLMs to Generate Effective Test Programs for Compiler Bug Isolation (2023), arXiv, Tu, Haoxin; Zhou, Zhide; Jiang, He; Yusuf, Imam Nur Bani; Li, Yuxian; Jiang, Lingxiao.
  • LLM-Powered Code Vulnerability Repair with Reinforcement Learning and Semantic Reward (2024), arXiv, Islam, Nafis Tanveer; Khoury, Joseph; Seong, Andrew; Parra, Gonzalo De La Torre; Bou-Harb, Elias; Najafirad, Peyman.
  • Log Parsing with Generalization Ability under New Log Types (2023), ESEC/FSE, Yu, Siyu; Wu, Yifan; Li, Zhijing; He, Pinjia; Chen, Ningjiang; Liu, Changjian.
  • Neural Program Repair with Program Dependence Analysis and Effective Filter Mechanism (2023), arXiv, Zhang, Yuwei; Li, Ge; Jin, Zhi; Xing, Ying.
  • Nova+: Generative Language Models for Binaries (2023), arXiv, Jiang, Nan; Wang, Chengxiao; Liu, Kevin; Xu, Xiangzhe; Tan, Lin; Zhang, Xiangyu.
  • On the Reliability and Explainability of Language Models for Program Generation (2024), TOSEM, Liu, Yue; Tantithamthavorn, Chakkrit; Liu, Yonghui; Li, Li.
  • On the Validity of Pre-trained Transformers for Natural Language Processing in the Software Engineering Domain (2022), TSE, Von Der Mosel, Julian; Trautsch, Alexander; Herbold, Steffen.
  • Practical Program Repair in the Era of Large Pre-trained Language Models (2022), arXiv, Xia, Chunqiu Steven; Wei, Yuxiang; Zhang, Lingming.
  • Predicting Code Coverage without Execution (2023), arXiv, Tufano, Michele; Chandel, Shubham; Agarwal, Anisha; Sundaresan, Neel; Clement, Colin.
  • PTM4Tag: Sharpening Tag Recommendation of Stack Overflow Posts with Pre-trained Models (2022), ICPC, He, Junda; Xu, Bowen; Yang, Zhou; Han, DongGyun; Yang, Chengran; Lo, David.
  • PyTy: Repairing Static Type Errors in Python (2024), arXiv, Chow, Yiu Wai; Di Grazia, Luca; Pradel, Michael.
  • RAP-Gen: Retrieval-Augmented Patch Generation with CodeT5 for Automatic Program Repair (2023), ESEC/FSE, Wang, Weishi; Wang, Yue; Joty, Shafiq; Hoi, Steven C H.
  • RefBERT: A Two-Stage Pre-trained Framework for Automatic Rename Refactoring (2023), ISSTA, Liu, Hao; Wang, Yanlin; Wei, Zhao; Xu, Yong; Wang, Juhong; Li, Hui; Ji, Rongrong.
  • RepairLLaMA: Efficient Representations and Fine-Tuned Adapters for Program Repair (2023), arXiv, Silva, André; Fang, Sen; Monperrus, Martin.
  • Resolving Crash Bugs via Large Language Models: An Empirical Study (2023), arXiv, Du, Xueying; Liu, Mingwei; Li, Juntao; Wang, Hanlin; Peng, Xin; Lou, Yiling.
  • Revisiting Sentiment Analysis for Software Engineering in the Era of Large Language Models (2023), TOSEM, Zhang, Ting; Irsan, Ivana Clairine; Thung, Ferdian; Lo, David.
  • Sentiment Analysis for Software Engineering: How Far Can Pre-trained Transformer Models Go? (2020), ICSME, Zhang, Ting; Xu, Bowen; Thung, Ferdian; Haryono, Stefanus Agus; Lo, David; Jiang, Lingxiao.
  • Shipwright: A Human-in-the-Loop System for Dockerfile Repair (2021), ICSE, Henkel, Jordan; Silva, Denini; Teixeira, Leopoldo; d'Amorim, Marcelo; Reps, Thomas.
  • STEAM: Simulating the InTeractive BEhavior of ProgrAMmers for Automatic Bug Fixing (2023), arXiv, Zhang, Yuwei; Jin, Zhi; Xing, Ying; Li, Ge.
  • T-FREX: A Transformer-based Feature Extraction Method from Mobile App Reviews (2024), SANER, Motger, Quim; Miaschi, Alessio; Dell'Orletta, Felice; Franch, Xavier; Marco, Jordi.
  • The Best of Both Worlds: Combining Learned Embeddings with Engineered Features for Accurate Prediction of Correct Patches (2023), TOSEM, Tian, Haoye; Liu, Kui; Li, Yinghua; Kaboré, Abdoul Kader; Koyuncu, Anil; Habib, Andrew; Li, Li; Wen, Junhao; Klein, Jacques; Bissyandé, Tegawendé F.
  • The Right Prompts for the Job: Repair Code-Review Defects with Large Language Model (2023), arXiv, Zhao, Zelin; Xu, Zhaogui; Zhu, Jialong; Di, Peng; Yao, Yuan; Ma, Xiaoxing.
  • Towards Automatically Addressing Self-Admitted Technical Debt: How Far Are We? (2023), arXiv, Mastropaolo, Antonio; Di Penta, Massimiliano; Bavota, Gabriele.
  • Towards Efficient Fine-tuning of Pre-trained Code Models: An Experimental Study and Beyond (2023), ISSTA, Shi, Ensheng; Wang, Yanlin; Zhang, Hongyu; Du, Lun; Han, Shi; Zhang, Dongmei; Sun, Hongbin.
  • Towards JavaScript Program Repair with Generative Pre-trained Transformer (GPT-2) (2022), ICSE, Lajkó, Márk; Csuvik, Viktor; Vidács, László.
  • Towards Understanding the Capability of Large Language Models on Code Clone Detection: A Survey (2023), arXiv, Dou, Shihan; Shan, Junjie; Jia, Haoxiang; Deng, Wenhao; Xi, Zhiheng; He, Wei; Wu, Yueming; Gui, Tao; Liu, Yang; Huang, Xuanjing.
  • UniLog: Automatic Logging via LLM and In-Context Learning (2024), ICSE, Xu, Junjielong; Cui, Ziang; Zhao, Yuan; Zhang, Xu; He, Shilin; He, Pinjia; Li, Liqun; Kang, Yu; Lin, Qingwei; Dang, Yingnong; Rajmohan, Saravan; Zhang, Dongmei.
  • Using a Nearest-Neighbour, BERT-Based Approach for Scalable Clone Detection (2022), ICSME, Chochlov, Muslim; Ahmed, Gul Aftab; Patten, James Vincent; Lu, Guoxian; Hou, Wei; Gregg, David; Buckley, Jim.
  • Using Deep Learning to Generate Complete Log Statements (2022), ICSE, Mastropaolo, Antonio; Pascarella, Luca; Bavota, Gabriele.
  • Using Pre-trained Language Models to Resolve Textual and Semantic Merge Conflicts (Experience Paper) (2022), ISSTA, Zhang, Jialu; Mytkowicz, Todd; Kaufman, Mike; Piskac, Ruzica; Lahiri, Shuvendu K.
  • Using Pre-trained Models to Boost Code Review Automation (2022), ICSE, Tufano, Rosalia; Masiero, Simone; Mastropaolo, Antonio; Pascarella, Luca; Poshyvanyk, Denys; Bavota, Gabriele.
  • Using Transfer Learning for Code-Related Tasks (2022), TSE, Mastropaolo, Antonio; Cooper, Nathan; Palacio, David Nader; Scalabrino, Simone; Poshyvanyk, Denys; Oliveto, Rocco; Bavota, Gabriele.
  • Utilization of Pre-trained Language Model for Adapter-based Knowledge Transfer in Software Engineering (2023), arXiv, Saberi, Iman; Fard, Fatemeh; Chen, Fuxiang.
  • Where is Your App Frustrating Users? (2022), ICSE, Wang, Yawen; Wang, Junjie; Zhang, Hongyu; Ming, Xuesong; Shi, Lin; Wang, Qing.

Software management

  • Can LLMs Configure Software Tools (also titled "Using Language Models for Software Tool Configuration") (2023), arXiv, Kannan, Jai.
  • Evaluation of Context-Aware Language Models and Experts for Effort Estimation of Software Maintenance Issues (2022), ICSME, Alhamed, Mohammed; Storer, Tim.
  • Fine-SE: Integrating Semantic Features and Expert Features for Software Effort Estimation (2024), ICSE, Li, Yue; Ren, Zhong; Wang, Zhiqi; Yang, Lanxin; Dong, Liming; Zhong, Chenxing; Zhang, He.

Cites

If you find this repository useful, please cite our survey paper:

@article{hou2023large,
  title={Large language models for software engineering: A systematic literature review},
  author={Hou, Xinyi and Zhao, Yanjie and Liu, Yue and Yang, Zhou and Wang, Kailong and Li, Li and Luo, Xiapu and Lo, David and Grundy, John and Wang, Haoyu},
  journal={arXiv preprint arXiv:2308.10620},
  year={2023}
}
