codefuse-ai/Awesome-Code-LLM

Awesome-Code-LLM

This is the repo for our survey Unifying the Perspectives of NLP and Software Engineering: A Survey on Language Models for Code - a comprehensive review of LLM research for code. Works in each category are ordered chronologically. If you have a basic understanding of machine learning but are new to NLP, we also provide a list of recommended readings in Section 9.

News

🔥🔥🔥 [2024/04] Recent papers:

🔥🔥🔥 [2024/04] We have just made a major update to our paper on arXiv, which should be available by Wednesday April 17th. Changes include:

  • addition of recent models and related works on downstream tasks
  • addition of two new tasks: decompilation and malware detection
  • addition of section 2.1.7 Code LLMs for Low-Resource, Low-Level, and Domain-Specific Languages
  • rewriting of section 7.1 LLMs Extended with Coding Tools
  • addition of section 7.3 Analysis of LLM-Generated Code

🔥🔥🔥 [2024/04] Code Similarity.

🔥🔥 [2024/04] Code Summarization.

🔥 [2024/04] Test Generation using LLM.

🔥 [2024/04] In response to feedback from the community, we collected 26 papers for a new downstream task: malicious code detection.

Table of Contents

  1. Surveys

  2. Models

    2.1 Off-the-Shelf LLM

    2.2 Existing LLM Adapted to Code

    2.3 General Pretraining on Code

    2.4 (Instruction) Fine-Tuning on Code

    2.5 Reinforcement Learning on Code

  3. When Coding Meets Reasoning

    3.1 Coding for Reasoning

    3.2 Code Simulation

    3.3 Coding via Planning

    3.4 Interactive Coding

  4. Code LLM for Low-Resource, Low-Level, and Domain-Specific Languages

  5. Methods/Models for Downstream Tasks

  6. Analysis of AI-Generated Code

  7. User-LLM Interaction

  8. Datasets

    8.1 Pretraining

    8.2 Benchmarks

  9. Recommended Readings

  10. Citation

  11. Star History

  12. Join Us

1. Surveys

We list several recent surveys on similar topics. While they are all about language models for code, 1-2 focus on the NLP side, 3-6 focus on the SE side, and 7-10 were released after ours.

  1. "Large Language Models Meet NL2Code: A Survey" [2022-12] [ACL 2023] [paper]

  2. "A Survey on Pretrained Language Models for Neural Code Intelligence" [2022-12] [paper]

  3. "An Empirical Comparison of Pre-Trained Models of Source Code" [2023-02] [ICSE 2023] [paper]

  4. "Large Language Models for Software Engineering: A Systematic Literature Review" [2023-08] [paper]

  5. "Towards an Understanding of Large Language Models in Software Engineering Tasks" [2023-08] [paper]

  6. "Pitfalls in Language Models for Code Intelligence: A Taxonomy and Survey" [2023-10] [paper]

  7. "A Survey on Large Language Models for Software Engineering" [2023-12] [paper]

  8. "Deep Learning for Code Intelligence: Survey, Benchmark and Toolkit" [2023-12] [paper]

  9. "A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond" [2024-03] [paper]

  10. "Tasks People Prompt: A Taxonomy of LLM Downstream Tasks in Software Verification and Falsification Approaches" [2024-04] [paper]

2. Models

2.1 Off-the-Shelf LLM

These LLMs are not specifically trained for code, but have demonstrated varying degrees of coding capability.

  1. LaMDA: "LaMDA: Language Models for Dialog Applications" [2022-01] [paper]

  2. PaLM: "PaLM: Scaling Language Modeling with Pathways" [2022-04] [JMLR] [paper]

  3. GPT-NeoX: "GPT-NeoX-20B: An Open-Source Autoregressive Language Model" [2022-04] [ACL 2022 Workshop on Challenges & Perspectives in Creating LLMs] [paper] [repo]

  4. BLOOM: "BLOOM: A 176B-Parameter Open-Access Multilingual Language Model" [2022-11] [paper] [model]

  5. LLaMA: "LLaMA: Open and Efficient Foundation Language Models" [2023-02] [paper]

  6. GPT-4: "GPT-4 Technical Report" [2023-03] [paper]

  7. LLaMA 2: "Llama 2: Open Foundation and Fine-Tuned Chat Models" [2023-07] [paper] [repo]

  8. Phi-1.5: "Textbooks Are All You Need II: phi-1.5 technical report" [2023-09] [paper] [model]

  9. Baichuan 2: "Baichuan 2: Open Large-scale Language Models" [2023-09] [paper] [repo]

  10. Qwen: "Qwen Technical Report" [2023-09] [paper] [repo]

  11. Mistral: "Mistral 7B" [2023-10] [paper] [repo]

  12. Gemini: "Gemini: A Family of Highly Capable Multimodal Models" [2023-12] [paper]

  13. Phi-2: "Phi-2: The surprising power of small language models" [2023-12] [blog]

  14. YAYI2: "YAYI 2: Multilingual Open-Source Large Language Models" [2023-12] [paper] [repo]

  15. DeepSeek: "DeepSeek LLM: Scaling Open-Source Language Models with Longtermism" [2024-01] [paper] [repo]

  16. Mixtral: "Mixtral of Experts" [2024-01] [paper] [blog]

  17. DeepSeekMoE: "DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models" [2024-01] [paper] [repo]

  18. Orion: "Orion-14B: Open-source Multilingual Large Language Models" [2024-01] [paper] [repo]

  19. OLMo: "OLMo: Accelerating the Science of Language Models" [2024-02] [paper] [repo]

  20. Gemma: "Gemma: Open Models Based on Gemini Research and Technology" [2024-02] [paper] [blog]

  21. Claude 3: "The Claude 3 Model Family: Opus, Sonnet, Haiku" [2024-03] [paper] [blog]

  22. Yi: "Yi: Open Foundation Models by 01.AI" [2024-03] [paper] [repo]

  23. Poro: "Poro 34B and the Blessing of Multilinguality" [2024-04] [paper] [model]

  24. JetMoE: "JetMoE: Reaching Llama2 Performance with 0.1M Dollars" [2024-04] [paper] [repo]

  25. LLaMA 3 [2024-04] [blog] [repo]

  26. Reka Core: "Reka Core, Flash, and Edge: A Series of Powerful Multimodal Language Models" [2024-04] [paper]

  27. Phi-3: "Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone" [2024-04] [paper]

  28. OpenELM: "OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework" [2024-04] [paper] [repo]

2.2 Existing LLM Adapted to Code

These models are general-purpose LLMs further pretrained on code-related data.

  • Codex (GPT-3): "Evaluating Large Language Models Trained on Code" [2021-07] [paper]

  • PaLM Coder (PaLM): "PaLM: Scaling Language Modeling with Pathways" [2022-04] [JMLR] [paper]

  • Minerva (PaLM): "Solving Quantitative Reasoning Problems with Language Models" [2022-06] [paper]

  • PaLM 2 * (PaLM 2): "PaLM 2 Technical Report" [2023-05] [paper]

  • Code LLaMA (LLaMA 2): "Code Llama: Open Foundation Models for Code" [2023-08] [paper] [repo]

  • BTX (LLaMA 2): "Branch-Train-MiX: Mixing Expert LLMs into a Mixture-of-Experts LLM" [2024-03] [paper]

  • "Mastering Text, Code and Math Simultaneously via Fusing Highly Specialized Language Models" [2024-03] [paper]

  • "CodeGemma: Open Code Models Based on Gemma" [2024-04] [paper] [model]

2.3 General Pretraining on Code

These models are Transformer encoders, decoders, and encoder-decoders pretrained from scratch using existing objectives for general language modeling.

Encoder

  1. CuBERT (MLM + NSP): "Learning and Evaluating Contextual Embedding of Source Code", 2019-12, ICML 2020, [paper] [repo]

  2. CodeBERT (MLM + RTD): "CodeBERT: A Pre-Trained Model for Programming and Natural Languages", 2020-02, EMNLP findings 2020, [paper] [repo]

  3. GraphCodeBERT (MLM + DFG Edge Prediction + DFG Node Alignment): "GraphCodeBERT: Pre-training Code Representations with Data Flow", 2020-09, ICLR 2021, [paper] [repo]

  4. SynCoBERT (MLM + Identifier Prediction + AST Edge Prediction + Contrastive Learning): "SynCoBERT: Syntax-Guided Multi-Modal Contrastive Pre-Training for Code Representation", 2021-08, arXiv, [paper]

  5. DISCO (MLM + Node Type MLM + Contrastive Learning): "Towards Learning (Dis)-Similarity of Source Code from Program Contrasts", 2021-10, ACL 2022, [paper]

  6. Code-MVP (MLM + Type Inference + Contrastive Learning): "CODE-MVP: Learning to Represent Source Code from Multiple Views with Contrastive Pre-Training", 2022-05, NAACL 2022 Technical Track, [paper]

  7. CodeSage (MLM + Deobfuscation + Contrastive Learning): "Code Representation Learning At Scale", 2024-02, ICLR 2024, [paper]
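Most of the encoder models above share a masked language modeling (MLM) component. As a minimal, framework-free sketch of the data-preparation side of that objective (the `<mask>` symbol and 15% rate follow BERT-style convention; nothing here reproduces any specific model's recipe):

```python
import random

MASK = "<mask>"

def mask_tokens(tokens, rate=0.15, seed=0):
    """Replace roughly `rate` of the tokens with a mask symbol.

    Returns (masked, labels): labels hold the original token at masked
    positions and None elsewhere, since the MLM loss is computed only
    on the masked slots."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < rate:
            masked.append(MASK)
            labels.append(tok)
        else:
            masked.append(tok)
            labels.append(None)
    return masked, labels

code = "def add ( a , b ) : return a + b".split()
masked, labels = mask_tokens(code)
```

Models like GraphCodeBERT and SynCoBERT then add structure-aware objectives (DFG/AST prediction) on top of this token-level corruption.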

Decoder

  1. GPT-C (CLM): "IntelliCode Compose: Code Generation Using Transformer" [2020-05] [ESEC/FSE 2020] [paper]

  2. CodeGPT (CLM): "CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation" [2021-02] [NeurIPS Datasets and Benchmarks 2021] [paper] [repo]

  3. CodeParrot (CLM) [2021-12] [blog]

  4. PolyCoder (CLM): "A Systematic Evaluation of Large Language Models of Code" [2022-02] [DL4C@ICLR 2022] [paper] [repo]

  5. CodeGen (CLM): "CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis" [2022-03] [ICLR 2023] [paper] [repo]

  6. InCoder (Causal Masking): "InCoder: A Generative Model for Code Infilling and Synthesis" [2022-04] [ICLR 2023] [paper] [repo]

  7. PyCodeGPT (CLM): "CERT: Continual Pre-Training on Sketches for Library-Oriented Code Generation" [2022-06] [IJCAI-ECAI 2022] [paper] [repo]

  8. PanGu-Coder (CLM): "PanGu-Coder: Program Synthesis with Function-Level Language Modeling" [2022-07] [paper]

  9. SantaCoder (FIM): "SantaCoder: don't reach for the stars!" [2023-01] [paper] [model]

  10. CodeGeeX (CLM): "CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Evaluations on HumanEval-X" [2023-03] [paper] [repo]

  11. StarCoder (FIM): "StarCoder: may the source be with you!" [2023-05] [paper] [model]

  12. Phi-1 (CLM): "Textbooks Are All You Need" [2023-06] [paper] [model]

  13. CodeFuse (CLM): "CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model" [2023-10] [paper] [model]

  14. DeepSeek Coder (CLM+FIM): "DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence" [2024-01] [paper] [repo]

  15. StarCoder2 (CLM+FIM): "StarCoder 2 and The Stack v2: The Next Generation" [2024-02] [paper] [repo]

  16. CodeShell (CLM+FIM): "CodeShell Technical Report" [2024-03] [paper] [repo]

  17. CodeQwen1.5 [2024-04] [blog]
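The FIM (fill-in-the-middle) objective used by SantaCoder, StarCoder, and the CLM+FIM models above rearranges a document so a left-to-right decoder learns infilling. A rough sketch of the transformation, assuming the common prefix-suffix-middle (PSM) sentinel convention (the exact sentinel strings vary by model):

```python
def to_fim(text: str, start: int, end: int) -> str:
    """Split `text` at [start, end) into prefix/middle/suffix and emit
    the PSM ordering: the model conditions on prefix and suffix, then
    is trained to generate the middle after the <fim_middle> sentinel."""
    prefix, middle, suffix = text[:start], text[start:end], text[end:]
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>{middle}"

src = "def square(x):\n    return x * x\n"
sample = to_fim(src, src.index("return"), len(src))
```

At inference time the same format lets the model complete code given both what comes before and after the cursor.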

Encoder-Decoder

  1. PyMT5 (Span Corruption): "PyMT5: multi-mode translation of natural language and Python code with transformers", 2020-10, EMNLP 2020, [paper]

  2. Mastropaolo et al. (Span Corruption): "Studying the Usage of Text-To-Text Transfer Transformer to Support Code-Related Tasks", 2021-02, ICSE 2021, [paper] [repo]

  3. DOBF (MLM + Deobfuscation): "DOBF: A Deobfuscation Pre-Training Objective for Programming Languages", 2021-02, NeurIPS 2021, [paper] [repo]

  4. PLBART (DAE): "Unified Pre-training for Program Understanding and Generation", 2021-03, NAACL 2021, [paper] [repo]

  5. CodeT5 (Span Corruption + Identifier Tagging + Masked Identifier Prediction + Text2Code + Code2Text): "CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation", 2021-09, EMNLP 2021, [paper] [repo]

  6. SPT-Code (Span Corruption + NSP + Method Name Prediction): "SPT-Code: Sequence-to-Sequence Pre-Training for Learning Source Code Representations", 2022-01, ICSE 2022 Technical Track, [paper]

  7. AlphaCode (MLM + CLM): "Competition-Level Code Generation with AlphaCode", 2022-02, Science, [paper] [arxiv]

  8. NatGen (Code Naturalization): "NatGen: Generative pre-training by "Naturalizing" source code", 2022-06, ESEC/FSE 2022, [paper] [repo]

  9. ERNIE-Code (Span Corruption + Pivot-based Translation LM): "ERNIE-Code: Beyond English-Centric Cross-lingual Pretraining for Programming Languages", 2022-12, ACL 2023 Findings, [paper] [repo]

  10. CodeT5+ (Span Corruption + CLM + Text-Code Contrastive Learning + Text-Code Translation): "CodeT5+: Open Code Large Language Models for Code Understanding and Generation", 2023-05, arXiv, [paper] [repo]

  11. AST-T5 (Span Corruption): "AST-T5: Structure-Aware Pretraining for Code Generation and Understanding", 2024-01, arXiv, [paper]

UniLM

  1. CugLM (MLM + NSP + CLM): "Multi-task Learning based Pre-trained Language Model for Code Completion", 2020-12, ASE 2020, [paper]

  2. UniXcoder (MLM + NSP + CLM + Span Corruption + Contrastive Learning + Code2Text): "UniXcoder: Unified Cross-Modal Pre-training for Code Representation", 2022-03, ACL 2022, [paper] [repo]

2.4 (Instruction) Fine-Tuning on Code

These models apply instruction fine-tuning techniques to enhance the capabilities of Code LLMs.

  1. WizardCoder (StarCoder + Evol-Instruct): "WizardCoder: Empowering Code Large Language Models with Evol-Instruct" [2023-06] [paper] [repo]

  2. PanGu-Coder 2 (StarCoder + Evol-Instruct + RRTF): "PanGu-Coder2: Boosting Large Language Models for Code with Ranking Feedback" [2023-07] [paper]

  3. OctoCoder (StarCoder) / OctoGeeX (CodeGeeX2): "OctoPack: Instruction Tuning Code Large Language Models" [2023-08] [paper] [repo]

  4. MFTCoder: "MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning" [2023-11] [paper] [repo]

  5. WaveCoder: "WaveCoder: Widespread And Versatile Enhanced Instruction Tuning with Refined Data Generation" [2023-12] [paper]

  6. Astraios: "Astraios: Parameter-Efficient Instruction Tuning Code Large Language Models" [2024-01] [paper]

  7. CCT: "Code Comparison Tuning for Code Large Language Models" [2024-03] [paper]

  8. SAT: "Structure-aware Fine-tuning for Code Pre-trained Models" [2024-04] [paper]

  9. XFT: "XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts" [2024-04] [paper] [repo]

2.5 Reinforcement Learning on Code

  1. CompCoder: "Compilable Neural Code Generation with Compiler Feedback", 2022-03, ACL 2022, [paper]

  2. CodeRL: "CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning", 2022-07, NeurIPS 2022, [paper] [repo]

  3. PPOCoder: "Execution-based Code Generation using Deep Reinforcement Learning", 2023-01, TMLR 2023, [paper] [repo]

  4. RLTF: "RLTF: Reinforcement Learning from Unit Test Feedback", 2023-07, arXiv, [paper] [repo]

  5. StepCoder: "StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback", 2024-02, [paper]
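A common thread in the works above (CodeRL, PPOCoder, RLTF, StepCoder) is a scalar reward derived from executing candidate programs against unit tests or a compiler. A toy sketch of such a reward function, assuming the candidate is trusted Python source (real systems sandbox execution and use richer reward shaping):

```python
def unit_test_reward(candidate_src: str, tests: list) -> float:
    """Execute candidate code, then score it by the fraction of unit
    tests (callables over its namespace) that pass; -1.0 if the code
    fails to run at all, mirroring the compile-error penalty used in
    execution-based RL setups."""
    ns = {}
    try:
        exec(candidate_src, ns)  # NOTE: no sandboxing in this sketch
    except Exception:
        return -1.0
    passed = 0
    for test in tests:
        try:
            if test(ns):
                passed += 1
        except Exception:
            pass
    return passed / len(tests)

tests = [lambda ns: ns["add"](2, 3) == 5, lambda ns: ns["add"](-1, 1) == 0]
good = unit_test_reward("def add(a, b): return a + b", tests)
buggy = unit_test_reward("def add(a, b): return a - b", tests)
```

The reward then drives a policy-gradient update of the code LLM in the cited systems.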

3. When Coding Meets Reasoning

3.1 Coding for Reasoning

  1. PAL: "PAL: Program-aided Language Models" [2022-11] [ICML 2023] [paper] [repo]

  2. PoT: "Program of Thoughts Prompting: Disentangling Computation from Reasoning for Numerical Reasoning Tasks" [2022-11] [TMLR 2023] [paper] [repo]

  3. CoC: "Chain of Code: Reasoning with a Language Model-Augmented Code Emulator" [2023-12] [paper]

  4. FlowMind: "FlowMind: Automatic Workflow Generation with LLMs" [2024-03] [paper]

  5. Think-and-Execute: "Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models" [2024-04] [paper]
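PAL and PoT share one mechanism: the model writes a program for the reasoning step, and an interpreter, not the model, computes the final answer. A toy sketch with a hard-coded "model output" standing in for an actual LLM call (the word problem and the `answer` variable convention are illustrative assumptions):

```python
def run_program_of_thought(generated_code: str):
    """Execute model-generated reasoning code and read off the `answer`
    variable, as in PAL/PoT-style pipelines (real systems sandbox this)."""
    ns = {}
    exec(generated_code, ns)
    return ns["answer"]

# Hypothetical model output for: "A 3L jug is filled 4 times; 2L spill."
generated = """
total = 3 * 4
answer = total - 2
"""
result = run_program_of_thought(generated)
```

Offloading the arithmetic to the interpreter is what makes these methods robust to the calculation errors that plague plain chain-of-thought.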

3.2 Code Simulation

  • "Code Simulation Challenges for Large Language Models" [2024-01] [paper]

  • "CodeMind: A Framework to Challenge Large Language Models for Code Reasoning" [2024-02] [paper]

  • "Executing Natural Language-Described Algorithms with Large Language Models: An Investigation" [2024-02] [paper]

  • "Can Language Models Pretend Solvers? Logic Code Simulation with LLMs" [2024-03] [paper]

  • "Evaluating Large Language Models with Runtime Behavior of Program Execution" [2024-03] [paper]

  • "NExT: Teaching Large Language Models to Reason about Code Execution" [2024-04] [paper]

3.3 Coding via Planning

  1. Self-collaboration: "Self-collaboration Code Generation via ChatGPT" [2023-04] [paper]

  2. ChatDev: "Communicative Agents for Software Development" [2023-07] [paper] [repo]

  3. MetaGPT: "MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework" [2023-08] [paper] [repo]

  4. CONLINE: "CONLINE: Complex Code Generation and Refinement with Online Searching and Correctness Testing" [2024-03] [paper]

  5. LCG: "When LLM-based Code Generation Meets the Software Development Process" [2024-03] [paper]

  6. RepairAgent: "RepairAgent: An Autonomous, LLM-Based Agent for Program Repair" [2024-03] [paper]

  7. MAGIS: "MAGIS: LLM-Based Multi-Agent Framework for GitHub Issue Resolution" [2024-03] [paper]

  8. SoA: "Self-Organized Agents: A LLM Multi-Agent Framework toward Ultra Large-Scale Code Generation and Optimization" [2024-04] [paper]

  9. AutoCodeRover: "AutoCodeRover: Autonomous Program Improvement" [2024-04] [paper]

3.4 Interactive Coding

  • "Interactive Program Synthesis" [2017-03] [paper]

  • "Question selection for interactive program synthesis" [2020-06] [PLDI 2020] [paper]

  • "Interactive Code Generation via Test-Driven User-Intent Formalization" [2022-08] [paper]

  • "Improving Code Generation by Training with Natural Language Feedback" [2023-03] [TMLR] [paper]

  • "Self-Refine: Iterative Refinement with Self-Feedback" [2023-03] [NeurIPS 2023] [paper]

  • "Teaching Large Language Models to Self-Debug" [2023-04] [paper]

  • "Self-Edit: Fault-Aware Code Editor for Code Generation" [2023-05] [ACL 2023] [paper]

  • "LeTI: Learning to Generate from Textual Interactions" [2023-05] [paper]

  • "InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback" [2023-06] [NeurIPS 2023] [paper]

  • "OpenCodeInterpreter: Integrating Code Generation with Execution and Refinement" [2024-02] [paper]

  • "Iterative Refinement of Project-Level Code Context for Precise Code Generation with Compiler Feedback" [2024-03] [paper]

  • "CYCLE: Learning to Self-Refine the Code Generation" [2024-03] [paper]

  • "LLM-based Test-driven Interactive Code Generation: User Study and Empirical Evaluation" [2024-04] [paper]
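Many of the entries above (Self-Debug, Self-Edit, OpenCodeInterpreter, CYCLE) instantiate the same loop: run the candidate, feed execution feedback back to the model, regenerate. A skeleton of that loop with a stubbed `regenerate` function in place of an LLM call (all names here are illustrative, not any paper's API):

```python
def refine_loop(code: str, check, regenerate, max_rounds: int = 3) -> str:
    """Run candidate `code`; on any failure, pass the error message to
    `regenerate` (an LLM call in real systems) and retry its output."""
    for _ in range(max_rounds):
        try:
            ns = {}
            exec(code, ns)
            check(ns)      # raises if the behavior is wrong
            return code    # success: stop refining
        except Exception as err:
            code = regenerate(code, repr(err))
    return code

broken = "def add(a, b): return a - b"
fixed = "def add(a, b): return a + b"

def check(ns):
    assert ns["add"](2, 2) == 4, "add(2, 2) should be 4"

def regenerate(code, feedback):
    # Stub standing in for an LLM that reads the feedback and edits code.
    return fixed

result = refine_loop(broken, check, regenerate)
```

The papers differ mainly in what feedback is surfaced (tracebacks, failing tests, compiler diagnostics, human comments) and whether the refiner is prompted or fine-tuned.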

4. Code LLM for Low-Resource, Low-Level, and Domain-Specific Languages

  • [Ruby] "On the Transferability of Pre-trained Language Models for Low-Resource Programming Languages" [2022-04] [ICPC 2022] [paper]

  • [Verilog] "Benchmarking Large Language Models for Automated Verilog RTL Code Generation" [2022-12] [DATE 2023] [paper]

  • [Hansl] "The potential of LLMs for coding with low-resource and domain-specific programming languages" [2023-07] [paper]

  • [Verilog] "VeriGen: A Large Language Model for Verilog Code Generation" [2023-07] [paper]

  • [Verilog] "RTLLM: An Open-Source Benchmark for Design RTL Generation with Large Language Model" [2023-08] [paper]

  • [Racket, OCaml, Lua, R, Julia] "Knowledge Transfer from High-Resource to Low-Resource Programming Languages for Code LLMs" [2023-08] [paper]

  • [Verilog] "VerilogEval: Evaluating Large Language Models for Verilog Code Generation" [2023-09] [ICCAD 2023] [paper]

  • [Verilog] "RTLFixer: Automatically Fixing RTL Syntax Errors with Large Language Models" [2023-11] [paper]

  • [Verilog] "Advanced Large Language Model (LLM)-Driven Verilog Development: Enhancing Power, Performance, and Area Optimization in Code Synthesis" [2023-12] [paper]

  • [Verilog] "RTLCoder: Outperforming GPT-3.5 in Design RTL Generation with Our Open-Source Dataset and Lightweight Solution" [2023-12] [paper]

  • [Haskell] "Investigating the Performance of Language Models for Completing Code in Functional Programming Languages: a Haskell Case Study" [2024-03] [paper]

  • [Verilog] "A Multi-Expert Large Language Model Architecture for Verilog Code Generation" [2024-04] [paper]

  • [Verilog] "CreativEval: Evaluating Creativity of LLM-Based Hardware Code Generation" [2024-04] [paper]

  • [Alloy] "An Empirical Evaluation of Pre-trained Large Language Models for Repairing Declarative Formal Specifications" [2024-04] [paper]

5. Methods/Models for Downstream Tasks

For each task, the first column contains non-neural methods (e.g. n-gram, TF-IDF, and (occasionally) static program analysis); the second column contains non-Transformer neural methods (e.g. LSTM, CNN, GNN); the third column contains Transformer-based methods (e.g. BERT, GPT, T5).

Code Generation

  • "The Larger the Better? Improved LLM Code-Generation via Budget Reallocation" [2024-03] [paper]

  • "Comments as Natural Logic Pivots: Improve Code Generation via Comment Perspective" [2024-04] [paper]

  • "Distilling Algorithmic Reasoning from LLMs via Explaining Solution Programs" [2024-04] [paper]

  • "Quality Assessment of Prompts Used in Code Generation" [2024-04] [paper]

  • "Assessing GPT-4-Vision's Capabilities in UML-Based Code Generation" [2024-04] [paper]

Code Translation

  • "Tree-to-tree Neural Networks for Program Translation" [2018-02] [NeurIPS 2018] [paper]

  • "Program Language Translation Using a Grammar-Driven Tree-to-Tree Model" [2018-07] [paper]

  • "Unsupervised Translation of Programming Languages" [2020-06] [NeurIPS 2020] [paper]

  • "Leveraging Automated Unit Tests for Unsupervised Code Translation" [2021-10] [ICLR 2022] [paper]

  • "Code Translation with Compiler Representations" [2022-06] [ICLR 2023] [paper]

  • "Multilingual Code Snippets Training for Program Translation" [2022-06] [AAAI 2022] [paper]

  • "BabelTower: Learning to Auto-parallelized Program Translation" [2022-07] [ICML 2022] [paper]

  • "Syntax and Domain Aware Model for Unsupervised Program Translation" [2023-02] [ICSE 2023] [paper]

  • "CoTran: An LLM-based Code Translator using Reinforcement Learning with Feedback from Compiler and Symbolic Execution" [2023-06] [paper]

  • "Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code" [2023-08] [ICSE 2024] [paper]

  • "On the Evaluation of Neural Code Translation: Taxonomy and Benchmark" [2023-08] [ASE 2023] [paper]

  • "Program Translation via Code Distillation" [2023-10] [EMNLP 2023] [paper]

  • "Explain-then-Translate: An Analysis on Improving Program Translation with Self-generated Explanations" [2023-11] [EMNLP Findings 2023] [paper]

  • "Exploring the Impact of the Output Format on the Evaluation of Large Language Models for Code Translation" [2024-03] [paper]

  • "Exploring and Unleashing the Power of Large Language Models in Automated Code Translation" [2024-04] [paper]

Code Summarization

  • "A Transformer-based Approach for Source Code Summarization" [2020-05] [ACL 2020] [paper]

  • "Code Summarization with Structure-induced Transformer" [2020-12] [ACL Findings 2021] [paper]

  • "Code Structure Guided Transformer for Source Code Summarization" [2021-04] [ACM TSEM] [paper]

  • "M2TS: Multi-Scale Multi-Modal Approach Based on Transformer for Source Code Summarization" [2022-03] [ICPC 2022] [paper]

  • "AST-trans: code summarization with efficient tree-structured attention" [2022-05] [ICSE 2022] [paper]

  • "CoSS: Leveraging Statement Semantics for Code Summarization" [2023-03] [IEEE TSE] [paper]

  • "Automatic Code Summarization via ChatGPT: How Far Are We?" [2023-05] [paper]

  • "Semantic Similarity Loss for Neural Source Code Summarization" [2023-08] [paper]

  • "Distilled GPT for Source Code Summarization" [2023-08] [ASE] [paper]

  • "CSA-Trans: Code Structure Aware Transformer for AST" [2024-04] [paper]

  • "Analyzing the Performance of Large Language Models on Code Summarization" [2024-04] [paper]

Program Repair

  • "DeepDebug: Fixing Python Bugs Using Stack Traces, Backtranslation, and Code Skeletons" [2021-05] [paper]

  • "Break-It-Fix-It: Unsupervised Learning for Program Repair" [2021-06] [ICML 2021] [paper]

  • "TFix: Learning to Fix Coding Errors with a Text-to-Text Transformer" [2021-07] [ICML 2021] [paper]

  • "Automated Repair of Programs from Large Language Models" [2022-05] [ICSE 2023] [paper]

  • "Less Training, More Repairing Please: Revisiting Automated Program Repair via Zero-shot Learning" [2022-07] [ESEC/FSE 2022] [paper]

  • "Repair Is Nearly Generation: Multilingual Program Repair with LLMs" [2022-08] [AAAI 2023] [paper]

  • "Practical Program Repair in the Era of Large Pre-trained Language Models" [2022-10] [paper]

  • "VulRepair: a T5-based automated software vulnerability repair" [2022-11] [ESEC/FSE 2022] [paper]

  • "Conversational Automated Program Repair" [2023-01] [paper]

  • "Impact of Code Language Models on Automated Program Repair" [2023-02] [ICSE 2023] [paper]

  • "InferFix: End-to-End Program Repair with LLMs" [2023-03] [ESEC/FSE 2023] [paper]

  • "Enhancing Automated Program Repair through Fine-tuning and Prompt Engineering" [2023-04] [paper]

  • "A study on Prompt Design, Advantages and Limitations of ChatGPT for Deep Learning Program Repair" [2023-04] [paper]

  • "Domain Knowledge Matters: Improving Prompts with Fix Templates for Repairing Python Type Errors" [2023-06] [ICSE 2024] [paper]

  • "RepairLLaMA: Efficient Representations and Fine-Tuned Adapters for Program Repair" [2023-12] [paper]

  • "The Fact Selection Problem in LLM-Based Program Repair" [2024-04] [paper]

  • "Aligning LLMs for FL-free Program Repair" [2024-04] [paper]

  • "A Deep Dive into Large Language Models for Automated Bug Localization and Repair" [2024-04] [paper]

  • "Multi-Objective Fine-Tuning for Enhanced Program Repair with LLMs" [2024-04] [paper]

  • "How Far Can We Go with Practical Function-Level Program Repair?" [2024-04] [paper]

  • "Revisiting Unnaturalness for Automated Program Repair in the Era of Large Language Models" [2024-04] [paper]

Code Similarity (Clone Detection, Code Search)

  • "Self-Supervised Contrastive Learning for Code Retrieval and Summarization via Semantic-Preserving Transformations" [2020-09] [SIGIR 2021] [paper]

  • "REINFOREST: Reinforcing Semantic Code Similarity for Cross-Lingual Code Search Models" [2023-05] [paper]

  • "Revisiting Code Similarity Evaluation with Abstract Syntax Tree Edit Distance" [2024-04] [paper]

  • "Is Next Token Prediction Sufficient for GPT? Exploration on Code Logic Comprehension" [2024-04] [paper]
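The AST edit-distance idea in the entries above can be approximated cheaply: normalize two programs to their AST dumps with identifier names erased, then compare the dumps. A rough sketch using Python's stdlib `ast` and `difflib` (a simplification for illustration, not the cited papers' method):

```python
import ast
import difflib

def normalized_dump(src: str) -> str:
    """Parse to an AST and erase function, argument, and variable names,
    so alpha-renamed clones normalize to identical structures."""
    tree = ast.parse(src)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            node.name = "_"
        elif isinstance(node, ast.arg):
            node.arg = "_"
        elif isinstance(node, ast.Name):
            node.id = "_"
    return ast.dump(tree)

def similarity(a: str, b: str) -> float:
    """Similarity in [0, 1] between two programs' normalized AST dumps."""
    return difflib.SequenceMatcher(
        None, normalized_dump(a), normalized_dump(b)
    ).ratio()

clone_a = "def f(x, y):\n    return x + y\n"
clone_b = "def g(p, q):\n    return p + q\n"
```

Renamed (Type-2) clones score 1.0 under this normalization, while structurally different code scores lower; the surveyed methods replace this heuristic with learned embeddings or true tree edit distance.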

Vulnerability Detection

  • "VulDeePecker: A Deep Learning-Based System for Vulnerability Detection" [2018-01] [NDSS 2018] [paper]

  • "DeepBugs: A Learning Approach to Name-based Bug Detection" [2018-04] [Proc. ACM Program. Lang.] [paper]

  • "Automated Vulnerability Detection in Source Code Using Deep Representation Learning" [2018-07] [ICMLA 2018] [paper]

  • "SySeVR: A Framework for Using Deep Learning to Detect Software Vulnerabilities" [2018-07] [IEEE TDSC] [paper]

  • "Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks" [2019-09] [NeurIPS 2019] [paper]

  • "Improving bug detection via context-based code representation learning and attention-based neural networks" [2019-10] [Proc. ACM Program. Lang.] [paper]

  • "Global Relational Models of Source Code" [2019-12] [ICLR 2020] [paper]

  • "VulDeeLocator: A Deep Learning-based Fine-grained Vulnerability Detector" [2020-01] [IEEE TDSC] [paper]

  • "Deep Learning based Vulnerability Detection: Are We There Yet?" [2020-09] [IEEE TSE] [paper]

  • "Security Vulnerability Detection Using Deep Learning Natural Language Processing" [2021-05] [INFOCOM Workshops 2021] [paper]

  • "Self-Supervised Bug Detection and Repair" [2021-05] [NeurIPS 2021] [paper]

  • "Vulnerability Detection with Fine-grained Interpretations" [2021-06] [ESEC/SIGSOFT FSE 2021] [paper]

  • "ReGVD: Revisiting Graph Neural Networks for Vulnerability Detection" [2021-10] [ICSE Companion 2022] [paper]

  • "VUDENC: Vulnerability Detection with Deep Learning on a Natural Codebase for Python" [2022-01] [Inf. Softw. Technol] [paper]

  • "Transformer-Based Language Models for Software Vulnerability Detection" [2022-04] [ACSAC 2022] [paper]

  • "LineVul: A Transformer-based Line-Level Vulnerability Prediction" [2022-05] [MSR 2022] [paper]

  • "VulBERTa: Simplified Source Code Pre-Training for Vulnerability Detection" [2022-05] [IJCNN 2022] [paper]

  • "Open Science in Software Engineering: A Study on Deep Learning-Based Vulnerability Detection" [2022-09] [IEEE TSE] [paper]

  • "An Empirical Study of Deep Learning Models for Vulnerability Detection" [2022-12] [ICSE 2023] [paper]

  • "CSGVD: A deep learning approach combining sequence and graph embedding for source code vulnerability detection" [2023-01] [J. Syst. Softw.] [paper]

  • "Benchmarking Software Vulnerability Detection Techniques: A Survey" [2023-03] [paper]

  • "Transformer-based Vulnerability Detection in Code at EditTime: Zero-shot, Few-shot, or Fine-tuning?" [2023-05] [paper]

  • "A Survey on Automated Software Vulnerability Detection Using Machine Learning and Deep Learning" [2023-06] [paper]

  • "Limits of Machine Learning for Automatic Vulnerability Detection" [2023-06] [paper]

  • "Evaluating Instruction-Tuned Large Language Models on Code Comprehension and Generation" [2023-08] [paper]

  • "Prompt-Enhanced Software Vulnerability Detection Using ChatGPT" [2023-08] [paper]

  • "Towards Causal Deep Learning for Vulnerability Detection" [2023-10] [paper]

  • "Understanding the Effectiveness of Large Language Models in Detecting Security Vulnerabilities" [2023-11] [paper]

  • "How Far Have We Gone in Vulnerability Detection Using Large Language Models" [2023-11] [paper]

  • "Can Large Language Models Identify And Reason About Security Vulnerabilities? Not Yet" [2023-12] [paper]

  • "LLM4Vuln: A Unified Evaluation Framework for Decoupling and Enhancing LLMs' Vulnerability Reasoning" [2024-01] [paper]

  • "Security Code Review by LLMs: A Deep Dive into Responses" [2024-01] [paper]

  • "Chain-of-Thought Prompting of Large Language Models for Discovering and Fixing Software Vulnerabilities" [2024-02] [paper]

  • "Multi-role Consensus through LLMs Discussions for Vulnerability Detection" [2024-03] [paper]

  • "A Comprehensive Study of the Capabilities of Large Language Models for Vulnerability Detection" [2024-03] [paper]

  • "Vulnerability Detection with Code Language Models: How Far Are We?" [2024-03] [paper]

  • "Multitask-based Evaluation of Open-Source LLM on Software Vulnerability" [2024-04] [paper]

  • "Large Language Model for Vulnerability Detection and Repair: Literature Review and Roadmap" [2024-04] [paper]

  • "Pros and Cons! Evaluating ChatGPT on Software Vulnerability" [2024-04] [paper]

  • "VulEval: Towards Repository-Level Evaluation of Software Vulnerability Detection" [2024-04] [paper]

Type Prediction

  • "Learning type annotation: is big data enough?", 2021-08, ESEC/FSE 2021, [paper]

  • "Do Machine Learning Models Produce TypeScript Types That Type Check?", 2023-02, ECOOP 2023, [paper]

  • "TypeT5: Seq2seq Type Inference using Static Analysis", 2023-03, ICLR 2023, [paper]

  • "Type Prediction With Program Decomposition and Fill-in-the-Type Training", 2023-05, [paper]

  • "Generative Type Inference for Python", 2023-07, ASE 2023, [paper]

  • "Activation Steering for Robust Type Prediction in CodeLLMs", 2024-04, [paper]

Malicious Code Detection

  • "Deep Android Malware Detection", 2017-03, CODASPY 2017, [paper]

  • "A Multimodal Deep Learning Method for Android Malware Detection Using Various Features", 2018-08, IEEE Trans. Inf. Forensics Secur. 2019, [paper]

  • "Portable, Data-Driven Malware Detection using Language Processing and Machine Learning Techniques on Behavioral Analysis Reports", 2018-12, Digit. Investig. 2019, [paper]

  • "I-MAD: Interpretable Malware Detector Using Galaxy Transformer", 2019-09, Comput. Secur. 2021, [paper]

  • "Droidetec: Android Malware Detection and Malicious Code Localization through Deep Learning", 2020-02, [paper]

  • "Malicious Code Detection: Run Trace Output Analysis by LSTM", 2021-01, IEEE Access 2021, [paper]

  • "Intelligent malware detection based on graph convolutional network", 2021-08, J. Supercomput. 2021, [paper]

  • "Malbert: A novel pre-training method for malware detection", 2021-09, Comput. Secur. 2021, [paper]

  • "Single-Shot Black-Box Adversarial Attacks Against Malware Detectors: A Causal Language Model Approach", 2021-12, ISI 2021, [paper]

  • "M2VMapper: Malware-to-Vulnerability mapping for Android using text processing", 2021-12, Expert Syst. Appl. 2022, [paper]

  • "Malware Detection and Prevention using Artificial Intelligence Techniques", 2021-12, IEEE BigData 2021, [paper]

  • "An Ensemble of Pre-trained Transformer Models For Imbalanced Multiclass Malware Classification", 2021-12, Comput. Secur. 2022, [paper]

  • "EfficientNet convolutional neural networks-based Android malware detection", 2022-01, Comput. Secur. 2022, [paper]

  • "Static Malware Detection Using Stacked BiLSTM and GPT-2", 2022-05, IEEE Access 2022, [paper]

  • "APT Malicious Sample Organization Traceability Based on Text Transformer Model", 2022-07, PRML 2022, [paper]

  • "Self-Supervised Vision Transformers for Malware Detection", 2022-08, IEEE Access 2022, [paper]

  • "A Survey of Recent Advances in Deep Learning Models for Detecting Malware in Desktop and Mobile Platforms", 2022-09, ACM Computing Surveys, [paper]

  • "Malicious Source Code Detection Using Transformer", 2022-09, [paper]

  • "Flexible Android Malware Detection Model based on Generative Adversarial Networks with Code Tensor", 2022-10, CyberC 2022, [paper]

  • "MalBERTv2: Code Aware BERT-Based Model for Malware Identification", 2023-03, Big Data Cogn. Comput. 2023, [paper]

  • "GPThreats-3: Is Automatic Malware Generation a Threat?", 2023-05, SPW 2023, [paper]

  • "GitHub Copilot: A Threat to High School Security? Exploring GitHub Copilot's Proficiency in Generating Malware from Simple User Prompts", 2023-08, ETNCC 2023, [paper]

  • "An Attacker’s Dream? Exploring the Capabilities of ChatGPT for Developing Malware", 2023-08, CSET 2023, [paper]

  • "Malicious code detection in android: the role of sequence characteristics and disassembling methods", 2023-12, Int. J. Inf. Sec. 2023, [paper]

  • "Prompt Engineering-assisted Malware Dynamic Analysis Using GPT-4", 2023-12, [paper]

  • "Shifting the Lens: Detecting Malware in npm Ecosystem with Large Language Models", 2024-03, [paper]

Repository-Level Coding

  • "Repository-Level Prompt Generation for Large Language Models of Code", 2022-06, ICML 2023, [paper]

  • "CoCoMIC: Code Completion By Jointly Modeling In-file and Cross-file Context", 2022-12, [paper]

  • "RepoCoder: Repository-Level Code Completion Through Iterative Retrieval and Generation", 2023-03, EMNLP 2023, [paper]

  • "RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems", 2023-06, [paper]

  • "Guiding Language Models of Code with Global Context using Monitors", 2023-06, [paper]

  • "RepoFusion: Training Code Models to Understand Your Repository", 2023-06, [paper]

  • "CodePlan: Repository-level Coding using LLMs and Planning", 2023-09, [paper]

  • "SWE-bench: Can Language Models Resolve Real-World GitHub Issues?", 2023-10, ICLR 2024, [paper]

  • "CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion", 2023-10, NeurIPS 2023, [paper]

  • "A^3-CodGen: A Repository-Level Code Generation Framework for Code Reuse with Local-Aware, Global-Aware, and Third-Party-Library-Aware", 2023-12, [paper]

  • "RepoHyper: Better Context Retrieval Is All You Need for Repository-Level Code Completion", 2024-03, [paper]

  • "Repoformer: Selective Retrieval for Repository-Level Code Completion", 2024-03, [paper]

  • "CodeS: Natural Language to Code Repository via Multi-Layer Sketch", 2024-03, [paper]

Compiler Optimization

  • "Large Language Models for Compiler Optimization", 2023-09, [paper]

  • "Refining Decompiled C Code with Large Language Models", 2023-10, [paper]

  • "Priority Sampling of Large Language Models for Compilers", 2024-02, [paper]

Frontend Development & Web Agents

  • "Seeking the user interface", 2014-09, ASE 2014, [paper]

  • "pix2code: Generating Code from a Graphical User Interface Screenshot", 2017-05, EICS 2018, [paper]

  • "Machine Learning-Based Prototyping of Graphical User Interfaces for Mobile Apps", 2018-02, TSE 2020, [paper]

  • "Automatic HTML Code Generation from Mock-Up Images Using Machine Learning Techniques", 2019-04, EBBT 2019, [paper]

  • "Sketch2code: Generating a website from a paper mockup", 2019-05, [paper]

  • "HTLM: Hyper-Text Pre-Training and Prompting of Language Models", 2021-07, ICLR 2022, [paper]

  • "MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding", 2021-10, ACL 2022, [paper]

  • "WebKE: Knowledge Extraction from Semi-structured Web with Pre-trained Markup Language Model", 2021-10, CIKM 2021, [paper]

  • "WebGPT: Browser-assisted question-answering with human feedback", 2021-12, [paper]

  • "CM3: A Causal Masked Multimodal Model of the Internet", 2022-01, [paper]

  • "DOM-LM: Learning Generalizable Representations for HTML Documents", 2022-01, [paper]

  • "WebFormer: The Web-page Transformer for Structure Information Extraction", 2022-02, WWW 2022, [paper]

  • "A Dataset for Interactive Vision-Language Navigation with Unknown Command Feasibility", 2022-02, ECCV 2022, [paper]

  • "WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents", 2022-07, NeurIPS 2022, [paper]

  • "Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding", 2022-10, ICML 2023, [paper]

  • "Understanding HTML with Large Language Models", 2022-10, EMNLP 2023 findings, [paper]

  • "WebUI: A Dataset for Enhancing Visual UI Understanding with Web Semantics", 2023-01, CHI 2023, [paper]

  • "Learning UI-to-Code Reverse Generator Using Visual Critic Without Rendering", 2023-05, [paper]

  • "Mind2Web: Towards a Generalist Agent for the Web", 2023-06, NeurIPS 2023, [paper]

  • "A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis", 2023-07, ICLR 2024, [paper]

  • "WebArena: A Realistic Web Environment for Building Autonomous Agents", 2023-07, [paper]

  • "CogAgent: A Visual Language Model for GUI Agents", 2023-12, [paper]

  • "GPT-4V(ision) is a Generalist Web Agent, if Grounded", 2024-01, [paper]

  • "WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models", 2024-01, [paper]

  • "WebLINX: Real-World Website Navigation with Multi-Turn Dialogue", 2024-02, [paper]

  • "OmniACT: A Dataset and Benchmark for Enabling Multimodal Generalist Autonomous Agents for Desktop and Web", 2024-02, [paper]

  • "Design2Code: How Far Are We From Automating Front-End Engineering?" [2024-03] [paper]

  • "Unlocking the conversion of Web Screenshots into HTML Code with the WebSight Dataset" [2024-03] [paper]

  • "AutoWebGLM: Bootstrap And Reinforce A Large Language Model-based Web Navigating Agent" [2024-04] [paper]

  • "WILBUR: Adaptive In-Context Learning for Robust and Accurate Web Agents" [2024-04] [paper]

  • "VISION2UI: A Real-World Dataset with Layout for Code Generation from UI Designs" [2024-04] [paper]

  • "AutoCrawler: A Progressive Understanding Web Agent for Web Crawler Generation" [2024-04] [paper]

Text-To-SQL

  • "PICARD: Parsing Incrementally for Constrained Auto-Regressive Decoding from Language Models" [2021-09] [EMNLP 2021] [paper]

  • "CodexDB: Generating Code for Processing SQL Queries using GPT-3 Codex" [2022-04] [paper]

  • "T5QL: Taming language models for SQL generation" [2022-09] [paper]

  • "Towards Generalizable and Robust Text-to-SQL Parsing" [2022-10] [EMNLP Findings 2022] [paper]

  • "XRICL: Cross-lingual Retrieval-Augmented In-Context Learning for Cross-lingual Text-to-SQL Semantic Parsing" [2022-10] [EMNLP Findings 2022] [paper]

  • "A comprehensive evaluation of ChatGPT's zero-shot Text-to-SQL capability" [2023-03] [paper]

  • "DIN-SQL: Decomposed In-Context Learning of Text-to-SQL with Self-Correction" [2023-04] [NeurIPS 2023] [paper]

  • "How to Prompt LLMs for Text-to-SQL: A Study in Zero-shot, Single-domain, and Cross-domain Settings" [2023-05] [paper]

  • "Enhancing Few-shot Text-to-SQL Capabilities of Large Language Models: A Study on Prompt Design Strategies" [2023-05] [paper]

  • "SQL-PaLM: Improved Large Language Model Adaptation for Text-to-SQL" [2023-05] [paper]

  • "Retrieval-augmented GPT-3.5-based Text-to-SQL Framework with Sample-aware Prompting and Dynamic Revision Chain" [2023-07] [ICONIP 2023] [paper]

  • "Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation" [2023-08] [paper]

  • "SQL-Encoder: Improving NL2SQL In-Context Learning Through a Context-Aware Encoder" [2024-03] [paper]

  • "LLM-R2: A Large Language Model Enhanced Rule-based Rewrite System for Boosting Query Efficiency" [2024-04] [paper]

  • "Dubo-SQL: Diverse Retrieval-Augmented Generation and Fine Tuning for Text-to-SQL" [2024-04] [paper]

  • "EPI-SQL: Enhancing Text-to-SQL Translation with Error-Prevention Instructions" [2024-04] [paper]

  • "ProbGate at EHRSQL 2024: Enhancing SQL Query Generation Accuracy through Probabilistic Threshold Filtering and Error Handling" [2024-04] [paper]
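The prompting studies above (zero-shot, few-shot, and prompt-design work alike) share one core step: serializing the database schema into the prompt next to the question. A minimal sketch of that step — the `CREATE TABLE`-style serialization and the instruction wording are illustrative choices, not the format of any single paper:

```python
def build_text_to_sql_prompt(schema: dict[str, list[str]], question: str) -> str:
    """Serialize a table->columns schema and a question into a zero-shot prompt.

    The CREATE TABLE serialization below is one common choice; papers differ
    on schema format, and this exact wording is illustrative only.
    """
    lines = []
    for table, columns in schema.items():
        cols = ", ".join(columns)
        lines.append(f"CREATE TABLE {table} ({cols});")
    lines.append("-- Using valid SQL, answer the following question.")
    lines.append(f"-- Question: {question}")
    lines.append("SELECT")  # priming the model to complete a query
    return "\n".join(lines)

prompt = build_text_to_sql_prompt(
    {"singer": ["singer_id", "name", "country"]},
    "How many singers are from France?",
)
```

The resulting string would be sent to an LLM for completion; retrieval-augmented variants (e.g., the sample-aware prompting line of work) additionally prepend similar question–SQL pairs.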

Decompilation

  • "Using recurrent neural networks for decompilation" [2018-03] [SANER 2018] [paper]

  • "Evolving Exact Decompilation" [2018] [paper]

  • "Towards Neural Decompilation" [2019-05] [paper]

  • "Coda: An End-to-End Neural Program Decompiler" [2019-06] [NeurIPS 2019] [paper]

  • "N-Bref: A High-fidelity Decompiler Exploiting Programming Structures" [2020-09] [paper]

  • "Neutron: an attention-based neural decompiler" [2021-03] [Cybersecurity 2021] [paper]

  • "Beyond the C: Retargetable Decompilation using Neural Machine Translation" [2022-12] [paper]

  • "Boosting Neural Networks to Decompile Optimized Binaries" [2023-01] [ACSAC 2022] [paper]

  • "SLaDe: A Portable Small Language Model Decompiler for Optimized Assembly" [2023-05] [paper]

  • "Nova+: Generative Language Models for Binaries" [2023-11] [paper]

  • "LLM4Decompile: Decompiling Binary Code with Large Language Models" [2024-03] [paper]
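The neural decompilers above frame the task as sequence-to-sequence: assembly (or bytecode) in, source out. A self-contained illustration of that input/output framing, using Python bytecode via the standard `dis` module — real systems such as LLM4Decompile and SLaDe target native binaries, so this is only a stand-in for the data format:

```python
import dis

def decompilation_prompt(func) -> str:
    """Format a function's bytecode as input for a (hypothetical) neural
    decompiler. Python bytecode stands in for the assembly listing that
    binary-level systems would use."""
    listing = "\n".join(
        f"{ins.opname} {ins.argrepr}".strip() for ins in dis.get_instructions(func)
    )
    return ("Reconstruct the source code of the function whose bytecode is:\n"
            + listing + "\nSource:\n")

def add_one(x):
    return x + 1

prompt = decompilation_prompt(add_one)
```

The model's completion would then be checked for re-compilability and, in execution-based evaluations, for input/output equivalence with the original binary.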

Test Generation

  • "Unit Test Case Generation with Transformers and Focal Context" [2020-09] [AST@ICSE 2022] [paper]

  • "Generating Accurate Assert Statements for Unit Test Cases using Pretrained Transformers" [2020-09] [paper]

  • "TOGA: A Neural Method for Test Oracle Generation" [2021-09] [ICSE 2022] [paper]

  • "An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation" [2023-02] [IEEE TSE] [paper]

  • "A3Test: Assertion-Augmented Automated Test Case Generation" [2023-02] [paper]

  • "Learning Deep Semantics for Test Completion" [2023-02] [ICSE 2023] [paper]

  • "Using Large Language Models to Generate JUnit Tests: An Empirical Study" [2023-04] [EASE 2024] [paper]

  • "CodaMosa: Escaping Coverage Plateaus in Test Generation with Pre-Trained Large Language Models" [2023-05] [ICSE 2023] [paper]

  • "No More Manual Tests? Evaluating and Improving ChatGPT for Unit Test Generation" [2023-05] [paper]

  • "ChatUniTest: a ChatGPT-based automated unit test generation tool" [2023-05] [paper]

  • "ChatGPT vs SBST: A Comparative Assessment of Unit Test Suite Generation" [2023-07] [paper]

  • "Can Large Language Models Write Good Property-Based Tests?" [2023-07] [paper]

  • "Domain Adaptation for Deep Unit Test Case Generation" [2023-08] [paper]

  • "Effective Test Generation Using Pre-trained Large Language Models and Mutation Testing" [2023-08] [paper]

  • "How well does LLM generate security tests?" [2023-10] [paper]

  • "Reinforcement Learning from Automatic Feedback for High-Quality Unit Test Generation" [2023-10] [paper]

  • "An initial investigation of ChatGPT unit test generation capability" [2023-10] [SAST 2023] [paper]

  • "CoverUp: Coverage-Guided LLM-Based Test Generation" [2024-03] [paper]

  • "Enhancing LLM-based Test Generation for Hard-to-Cover Branches via Program Analysis" [2024-04] [paper]

  • "Test Code Generation for Telecom Software Systems using Two-Stage Generative Model" [2024-04] [paper]

  • "LLM-Powered Test Case Generation for Detecting Tricky Bugs" [2024-04] [paper]

  • "Generating Test Scenarios from NL Requirements using Retrieval-Augmented LLMs: An Industrial Study" [2024-04] [paper]

  • "Large Language Models as Test Case Generators: Performance Evaluation and Enhancement" [2024-04] [paper]
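Many of the LLM-based pipelines above follow the same loop: show the model the focal method (optionally with class context), then syntax-check or execute each candidate test before keeping it. A hedged sketch of that filtering step — the prompt wording and function names are illustrative, not taken from any specific system:

```python
import ast

def build_test_prompt(focal_source: str) -> str:
    """Ask for a unit test for the given focal function (wording illustrative)."""
    return ("# Write a pytest unit test for the function below.\n"
            + focal_source + "\n\ndef test_")

def is_valid_candidate(test_source: str) -> bool:
    """Keep only candidates that parse; many pipelines filter on syntax
    before running the test against the implementation."""
    try:
        ast.parse(test_source)
        return True
    except SyntaxError:
        return False

candidate = "def test_add():\n    assert add(1, 2) == 3\n"
```

Coverage-guided variants (e.g., the CoverUp line of work) extend the loop by feeding uncovered lines back into the next prompt.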

Mutation Testing

  • "μBERT: Mutation Testing using Pre-Trained Language Models" [2022-03] [paper]

  • "Efficient Mutation Testing via Pre-Trained Language Models" [2023-01] [paper]

  • "LLMorpheus: Mutation Testing using Large Language Models" [2024-04] [paper]
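Pre-neural mutation tools generate mutants with fixed syntactic rules such as arithmetic-operator swaps; the model-based work above (μBERT, LLMorpheus) instead asks a language model to propose the replacement. Both share the same kill check, sketched here with a classic `+` → `-` mutation on a Python AST:

```python
import ast

class SwapAddSub(ast.NodeTransformer):
    """Classic arithmetic-operator mutation: replace + with -."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node

def make_mutant(source: str) -> str:
    tree = SwapAddSub().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)  # requires Python 3.9+

original = "def add(a, b):\n    return a + b\n"
mutant_src = make_mutant(original)

# A test suite "kills" the mutant if it fails on the mutated code:
env = {}
exec(mutant_src, env)
killed = env["add"](1, 2) != 3  # the assertion add(1, 2) == 3 now fails
```

The mutation score is then the fraction of generated mutants killed by the suite.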

Commit Message Generation

  • "Automated Commit Message Generation with Large Language Models: An Empirical Study and Beyond" [2024-04] [paper]

6. Analysis of AI-Generated Code

Vulnerabilities

  • "You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion" [2021-08] [USENIX Security Symposium 2021] [paper]

  • "Is GitHub's Copilot as Bad as Humans at Introducing Vulnerabilities in Code?" [2022-04] [Empir. Softw. Eng.] [paper]

  • "Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants" [2022-08] [USENIX Security Symposium 2023] [paper]

  • "Do Users Write More Insecure Code with AI Assistants?" [2022-11] [CCS 2023] [paper]

  • "Just another copy and paste? Comparing the security vulnerabilities of ChatGPT generated code and StackOverflow answers" [2024-03] [paper]

  • "DeVAIC: A Tool for Security Assessment of AI-generated Code" [2024-04] [paper]

  • "LLMs in Web-Development: Evaluating LLM-Generated PHP code unveiling vulnerabilities and limitations" [2024-04] [paper]

Correctness

  • "An Empirical Evaluation of GitHub Copilot's Code Suggestions" [2022-05] [MSR 2022] [paper]

  • "Large Language Models and Simple, Stupid Bugs" [2023-03] [MSR 2023] [paper]

  • "Evaluating the Code Quality of AI-Assisted Code Generation Tools: An Empirical Study on GitHub Copilot, Amazon CodeWhisperer, and ChatGPT" [2023-04] [paper]

  • "No Need to Lift a Finger Anymore? Assessing the Quality of Code Generation by ChatGPT" [2023-08] [paper]

  • "Bugs in Large Language Models Generated Code: An Empirical Study" [2024-03] [paper]

  • "ChatGPT Incorrectness Detection in Software Reviews" [2024-03] [paper]

AI-Generated Code Detection

  • "Zero-Shot Detection of Machine-Generated Codes" [2023-10] [paper]

  • "CodeIP: A Grammar-Guided Multi-Bit Watermark for Large Language Models of Code" [2024-04] [paper]

Others

  • "Exploring and Evaluating Hallucinations in LLM-Powered Code Generation" [2024-04] [paper]

  • "Syntactic Robustness for LLM-based Code Generation" [2024-04] [paper]

  • "Testing the Effect of Code Documentation on Large Language Model Code Understanding" [2024-04] [paper]

  • "On Evaluating the Efficiency of Source Code Generated by LLMs" [2024-04] [paper]

  • "Does Your Neural Code Completion Model Use My Code? A Membership Inference Approach" [2024-04] [paper]

7. User-LLM Interaction

  • "Expectation vs. Experience: Evaluating the Usability of Code Generation Tools Powered by Large Language Models" [2022-04] [CHI EA 2022] [paper]

  • "Grounded Copilot: How Programmers Interact with Code-Generating Models" [2022-06] [OOPSLA 2023] [paper]

  • "Reading Between the Lines: Modeling User Behavior and Costs in AI-Assisted Programming" [2022-10] [paper]

  • "The Impact of AI on Developer Productivity: Evidence from GitHub Copilot" [2023-02] [paper]

  • "The Programmer's Assistant: Conversational Interaction with a Large Language Model for Software Development" [2023-02] [IUI 2023] [paper]

  • ""It's Weird That it Knows What I Want": Usability and Interactions with Copilot for Novice Programmers" [2023-04] [ACM TCHI] [paper]

  • "DevGPT: Studying Developer-ChatGPT Conversations" [2023-08] [paper]

  • "How Do Analysts Understand and Verify AI-Assisted Data Analyses?" [2023-09] [paper]

  • "How Novices Use LLM-Based Code Generators to Solve CS1 Coding Tasks in a Self-Paced Learning Environment" [2023-09] [Koli Calling 2023] [paper]

  • "Conversational Challenges in AI-Powered Data Science: Obstacles, Needs, and Design Opportunities" [2023-10] [paper]

  • "The RealHumanEval: Evaluating Large Language Models' Abilities to Support Programmers" [2024-04] [paper]

  • "Unlocking Adaptive User Experience with Generative AI" [2024-04] [paper]

  • "BISCUIT: Scaffolding LLM-Generated Code with Ephemeral UIs in Computational Notebooks" [2024-04] [paper]

  • "How far are AI-powered programming assistants from meeting developers' needs?" [2024-04] [paper]

  • "Beyond Code Generation: An Observational Study of ChatGPT Usage in Software Engineering Practice" [2024-04] [paper]

8. Datasets

8.1 Pretraining

  1. CodeSearchNet: "CodeSearchNet Challenge: Evaluating the State of Semantic Code Search" [2019-09] [paper] [repo] [data]

  2. The Pile: "The Pile: An 800GB Dataset of Diverse Text for Language Modeling" [2020-12] [paper] [data]

  3. CodeParrot, 2022-02, [data]

  4. The Stack: "The Stack: 3 TB of permissively licensed source code" [2022-11] [paper] [data]

  5. ROOTS: "The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset" [2023-03] [NeurIPS 2022 Datasets and Benchmarks Track] [paper] [data]

  6. The Stack v2: "StarCoder 2 and The Stack v2: The Next Generation" [2024-02] [paper] [data]

8.2 Benchmarks

Integrated Benchmarks

  • CodeXGLUE: "CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation" [2021-02] [NeurIPS Datasets and Benchmarks 2021] [paper] [repo] [data]

  • CodefuseEval: "CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model" [2023-10] [paper] [repo]

  • CodeEditorBench: "CodeEditorBench: Evaluating Code Editing Capability of Large Language Models" [2024-04] [paper] [repo]

Program Synthesis

Date Venue Benchmark Size Language Source
2018-02 LREC 2018 NL2Bash 9305 Bash "NL2Bash: A Corpus and Semantic Parser for Natural Language Interface to the Linux Operating System" [paper] [data]
2018-08 EMNLP 2018 CONCODE 104K Java "Mapping Language to Code in Programmatic Context" [paper] [data]
2019-10 EMNLP-IJCNLP 2019 JuICe 1.5M/3725 * Python "JuICe: A Large Scale Distantly Supervised Dataset for Open Domain Context-based Code Generation" [paper] [data]
2021-05 NeurIPS 2021 APPS 10000 Python "Measuring Coding Challenge Competence With APPS" [paper] [data]
2021-07 arXiv HumanEval 164 Python "Evaluating Large Language Models Trained on Code" [paper] [data]
2021-08 arXiv MBPP/MathQA-Python 974/23914 Python "Program Synthesis with Large Language Models" [paper] [MBPP] [MathQA-Python]
2021-08 ACL/IJCNLP 2021 PlotCoder 40797 Python "PlotCoder: Hierarchical Decoding for Synthesizing Visualization Code in Programmatic Context" [paper] [data]
2022-01 arXiv DSP 1119 Python "Training and Evaluating a Jupyter Notebook Data Science Assistant" [paper] [data]
2022-02 Science CodeContests 13610 C++, Python, Java "Competition-Level Code Generation with AlphaCode" [paper] [data]
2022-03 EACL 2023 Findings MCoNaLa 896 Python "MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages" [paper] [data]
2022-06 arXiv AixBench 336 Java "AixBench: A Code Generation Benchmark Dataset" [paper] [data]
2022-08 IEEE Trans. Software Engineering MultiPL-E "MultiPL-E: A Scalable and Extensible Approach to Benchmarking Neural Code Generation" [paper] [data]
2022-10 ICLR 2023 MBXP 12.4K Python, Java, JS, TypeScript, Go, C#, PHP, Ruby, Kotlin, C++, Perl, Scala, Swift "Multi-lingual Evaluation of Code Generation Models" [paper] [data]
2022-10 ICLR 2023 Multilingual HumanEval 1.9K Python, Java, JS, TypeScript, Go, C#, PHP, Ruby, Kotlin, Perl, Scala, Swift "Multi-lingual Evaluation of Code Generation Models" [paper] [data]
2022-10 ICLR 2023 MathQA-X 5.6K Python, Java, JS "Multi-lingual Evaluation of Code Generation Models" [paper] [data]
2022-11 arXiv ExeDS 534 Python "Execution-based Evaluation for Data Science Code Generation Models" [paper] [data]
2022-11 arXiv DS-1000 1000 Python "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation" [paper] [data]
2022-12 arXiv ODEX 945 Python "Execution-Based Evaluation for Open-Domain Code Generation" [paper] [data]
2023-02 arXiv CoderEval 460 Python, Java "CoderEval: A Benchmark of Pragmatic Code Generation with Generative Pre-trained Models" [paper] [data]
2023-03 arXiv xCodeEval 5.5M C, C#, C++, Go, Java, JS, Kotlin, PHP, Python, Ruby, Rust "xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval" [paper] [data]
2023-03 arXiv HumanEval-X 820 Python, C++, Java, JS, Go "CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Evaluations on HumanEval-X" [paper] [data]
2023-05 arXiv HumanEval+ 164 Python "Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation" [paper] [data]
2023-06 arXiv StudentEval 1749 $^\dagger$ Python "StudentEval: A Benchmark of Student-Written Prompts for Large Language Models of Code" [paper] [data]
2023-08 arXiv HumanEvalPack 984 Python, JS, Go, Java, C++, Rust "OctoPack: Instruction Tuning Code Large Language Models" [paper] [data]
2023-06 NeurIPS 2023 DotPrompts 10538 $^\ddagger$ Java "Guiding Language Models of Code with Global Context using Monitors" [paper] [data]
2023-09 arXiv CodeApex 476 C++ "CodeApex: A Bilingual Programming Evaluation Benchmark for Large Language Models" [paper] [data]
2023-09 arXiv VerilogEval 8645/156 $^\diamond$ Verilog "VerilogEval: Evaluating Large Language Models for Verilog Code Generation" [paper] [data]
2023-11 arXiv ML-Bench 10040 Bash "ML-Bench: Large Language Models Leverage Open-source Libraries for Machine Learning Tasks" [paper] [data]
2024-04 arXiv MMCode 3548 Python "MMCode: Evaluating Multi-Modal Code Large Language Models with Visually Rich Programming Problems" [paper] [data]
2024-04 arXiv USACO 307 Python "Can Language Models Solve Olympiad Programming?" [paper] [data]

* Automatically mined/human-annotated

$^\dagger$ 1749 prompts for 48 problems

$^\ddagger$ 10538 prompts for 1420 problems

$^\diamond$ Machine/human prompts
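Most of the program-synthesis benchmarks above (HumanEval, MBPP, and their descendants) report pass@k. The unbiased estimator introduced in "Evaluating Large Language Models Trained on Code" draws n samples per problem, of which c pass, and computes pass@k = 1 − C(n−c, k)/C(n, k):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples drawn (without replacement) from n generations, c of which
    are correct, passes the tests."""
    if n - c < k:
        return 1.0  # too few failures to fill k slots: some sample passes
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 samples per problem, 3 correct: pass@1 is the raw pass rate
assert abs(pass_at_k(10, 3, 1) - 0.3) < 1e-9
```

Averaging this quantity over all benchmark problems gives the reported pass@k score.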

Text-to-SQL

  • "Deep learning driven natural languages text to SQL query conversion: A survey", 2022-08, arXiv, [paper]
  • "Recent Advances in Text-to-SQL: A Survey of What We Have and What We Expect", 2022-08, COLING 2022, [paper]
  • "A Survey on Text-to-SQL Parsing: Concepts, Methods, and Future Directions", 2022-08, arXiv, [paper]
  • "A survey on deep learning approaches for text-to-SQL", 2023-01, VLDB J., [paper]
Date Venue Benchmark Size Language Source
2017-08 arXiv WikiSQL 80654 "Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning" [paper] [data]
2018-06 CL 2018 Advising 4570 "Improving Text-to-SQL Evaluation Methodology" [paper] [data]
2018-09 EMNLP 2018 Spider 10181 "Spider: A Large-Scale Human-Labeled Dataset for Complex and Cross-Domain Semantic Parsing and Text-to-SQL Task" [paper] [data]
2019-06 ACL 2019 SParC 12726 "SParC: Cross-Domain Semantic Parsing in Context" [paper] [data]
2019-07 WWW 2020 MIMICSQL 10000 "Text-to-SQL Generation for Question Answering on Electronic Medical Records" [paper] [data]
2019-09 EMNLP-IJCNLP 2019 CoSQL 15598 "CoSQL: A Conversational Text-to-SQL Challenge Towards Cross-Domain Natural Language Interfaces to Databases" [paper] [data]
2020-05 LREC 2020 Criteria-to-SQL 2003 "Dataset and Enhanced Model for Eligibility Criteria-to-SQL Semantic Parsing" [paper] [data]
2020-10 EMNLP 2020 Findings Squall 11276 "On the Potential of Lexico-logical Alignments for Semantic Parsing to SQL Queries" [paper] [data]
2020-10 NAACL-HLT 2021 Spider-Realistic 508 "Structure-Grounded Pretraining for Text-to-SQL" [paper] [data]
2021-06 ACL/IJCNLP 2021 Spider-Syn 8034 "Towards Robustness of Text-to-SQL Models against Synonym Substitution" [paper] [data]
2021-06 NLP4Prog 2021 SEDE 12023 "Text-to-SQL in the Wild: A Naturally-Occurring Dataset Based on Stack Exchange Data" [paper] [data]
2021-06 ACL/IJCNLP 2021 KaggleDBQA 400 "KaggleDBQA: Realistic Evaluation of Text-to-SQL Parsers" [paper] [data]
2021-09 EMNLP 2021 Spider-DK 535 "Exploring Underexplored Limitations of Cross-Domain Text-to-SQL Generalization" [paper] [data]
2022-05 NAACL 2022 Findings Spider-SS/CG 8034/45599 "Measuring and Improving Compositional Generalization in Text-to-SQL via Component Alignment" [paper] [data]
2023-05 arXiv BIRD 12751 "Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs" [paper] [data]
2023-06 ACL 2023 XSemPLR 24.4K "XSemPLR: Cross-Lingual Semantic Parsing in Multiple Natural Languages and Meaning Representations" [paper] [data]

Code Translation

Date Venue Benchmark Size Language Source
2020-06 NeurIPS 2020 Transcoder GeeksforGeeks 1.4K C++, Java, Python "Unsupervised Translation of Programming Languages" [paper] [data]
2021-02 NeurIPS Datasets and Benchmarks 2021 CodeTrans 11.8K Java, C# "CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation" [paper] [data]
2021-08 ACL 2023 Findings Avatar 9515 Java, Python "AVATAR: A Parallel Corpus for Java-Python Program Translation" [paper] [data]
2022-06 AAAI 2022 CoST 132K C++, Java, Python, C#, JS, PHP, C "Multilingual Code Snippets Training for Program Translation" [paper] [data]
2022-06 arXiv XLCoST 567K C++, Java, Python, C#, JS, PHP, C "XLCoST: A Benchmark Dataset for Cross-lingual Code Intelligence" [paper] [data]
2023-03 arXiv xCodeEval 5.6M C, C#, C++, Go, Java, JS, Kotlin, PHP, Python, Ruby, Rust "xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval" [paper] [data]
2023-03 arXiv HumanEval-X 1640 Python, C++, Java, JS, Go "CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Evaluations on HumanEval-X" [paper] [data]
2023-08 arXiv G-TransEval 4000 C++, Java, C#, JS, Python "On the Evaluation of Neural Code Translation: Taxonomy and Benchmark" [paper] [data]
2023-10 arXiv CodeTransOcean 270.5K 45 "CodeTransOcean: A Comprehensive Multilingual Benchmark for Code Translation" [paper] [data]

Program Repair

  • "Neural Program Repair: Systems, Challenges and Solutions", 2022-02, Internetware 2022, [paper]
  • "A Survey of Learning-based Automated Program Repair", 2023-01, arXiv, [paper]
  • "A Survey on Automated Program Repair Techniques", 2023-03, arXiv, [paper]
Date Venue Benchmark Size Language Source
2014-07 ISSTA 2014 Defects4J 357 Java "Defects4J: A Database of Existing Faults to Enable Controlled Testing Studies for Java Programs" [paper] [data]
2015-12 IEEE Trans. Software Engineering ManyBugs/IntroClass 185/998 C "The ManyBugs and IntroClass Benchmarks for Automated Repair of C Programs" [paper] [data]
2016-11 FSE 2016 BugAID 105K JS "Discovering Bug Patterns in JavaScript" [paper] [data]
2017-02 AAAI 2017 DeepFix 6971 C "DeepFix: Fixing Common C Language Errors by Deep Learning" [paper] [data]
2017-05 ICSE-C 2017 Codeflaws 3902 C "Codeflaws: A Programming Competition Benchmark for Evaluating Automated Program Repair Tools" [paper] [data]
2017-10 SPLASH 2017 QuixBugs 80 Java, Python "QuixBugs: a multi-lingual program repair benchmark set based on the quixey challenge" [paper] [data]
2018-05 MSR 2018 Bugs.jar 1158 Java "Bugs.jar: a large-scale, diverse dataset of real-world Java bugs" [paper] [data]
2018-12 ACM Trans. Softw. Eng. Methodol. BFP 124K Java "An Empirical Study on Learning Bug-Fixing Patches in the Wild via Neural Machine Translation" [paper] [data]
2019-01 SANER 2019 Bears 251 Java "Bears: An Extensible Java Bug Benchmark for Automatic Program Repair Studies" [paper] [data]
2019-01 ICSE 2019 unnamed 21.8K * Java "On Learning Meaningful Code Changes via Neural Machine Translation" [paper] [data]
2019-04 ICST 2019 BugsJS 453 JS "BugsJS: a Benchmark of JavaScript Bugs" [paper] [data]
2019-05 ICSE 2019 BugSwarm 1827/1264 Java/Python "BugSwarm: mining and continuously growing a dataset of reproducible failures and fixes" [paper] [data]
2019-05 ICSE 2019 CPatMiner 17K * Java "Graph-based mining of in-the-wild, fine-grained, semantic code change patterns" [paper] [data]
2019-05 MSR 2020 ManySStuBs4J 154K Java "How Often Do Single-Statement Bugs Occur? The ManySStuBs4J Dataset" [paper] [data]
2019-11 ASE 2019 Refactory 1783 Python "Re-factoring based program repair applied to programming assignments" [paper] [data]
2020-07 ISSTA 2020 CoCoNut 24M Java, Python, C, JS "CoCoNuT: combining context-aware neural translation models using ensemble for program repair" [paper] [data]
2020-10 Inf. Softw. Technol. Review4Repair 58021 Java "Review4Repair: Code Review Aided Automatic Program Repairing" [paper] [data]
2020-11 ESEC/FSE 2020 BugsInPy 493 Python "BugsInPy: A Database of Existing Bugs in Python Programs to Enable Controlled Testing and Debugging Studies" [paper] [data]
2021-07 ICML 2021 TFix 105K JS "TFix: Learning to Fix Coding Errors with a Text-to-Text Transformer" [paper] [data]
2021-08 arXiv Megadiff 663K * Java "Megadiff: A Dataset of 600k Java Source Code Changes Categorized by Diff Size" [paper] [data]
2022-01 MSR 2022 SSB/TSSB 9M/3M Python "TSSB-3M: Mining single statement bugs at massive scale" [paper] [data]
2022-10 MSR 2022 FixJS 324K JS "FixJS: a dataset of bug-fixing JavaScript commits" [paper] [data]
2022-11 ESEC/FSE 2022 TypeBugs 93 Python "PyTER: Effective Program Repair for Python Type Errors" [paper] [data]
2023-03 arXiv xCodeEval 4.7M C, C#, C++, Go, Java, JS, Kotlin, PHP, Python, Ruby, Rust "xCodeEval: A Large Scale Multilingual Multitask Benchmark for Code Understanding, Generation, Translation and Retrieval" [paper] [data]
2023-04 arXiv RunBugRun 450K C, C++, Java, Python, JS, Ruby, Go, PHP "RunBugRun -- An Executable Dataset for Automated Program Repair" [paper] [data]
2023-08 arXiv HumanEvalPack 984 Python, JS, Go, Java, C++, Rust "OctoPack: Instruction Tuning Code Large Language Models" [paper] [data]
2024-01 arXiv DebugBench 4253 C++, Java, Python "DebugBench: Evaluating Debugging Capability of Large Language Models" [paper] [data]

* These are code-change datasets, and only a subset therein concerns bug fixing.

Code Summarization

  • "A Survey of Automatic Source Code Summarization", 2022-02, Symmetry, [paper]
Date Venue Benchmark Size Language Source
2016-08 ACL 2016 CODE-NN 66K/32K C#/SQL "Summarizing Source Code using a Neural Attention Model" [paper] [data]
2017-07 IJCNLP 2017 unnamed 150K Python "A parallel corpus of Python functions and documentation strings for automated code documentation and code generation" [paper] [data]
2018-05 ICPC 2018 DeepCom 588K Java "Deep code comment generation" [paper] [data]
2018-07 IJCAI 2018 TL-CodeSum 411K Java "Summarizing Source Code with Transferred API Knowledge" [paper] [data]
2018-11 ASE 2018 unnamed 109K Python "Improving Automatic Source Code Summarization via Deep Reinforcement Learning" [paper] [data]
2019-09 arXiv CodeSearchNet 2.3M Go, JS, Python, PHP, Java, Ruby "CodeSearchNet Challenge: Evaluating the State of Semantic Code Search" [paper] [data]
2023-08 arXiv HumanEvalPack 984 Python, JS, Go, Java, C++, Rust "OctoPack: Instruction Tuning Code Large Language Models" [paper] [data]

Defect/Vulnerability Detection

  • "Benchmarking Software Vulnerability Detection Techniques: A Survey", 2023-03, arXiv, [paper]
Date Venue Benchmark Size Language Source
2018-01 NDSS 2018 CGD 62K C, C++ "VulDeePecker: A Deep Learning-Based System for Vulnerability Detection" [paper] [data]
2018-04 IEEE Trans. Ind. Informatics unnamed 32988 C, C++ "Cross-Project Transfer Representation Learning for Vulnerable Function Discovery" [paper] [data]
2018-07 ICMLA 2018 Draper VDISC 12.8M C, C++ "Automated Vulnerability Detection in Source Code Using Deep Representation Learning" [paper] [data]
2018-07 IEEE TDSC SySeVR 15591 C, C++ "SySeVR: A Framework for Using Deep Learning to Detect Software Vulnerabilities" [paper] [data]
2019-02 MSR 2019 unnamed 624 Java "A Manually-Curated Dataset of Fixes to Vulnerabilities of Open-Source Software" [paper] [data]
2019-09 NeurIPS 2019 Devign 49K C "Devign: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks" [paper] [data]
2019-11 IEEE TDSC unnamed 170K C, C++ "Software Vulnerability Discovery via Learning Multi-Domain Knowledge Bases" [paper] [data]
2019-12 ICLR 2020 GREAT 2.8M Python "Global Relational Models of Source Code" [paper] [data]
2020-01 IEEE TDSC MVD 182K C, C++ "μVulDeePecker: A Deep Learning-Based System for Multiclass Vulnerability Detection" [paper] [data]
2020-02 ICICS 2019 unnamed 1471 C "Deep Learning-Based Vulnerable Function Detection: A Benchmark" [paper] [data]
2020-09 IEEE Trans. Software Eng. ReVeal 18K C "Deep Learning based Vulnerability Detection: Are We There Yet?" [paper] [data]
2020-09 MSR 2020 Big-Vul 265K C, C++ "A C/C++ Code Vulnerability Dataset with Code Changes and CVE Summaries" [paper] [data]
2021-02 ICSE (SEIP) 2021 D2A 1.3M C, C++ "D2A: A Dataset Built for AI-Based Vulnerability Detection Methods Using Differential Analysis" [paper] [data]
2021-05 NeurIPS 2021 PyPIBugs 2374 Python "Self-Supervised Bug Detection and Repair" [paper] [data]
2021-07 PROMISE 2021 CVEfixes 5495 27 "CVEfixes: Automated Collection of Vulnerabilities and Their Fixes from Open-Source Software" [paper] [data]
2021-08 ESEC/FSE 2021 CrossVul 27476 40+ "CrossVul: a cross-language vulnerability dataset with commit data" [paper] [data]
2023-04 RAID 2023 DiverseVul 349K C, C++ "DiverseVul: A New Vulnerable Source Code Dataset for Deep Learning Based Vulnerability Detection" [paper] [data]
2023-06 arXiv VulnPatchPairs 26K C "Limits of Machine Learning for Automatic Vulnerability Detection" [paper] [data]
2023-11 arXiv VulBench 455 C "How Far Have We Gone in Vulnerability Detection Using Large Language Models" [paper] [data]
2024-03 arXiv PrimeVul 236K C/C++ "Vulnerability Detection with Code Language Models: How Far Are We?" [paper]

Code Retrieval

  • "Code Search: A Survey of Techniques for Finding Code", 2022-04, ACM Comput. Surv., [paper]
  • "A Survey of Deep Code Search", 2023-05, arXiv, [paper]
Date Venue Benchmark Size Language Source
2018-03 WWW 2018 StaQC 148K/120K Python/SQL "StaQC: A Systematically Mined Question-Code Dataset from Stack Overflow" [paper] [data]
2018-05 ICSE 2018 DeepCS 16.2M Java "Deep Code Search" [paper] [data]
2018-05 MSR 2018 CoNaLa 600K/2.9K Python "Learning to Mine Aligned Code and Natural Language Pairs from Stack Overflow" [paper] [data]
2019-08 arXiv unnamed 287 Java "Neural Code Search Evaluation Dataset" [paper] [data]
2019-09 arXiv CodeSearchNet 2.3M/99 Go, PHP, JS, Python, Java, Ruby "CodeSearchNet Challenge: Evaluating the State of Semantic Code Search" [paper] [data]
2020-02 SANER 2020 CosBench 52 Java "Are the Code Snippets What We Are Searching for? A Benchmark and an Empirical Study on Code Search with Natural-Language Queries" [paper] [data]
2020-08 arXiv SO-DS 2.2K Python "Neural Code Search Revisited: Enhancing Code Snippet Retrieval through Natural Language Intent" [paper] [data]
2020-10 ACM Trans. Knowl. Discov. Data FB-Java 249K Java "Deep Graph Matching and Searching for Semantic Code Retrieval" [paper] [data]
2021-02 NeurIPS Datasets and Benchmarks 2021 AdvTest/WebQueryTest 280K/1K Python "CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation" [paper] [data]
2021-05 ACL/IJCNLP 2021 CoSQA 21K Python "CoSQA: 20,000+ Web Queries for Code Search and Question Answering" [paper] [data]
2024-03 arXiv ProCQA 5.2M C, C++, Java, Python, Ruby, Lisp, JS, C#, Go, Rust, PHP "ProCQA: A Large-scale Community-based Programming Question Answering Dataset for Code Search" [paper] [data]

Type Inference

Date Venue Benchmark Size Language Source
2019-12 ESEC/FSE 2020 TypeWriter OSS 208K Python "TypeWriter: Neural Type Prediction with Search-based Validation" [paper] [data]
2020-04 PLDI 2020 Typilus 252K Python "Typilus: Neural Type Hints" [paper] [data]
2020-04 ICLR 2020 LambdaNet 300 * TypeScript "LambdaNet: Probabilistic Type Inference using Graph Neural Networks" [paper] [data]
2021-04 MSR 2021 ManyTypes4Py 869K Python "ManyTypes4Py: A Benchmark Python Dataset for Machine Learning-based Type Inference" [paper] [data]
2022-10 MSR 2022 ManyTypes4TypeScript 9.1M TypeScript "ManyTypes4TypeScript: a comprehensive TypeScript dataset for sequence-based type inference" [paper] [data]
2023-02 ECOOP 2023 TypeWeaver 513 * TypeScript "Do Machine Learning Models Produce TypeScript Types That Type Check?" [paper] [data]
2023-03 ICLR 2023 BetterTypes4Py/InferTypes4Py 608K/4.6K Python "TypeT5: Seq2seq Type Inference using Static Analysis" [paper] [data]
2023-05 arXiv OpenTau 744 * TypeScript "Type Prediction With Program Decomposition and Fill-in-the-Type Training" [paper] [data]

* These are project counts.

Commit Message Generation

  • "On the Evaluation of Commit Message Generation Models: An Experimental Study", 2021-07, ICSME 2021, [paper]
Date Venue Benchmark Size Language Source
2017-03 ICPC 2017 unnamed 509K Java "Towards Automatic Generation of Short Summaries of Commits" [paper] [data]
2017-04 ACL 2017 CommitGen 153K Python, JS, C++, Java "A Neural Architecture for Generating Natural Language Descriptions from Source Code Changes" [paper] [data]
2017-08 ASE 2017 CommitGen 32K/75K * Java "Automatically Generating Commit Messages from Diffs using Neural Machine Translation" [paper] [data]
2018-09 ASE 2018 NNGen 27K Java "Neural-machine-translation-based commit message generation: how far are we?" [paper] [data]
2019-05 MSR 2019 PtrGNCMsg 64.9K Java "Generating commit messages from diffs using pointer-generator network" [paper] [data](https://zenodo.org/records/2593787)
2019-08 IJCAI 2019 CoDiSum 90.7K Java "Commit message generation for source code changes" [paper] [data]
2019-12 IEEE Trans. Software Eng. ATOM 160K Java "ATOM: Commit Message Generation Based on Abstract Syntax Tree and Hybrid Ranking" [paper] [data]
2021-05 arXiv CommitBERT 346K Python, PHP, Go, Java, JS, Ruby "CommitBERT: Commit Message Generation Using Pre-Trained Programming Language Model" [paper] [data]
2021-07 ICSME 2021 MCMD 2.25M Java, C#, C++, Python, JS "On the Evaluation of Commit Message Generation Models: An Experimental Study" [paper] [data]
2021-07 ACM Trans. Softw. Eng. Methodol. CoRec 107K Java "Context-aware Retrieval-based Deep Commit Message Generation" [paper] [data]
2023-07 ASE 2023 ExGroFi 19263 Java "Delving into Commit-Issue Correlation to Enhance Commit Message Generation Models" [paper] [data]
2023-08 ASE 2023 CommitChronicle 10.7M 20 "From Commit Message Generation to History-Aware Commit Message Completion" [paper] [data]

* with/without verb-direct object filter

Repo-Level Coding

Date Venue Benchmark Size Language Source
2023-03 arXiv RepoEval 1600/1600/373 * Python "RepoCoder: Repository-Level Code Completion Through Iterative Retrieval and Generation" [paper] [data]
2023-06 arXiv RepoBench 890K/9M/43K $^\dagger$ Python, Java "RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems" [paper] [data]
2023-06 NeurIPS 2023 PragmaticCode 880 ** Java "Guiding Language Models of Code with Global Context using Monitors" [paper] [data]
2023-06 arXiv Stack-Repo 816K Java "RepoFusion: Training Code Models to Understand Your Repository" [paper] [data]
2023-09 arXiv CodePlan 645/21 $^\ddagger$ C#/Python $^\ddagger$ "CodePlan: Repository-level Coding using LLMs and Planning" [paper] [data] §
2023-10 arXiv SWE-Bench 2294 Python "SWE-bench: Can Language Models Resolve Real-World GitHub Issues?" [paper] [data]
2023-10 arXiv CrossCodeEval 9928 Python, Java, TypeScript, C# "CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion" [paper] [data]
2024-03 arXiv EvoCodeBench 275 Python "EvoCodeBench: An Evolving Code Generation Benchmark Aligned with Real-World Code Repositories" [paper] [data]

* Line Completion/API Invocation Completion/Function Completion

$^\dagger$ Retrieval/Completion/Pipeline

** File count

$^\ddagger$ Migration/Temporal Edit

§ This is the link given in the paper, but we are unable to access it at the time of writing.

Other tasks are coming soon!

9. Recommended Readings

30 papers as a primer on LLMs.

Date Keyword Paper TL;DR
2014-09 Attention Neural Machine Translation by Jointly Learning to Align and Translate The original attention, proposed for encoder-decoder RNN
2015-08 BPE Neural Machine Translation of Rare Words with Subword Units Byte-pair encoding: split rare words into subword units
2017-06 Transformer Attention Is All You Need Replace LSTM with self-attention for long-range dependency and parallel training
2017-10 Mixed Precision Training Mixed Precision Training Store model weights in fp16 to save memory
2018-04 GLUE GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding A language understanding benchmark
2018-06 GPT Improving Language Understanding by Generative Pre-Training Pretraining-finetuning paradigm applied to Transformer decoder
2018-10 BERT BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding Masked Language Modeling (MLM) applied to Transformer encoder for pretraining
2019-02 GPT-2 Language Models are Unsupervised Multitask Learners GPT made larger (1.5B). They found language models implicitly learn about downstream tasks (such as translation) during pretraining.
2019-05 SuperGLUE SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems Another language understanding benchmark
2019-07 RoBERTa RoBERTa: A Robustly Optimized BERT Pretraining Approach An optimized BERT
2019-09 Megatron-LM Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism Model parallelism
2019-10 ZeRO ZeRO: Memory Optimizations Toward Training Trillion Parameter Models Memory-efficient distributed optimization
2019-10 T5 Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer Transformer encoder-decoder pretrained with an MLM-like denoising objective
2020-05 GPT-3 Language Models are Few-Shot Learners By training an even larger version of GPT-2 (175B), they discovered a new learning paradigm: In-Context Learning (ICL)
2020-09 MMLU Measuring Massive Multitask Language Understanding A world-knowledge and complex reasoning benchmark
2020-12 Pile The Pile: An 800GB Dataset of Diverse Text for Language Modeling A diverse pretraining dataset
2021-06 LoRA LoRA: Low-Rank Adaptation of Large Language Models Memory-efficient finetuning
2021-09 FLAN Finetuned Language Models Are Zero-Shot Learners Instruction-finetuning
2021-10 T0 Multitask Prompted Training Enables Zero-Shot Task Generalization Also instruction finetuning, but applied to the much smaller T5
2021-12 Gopher Scaling Language Models: Methods, Analysis & Insights from Training Gopher A 280B LLM with comprehensive experiments
2022-01 CoT Chain-of-Thought Prompting Elicits Reasoning in Large Language Models Chain-of-Thought reasoning
2022-03 InstructGPT Training language models to follow instructions with human feedback GPT-3 instruction finetuned with RLHF (reinforcement learning from human feedback)
2022-03 Chinchilla Training Compute-Optimal Large Language Models A smaller (70B) version of Gopher that's pretrained on more data
2022-04 PaLM PaLM: Scaling Language Modeling with Pathways The largest dense language model at the time (540B)
2022-05 0-shot CoT Large Language Models are Zero-Shot Reasoners Tell LLMs to think step by step, and they can actually do it
2022-06 BIG Bench Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models Another world-knowledge and complex reasoning benchmark
2022-06 Emergent Ability Emergent Abilities of Large Language Models A review on emergent abilities
2022-10 Flan Scaling Instruction-Finetuned Language Models Consolidate all the existing instruction tuning datasets, and you get SOTA
2022-11 BLOOM BLOOM: A 176B-Parameter Open-Access Multilingual Language Model The largest open-source LLM at the time, trained on 46 languages, with detailed discussion about training and evaluation
2022-12 Self-Instruct Self-Instruct: Aligning Language Models with Self-Generated Instructions Instruction tuning using LLM-generated data
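To make the reading list a little more concrete: the scaled dot-product attention at the heart of the Transformer entry ("Attention Is All You Need") fits in a few lines. Below is a minimal, dependency-free sketch for a single head, with toy vectors and no masking, batching, or learned projections; it is an illustration, not a reference implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention for one head.
    Q, K, V: lists of d-dimensional vectors (lists of floats)."""
    d = len(Q[0])
    out = []
    for q in Q:
        # similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # output is the attention-weighted sum of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# One query that matches the first key more closely than the second,
# so the output leans toward the first value vector.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
print(attention(Q, K, V))
```

Real implementations vectorize this as `softmax(QKᵀ/√d)V`, add masking, and run many heads in parallel, but the weighted-sum intuition is exactly the one above.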

This list aims to provide the essential background for understanding current LLM technologies, and thus excludes more recent models such as LLaMA, GPT-4 or PaLM 2. For comprehensive reviews on these more general topics, we refer to other sources such as this paper or these repositories: Awesome-LLM, Awesome AIGC Tutorials. And for LLM applications in other specific domains: Awesome Domain LLM, Awesome Tool Learning, Awesome-LLM-MT, Awesome Education LLM.
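Likewise, the BPE entry in the reading list ("split rare words into subword units") boils down to repeatedly merging the most frequent adjacent symbol pair in the corpus. The toy sketch below performs one such merge step; the corpus counts are made up for illustration and this is not the paper's reference implementation.

```python
from collections import Counter

def bpe_step(words):
    """One BPE merge: find the most frequent adjacent symbol pair
    across the corpus and merge it everywhere.
    `words` maps a tuple of symbols to its corpus frequency."""
    pairs = Counter()
    for word, freq in words.items():
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += freq
    if not pairs:
        return words, None
    best = max(pairs, key=pairs.get)
    merged = {}
    for word, freq in words.items():
        out, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                out.append(word[i] + word[i + 1])  # fuse the pair into one symbol
                i += 2
            else:
                out.append(word[i])
                i += 1
        merged[tuple(out)] = freq
    return merged, best

# Toy corpus: word -> frequency, each word pre-split into characters.
vocab = {("l", "o", "w"): 5, ("l", "o", "w", "e", "r"): 2,
         ("n", "e", "w", "e", "s", "t"): 6, ("n", "e", "t"): 3}
vocab, merge = bpe_step(vocab)
print(merge)  # → ('n', 'e')
```

Running the step repeatedly builds up a merge table; tokenizing new text then just replays those merges, which is why rare words decompose into frequent subword units.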

Citation

If you find this repo or our survey helpful, please consider citing us:

@article{zhang2023unifying,
      title={Unifying the Perspectives of NLP and Software Engineering: A Survey on Language Models for Code},
      author={Ziyin Zhang and Chaoyu Chen and Bingchang Liu and Cong Liao and Zi Gong and Hang Yu and Jianguo Li and Rui Wang},
      year={2023},
      journal={CoRR},
      volume={abs/2311.07989},
      url={https://doi.org/10.48550/arXiv.2311.07989},
      doi={10.48550/ARXIV.2311.07989},
      eprint={2311.07989},
      eprinttype={arXiv},
}

Star History

Star History Chart

Join US

We are the AI Native team within the Platform Technology Business Group at Ant Group, dedicated to making Ant Group's platform engineering intelligent. Founded more than three years ago, our team has played a pivotal role in supporting the intelligent operation and maintenance of Ant Group's cloud computing infrastructure. Our mission is to build algorithm services and platforms with a broad user base through world-class technological innovation and impact, supporting the implementation of internal and external products and businesses. Embracing an innovation-driven ethos, our team not only supports business implementation but also propels technological influence. Over the past three years, we have published more than 20 papers at top conferences such as ICLR, NeurIPS, KDD, and ACL. Our innovative work has earned us two of Ant Technology's highest T-Star awards and one SuperMA award from Ant Group. Our open-source project CodeFuse has received 4K stars as of February 2024, and our models have been downloaded over 1.5 million times on Hugging Face and ModelScope.

We are on the lookout for top talents to join our vibrant team! If you're eager to develop your career in an environment filled with energy, innovation, and a culture of excellence, we welcome you to explore our career opportunities for both campus and experienced hires. Join us and be a part of creating the next milestone in the industry.

Campus Recruitment: https://hrrecommend.antgroup.com/guide.html?code=8uoP5mlus5DqQYbE_EnqcE2FD5JZH21MwvMUIb9mb6X3osXPuBraG54SyM8GLn_7

Experienced Hires: https://talent.antgroup.com/off-campus-position?positionId=1933830
