Code and data for EMNLP'24 paper "CoCoST: Automatic Complex Code Generation with Online Searching and Correctness Testing".

CoCoST: Automatic Complex Code Generation with Online Searching and Correctness Testing

Large Language Models have revolutionized code generation ability by converting natural language descriptions into executable code. However, generating complex code within real-world scenarios remains challenging due to intricate structures, subtle bugs, understanding of advanced data types, and lack of supplementary contents. To address these challenges, we introduce the CoCoST framework, which enhances complex code generation by online searching for more information with planned queries and correctness testing for code refinement. Moreover, CoCoST serializes the complex inputs and outputs to improve comprehension and generates test cases to ensure the adaptability for real-world applications. CoCoST is validated through rigorous experiments on the DS-1000 and ClassEval datasets. Experimental results show that CoCoST substantially improves the quality of complex code generation, highlighting its potential to enhance the practicality of LLMs in generating complex code.

Quick Start

This repository provides the implementation of the methods described in our paper. The workflow consists of three stages: Retrieval, Refinement, and Evaluation.

0. Setup

Before running the pipeline, make sure to complete the following steps:

  • Install dependencies

    pip install -r requirements.txt
  • Add your custom LLM API function

    • Create a function run_llm inside the llm_api folder.

    • The function should have the following signature:

    def run_llm(model: str, prompt: str) -> str:
        # return the LLM response
    
  • Add your custom search user agents

    To speed up online search, add your user agent list to search/user_agent.py. This allows the retrieval scripts to rotate user agents efficiently.

1. Retrieval

This stage contains three steps: planning, online search, and code generation.

python -m run.run_retrieve \
    --model <MODEL_NAME> \
    --is_azure \
    --output_dir <OUTPUT_DIR>

python -m run.run_retrieve1 \
    --model <MODEL_NAME> \
    --is_azure \
    --output_dir <OUTPUT_DIR>

python -m run.run_retrieve2 \
    --model <MODEL_NAME> \
    --is_azure \
    --retrieve_dir <OUTPUT_DIR> \
    --output_dir <OUTPUT_DIR_2> \
    --overwrite_output_dir

Replace <MODEL_NAME> and <OUTPUT_DIR> with your desired model and output directory. Example: gpt-35-turbo-16k-0613 and gpt_retrieve_outputs.

2. Refinement

This stage contains three steps: test case generation, execution, and code refinement.

python -m run.run_gen_test_case \
    --model <MODEL_NAME> \
    --is_azure \
    --output_dir <TEST_CASE_OUTPUT_DIR>
    
python -m run.run_refinement \
    --model <MODEL_NAME> \
    --output_dir <OUTPUT_DIR_2>  \
    --refine_output_dir <REFINED_OUTPUT_DIR> \
    --use_gen_test_case True \
    --test_case_dir <TEST_CASE_OUTPUT_DIR>

python -m run.run_refinement1 \
    --model <MODEL_NAME> \
    --is_azure \
    --output_dir <OUTPUT_DIR_2>  \
    --output_retrieve_dir <REFINED_OUTPUT_DIR> \
    --refine_output_dir <REFINED_OUTPUT_DIR_2>

Replace <OUTPUT_DIR_2> and <REFINED_OUTPUT_DIR> with your previous retrieval outputs and desired refinement outputs.

3. Evaluation

python -m run.run_test \
    --model <MODEL_NAME> \
    --output_dir <REFINED_OUTPUT_DIR_2> \
    --output_dir4refined <OUTPUT_DIR_2>

Replace <OUTPUT_DIR_2> and <REFINED_OUTPUT_DIR_2> with your previous retrieval outputs and refinement outputs.

Citation

If you find this repository useful, please consider giving ⭐ or citing:

@inproceedings{he-etal-2024-cocost,
    title = "{C}o{C}o{ST}: Automatic Complex Code Generation with Online Searching and Correctness Testing",
    author = "He, Xinyi  and
      Zou, Jiaru  and
      Lin, Yun  and
      Zhou, Mengyu  and
      Han, Shi  and
      Yuan, Zejian  and
      Zhang, Dongmei",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.1082/",
    doi = "10.18653/v1/2024.emnlp-main.1082",
    pages = "19433--19451",
}

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.