
CodeScope



CodeScope is an execution-based, multilingual, multi-task, multi-dimensional evaluation benchmark for comprehensively gauging LLM capabilities on coding tasks. It covers 43 programming languages and 8 coding tasks, and evaluates the coding performance of LLMs along three dimensions (perspectives): difficulty, efficiency, and length.
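Conceptually, an execution-based check runs a model-generated program against test cases and compares its actual output to the expected output. The following sketch illustrates this idea in Python; it is a generic illustration under assumed file names and test data, not the repository's actual evaluation harness.

# Minimal sketch of an execution-based check: run a candidate program on a
# test input and compare its stdout to the expected output. File names, test
# data, and the "python" interpreter on PATH are assumptions for illustration.
import subprocess

def passes_test(source_path: str, stdin_data: str, expected_stdout: str,
                timeout_s: float = 5.0) -> bool:
    """Execute a Python candidate solution and compare its stdout to the expectation."""
    try:
        result = subprocess.run(
            ["python", source_path],
            input=stdin_data,
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return False  # treat timeouts as failures
    return result.returncode == 0 and result.stdout.strip() == expected_stdout.strip()

# Hypothetical usage: a synthesized solution counts as correct only if it passes all tests.
print(passes_test("candidate_solution.py", "1 2\n", "3"))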

Datasets

The datasets are available via 🤗 Hugging Face, Google Drive, or GitHub Data.
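For a quick start, the data can be loaded with the Hugging Face datasets library, as in the sketch below; the repository ID and configuration name are assumptions for illustration, so check the dataset card for the actual identifiers.

# Minimal sketch, assuming the benchmark is published on the Hugging Face Hub
# under the repository ID "WeixiangYAN/CodeScope" (adjust to the actual dataset card).
from datasets import load_dataset

# The configuration name "code_summarization" is hypothetical; substitute one of
# the eight CodeScope tasks as named on the dataset card.
dataset = load_dataset("WeixiangYAN/CodeScope", "code_summarization", split="test")

print(dataset[0])  # inspect one example record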

Code

CodeScope evaluates the comprehensive abilities of LLMs in code understanding and code generation across eight coding tasks.

Code Understanding

  1. Code Summarization
  2. Code Smell
  3. Code Review
  4. Automated Testing

Code Generation

  1. Program Synthesis
  2. Code Translation
  3. Code Repair
  4. Code Optimization

Citation

Please cite the paper if you use the data or code from CodeScope.

@misc{yan2023codescope,
      title={CodeScope: An Execution-based Multilingual Multitask Multidimensional Benchmark for Evaluating LLMs on Code Understanding and Generation},
      author={Weixiang Yan and Haitian Liu and Yunkun Wang and Yunzhe Li and Qian Chen and Wen Wang and Tingyu Lin and Weishan Zhao and Li Zhu and Shuiguang Deng and Hari Sundaram},
      year={2023},
      eprint={2311.08588},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Contact

For questions, please feel free to reach out via email at weixiangyan@ucsb.edu.

