This repository lists key projects and related demos about CodeFuse.
CodeFuse aims to develop Code Large Language Models (Code LLMs) to support and enhance full-lifecycle AI-native software development, covering crucial stages such as design and requirements, coding, testing, building, deployment, operations, and insight analysis. Below is the overall framework of CodeFuse.
**2024.04** CodeFuse-muAgent: a multi-agent framework. For more details, see Release & Next Release.
We list the repositories according to the lifecycle stages above.
LifeCycle Stage | Project Repository | Repo-Description | Road Map |
---|---|---|---|
Requirement & Design | MFT-VLM | Instruction fine-tuning for vision-language tasks | |
Coding | MFTCoder | Instruction-tuning framework | |
 | FastTransformer4CodeFuse | FT-based inference engine | |
 | CodeFuse-Eval | Evaluation kits for CodeFuse | |
Test & Build | TestAgent | TestGPT demo frontend | |
DevOps | DevOps-Eval | Benchmark for DevOps | |
 | DevOps-Model | Index of DevOps models | |
Data Insight | NA | NA | |
Base | ChatBot | General chatbot frontend for CodeFuse | |
 | muAgent | Multi-agent framework | |
 | ModelCache | Semantic cache for LLM serving | |
 | CodeFuse-Query | Query-based code analysis engine | |
Others | CoCA | Colinear attention | |
 | Awesome-Code-LLM | Code-LLM survey | |
 | This Repo | General introduction & index of CodeFuse repos | |
ModelName | Short Description | Model Links |
---|---|---|
CodeFuse-13B | Trained from scratch by CodeFuse | HF ; MS |
CodeFuse-CodeLLaMA-34B | Fine-tuned on CodeLLaMA-34B | HF ; MS |
** CodeFuse-CodeLLaMA-34B-4bits | 4-bit quantized 34B model | HF ; MS |
CodeFuse-StarCoder-15B | Fine-tuned on StarCoder-15B | HF ; MS |
CodeFuse-Qwen-14B | Fine-tuned on Qwen-14B | HF ; MS |
CodeFuse-CodeGeeX2-6B | Fine-tuned on CodeGeeX2-6B | HF ; MS |
CodeFuse-DevOps-14B-Chat | Fine-tuned on DevOps-14B | HF ; MS |
CodeFuse-DevOps-14B-Base | Continued training on Qwen-14B | HF ; MS |
CodeFuse-TestGPT-7B | Fine-tuned on CodeLLaMA-7B | HF ; MS |
CodeFuse-DeepSeek-33B | Fine-tuned on DeepSeek-Coder-33B | HF ; MS |
** CodeFuse-DeepSeek-33B-4bits | 4-bit quantized 33B model | HF ; MS |
CodeFuse-Mixtral-8x7B | Fine-tuned on Mixtral-8x7B (MoE) | HF ; MS |
CodeFuse-VLM-14B | SoTA vision-language model | HF ; MS |
`**` marks recommended models.
- Video demos: Chinese version below; English version in preparation.

  demo_video.mp4
- Online demo: you can try our CodeFuse-CodeLlama-34B model on ModelScope: CodeFuse-CodeLlama34B-MFT-Demo
- You can also install the CodeFuse-Chatbot to test our models locally.
- Download our models from:
  - HuggingFace
  - ModelScope
  - WiseModel
- To train or fine-tune your own models, try our MFTCoder, which enables efficient fine-tuning for multi-task, multi-model, and multi-training-framework scenarios.
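As a quick illustration of testing a model locally, here is a minimal sketch of loading a CodeFuse checkpoint with the Hugging Face `transformers` library. The repository id `codefuse-ai/CodeFuse-CodeLlama-34B`, the loading options, and the sample prompt are assumptions for illustration; check the model card on HuggingFace or ModelScope for the exact id, prompt format, and recommended settings.

```python
# Minimal sketch (assumption): loading a CodeFuse checkpoint via transformers.
# The model id below is assumed; verify it on the HuggingFace model card.
MODEL_ID = "codefuse-ai/CodeFuse-CodeLlama-34B"

def load_codefuse(model_id: str = MODEL_ID):
    """Load tokenizer and model. A 34B model needs substantial GPU memory;
    the 4-bit quantized variant is an option for smaller setups."""
    # Imported lazily so the file can be read without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",  # spread weights across available GPUs
        trust_remote_code=True,
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_codefuse()
    # Illustrative completion request; the real chat/prompt template may differ.
    inputs = tokenizer("def quicksort(arr):", return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

For machines that cannot hold full-precision 34B weights, the quantized `-4bits` checkpoints in the table above are the intended lighter-weight alternative.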
For more technical details about CodeFuse, please refer to our paper MFTCoder.
If you find our work useful or helpful for your R&D work, please feel free to cite our paper as follows.
@article{mftcoder2023,
title={MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning},
author={Bingchang Liu and Chaoyu Chen and Cong Liao and Zi Gong and Huan Wang and Zhichao Lei and Ming Liang and Dajun Chen and Min Shen and Hailian Zhou and Hang Yu and Jianguo Li},
year={2023},
journal={arXiv preprint arXiv:2311.02303},
archivePrefix={arXiv},
eprint={2311.02303}
}