From aac7f578e0a9e64129abb4c9d06659bb04e7eb19 Mon Sep 17 00:00:00 2001
From: whcao <41630003+HIT-cwh@users.noreply.github.com>
Date: Mon, 29 Apr 2024 19:39:21 +0800
Subject: [PATCH] [Docs] Delete colab and add speed benchmark (#617)
* delete colab and add speed benchmark
* change speed benchmark figures
* fix en readme
---
README.md | 49 +++++++++++++++++++------------------------------
README_zh-CN.md | 49 +++++++++++++++++++------------------------------
2 files changed, 38 insertions(+), 60 deletions(-)
diff --git a/README.md b/README.md
index f92acf36f..e0be695ef 100644
--- a/README.md
+++ b/README.md
@@ -23,6 +23,20 @@ English | [简体中文](README_zh-CN.md)
+## 🚀 Speed Benchmark
+
+- Llama2 7B Training Speed
+
+  *(benchmark figure)*
+
+- Llama2 70B Training Speed
+
+  *(benchmark figure)*
+
## 🎉 News
- **\[2024/04\]** [LLaVA-Phi-3-mini](https://huggingface.co/xtuner/llava-phi-3-mini-hf) is released! Click [here](xtuner/configs/llava/phi3_mini_4k_instruct_clip_vit_large_p14_336) for details!
@@ -65,31 +79,6 @@ XTuner is an efficient, flexible and full-featured toolkit for fine-tuning large
- Support chatting with large models using pre-defined chat templates.
- The output models can seamlessly integrate with the deployment and serving toolkit ([LMDeploy](https://github.com/InternLM/lmdeploy)) and the large-scale evaluation toolkits ([OpenCompass](https://github.com/open-compass/opencompass), [VLMEvalKit](https://github.com/open-compass/VLMEvalKit)).
-## 🌟 Demos
-
-- Ready-to-use models and datasets from XTuner API [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/17CSO7T8q6KePuvu684IiHl6_id-CjPjh?usp=sharing)
-
-- QLoRA Fine-tune [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1QAEZVBfQ7LZURkMUtaq0b-5nEQII9G9Z?usp=sharing)
-
-- Plugin-based Chat [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/144OuTVyT_GvFyDMtlSlTzcxYIfnRsklq?usp=sharing)
-
-
-  *(table: Examples of Plugin-based Chat 🔥🔥🔥)*
-
## 🔥 Supports
@@ -112,13 +101,12 @@ XTuner is an efficient, flexible and full-featured toolkit for fine-tuning large
- InternLM2
- - InternLM
- - Llama
+ - Llama 3
- Llama 2
+ - Phi-3
- ChatGLM2
- ChatGLM3
- Qwen
- - Baichuan
- Baichuan2
- Mixtral 8x7B
- DeepSeek MoE
@@ -192,7 +180,7 @@ XTuner is an efficient, flexible and full-featured toolkit for fine-tuning large
pip install -e '.[all]'
```
-### Fine-tune [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1QAEZVBfQ7LZURkMUtaq0b-5nEQII9G9Z?usp=sharing)
+### Fine-tune
XTuner supports efficient fine-tuning (*e.g.*, QLoRA) of LLMs. Dataset preparation guides can be found in [dataset_prepare.md](./docs/en/user_guides/dataset_prepare.md).
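A typical fine-tuning run with the XTuner CLI looks like the sketch below. The config name is only an example; run `xtuner list-cfg` to see the configs shipped with your installed version.

```shell
# List the built-in configs:
xtuner list-cfg

# Copy one for local editing (example config name; substitute your own):
xtuner copy-cfg internlm2_chat_7b_qlora_oasst1_e3 .

# Launch training, optionally accelerated with DeepSpeed ZeRO-2:
xtuner train ./internlm2_chat_7b_qlora_oasst1_e3_copy.py --deepspeed deepspeed_zero2
```

After training, the saved `.pth` checkpoint can be converted to a HuggingFace-format model with the `xtuner convert pth_to_hf` command shown further below.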
@@ -235,7 +223,7 @@ XTuner supports the efficient fine-tune (*e.g.*, QLoRA) for LLMs. Dataset prepar
xtuner convert pth_to_hf ${CONFIG_NAME_OR_PATH} ${PTH} ${SAVE_PATH}
```
-### Chat [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/144OuTVyT_GvFyDMtlSlTzcxYIfnRsklq?usp=sharing)
+### Chat
XTuner provides tools to chat with pretrained / fine-tuned LLMs.
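For example, chatting with a model might look like the following (the model path matches the example elsewhere in this README; the prompt-template value is an assumption, check `xtuner chat --help` for the options your version accepts):

```shell
# Chat with a pretrained model using a pre-defined chat template:
xtuner chat internlm/internlm2-chat-7b --prompt-template internlm2_chat
```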
@@ -295,6 +283,7 @@ We appreciate all contributions to XTuner. Please refer to [CONTRIBUTING.md](.gi
## 🎖️ Acknowledgement
- [Llama 2](https://github.com/facebookresearch/llama)
+- [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- [QLoRA](https://github.com/artidoro/qlora)
- [LMDeploy](https://github.com/InternLM/lmdeploy)
- [LLaVA](https://github.com/haotian-liu/LLaVA)
diff --git a/README_zh-CN.md b/README_zh-CN.md
index dcc6649ff..c5037d28c 100644
--- a/README_zh-CN.md
+++ b/README_zh-CN.md
@@ -23,6 +23,20 @@
+## 🚀 Speed Benchmark
+
+- Training efficiency comparison between XTuner and LLaMA-Factory on Llama2-7B
+
+  *(benchmark figure)*
+
+- Training efficiency comparison between XTuner and LLaMA-Factory on Llama2-70B
+
+  *(benchmark figure)*
+
## 🎉 News
- **\[2024/04\]** The multimodal model [LLaVA-Phi-3-mini](https://huggingface.co/xtuner/llava-phi-3-mini-hf) is released! See [this document](xtuner/configs/llava/phi3_mini_4k_instruct_clip_vit_large_p14_336) for a quick start!
@@ -65,31 +79,6 @@ XTuner is an efficient, flexible and full-featured toolkit for fine-tuning large models.
- Numerous pre-defined chat templates for chatting with open-source or fine-tuned models.
- Output models integrate seamlessly with the deployment toolkit [LMDeploy](https://github.com/InternLM/lmdeploy) and the large-scale evaluation toolkits [OpenCompass](https://github.com/open-compass/opencompass) and [VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
-## 🌟 Demos
-
-- Ready-to-use models and datasets from the XTuner APIs [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/17CSO7T8q6KePuvu684IiHl6_id-CjPjh?usp=sharing)
-
-- QLoRA fine-tuning [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1QAEZVBfQ7LZURkMUtaq0b-5nEQII9G9Z?usp=sharing)
-
-- Plugin-based chat [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/144OuTVyT_GvFyDMtlSlTzcxYIfnRsklq?usp=sharing)
-
-  *(table: Plugin-based Chat 🔥🔥🔥)*
-
## 🔥 Supports
@@ -112,13 +101,12 @@ XTuner is an efficient, flexible and full-featured toolkit for fine-tuning large models.
- InternLM2
- - InternLM
- - Llama
+ - Llama 3
- Llama 2
+ - Phi-3
- ChatGLM2
- ChatGLM3
- Qwen
- - Baichuan
- Baichuan2
- Mixtral 8x7B
- DeepSeek MoE
@@ -192,7 +180,7 @@ XTuner is an efficient, flexible and full-featured toolkit for fine-tuning large models.
pip install -e '.[all]'
```
-### Fine-tune [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1QAEZVBfQ7LZURkMUtaq0b-5nEQII9G9Z?usp=sharing)
+### Fine-tune
XTuner supports fine-tuning large language models. Dataset preprocessing guides can be found in the [documentation](./docs/zh_cn/user_guides/dataset_prepare.md).
@@ -235,7 +223,7 @@ XTuner supports fine-tuning large language models. See the [documentation](.
xtuner convert pth_to_hf ${CONFIG_NAME_OR_PATH} ${PTH} ${SAVE_PATH}
```
-### Chat [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/144OuTVyT_GvFyDMtlSlTzcxYIfnRsklq?usp=sharing)
+### Chat
XTuner provides tools to chat with large language models.
@@ -295,6 +283,7 @@ xtuner chat internlm/internlm2-chat-7b --visual-encoder openai/clip-vit-large-pa
## 🎖️ Acknowledgement
- [Llama 2](https://github.com/facebookresearch/llama)
+- [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- [QLoRA](https://github.com/artidoro/qlora)
- [LMDeploy](https://github.com/InternLM/lmdeploy)
- [LLaVA](https://github.com/haotian-liu/LLaVA)