Support llava #577

Merged · 8 commits · Mar 20, 2024

Changes from 7 commits
1 change: 1 addition & 0 deletions README.md
@@ -222,6 +222,7 @@ You can refer to the following scripts to customize your own training script.
- Multi-Modal:
- [qwen-vl](https://github.com/QwenLM/Qwen-VL) series: qwen-vl, qwen-vl-chat, qwen-vl-chat-int4.
- [qwen-audio](https://github.com/QwenLM/Qwen-Audio) series: qwen-audio, qwen-audio-chat.
- [llava](https://github.com/haotian-liu/LLaVA) series: llava1d6-mistral-7b-chat.
- [deepseek-vl](https://github.com/deepseek-ai/DeepSeek-VL) series: deepseek-vl-1_3b-chat, deepseek-vl-7b-chat.
- [yi-vl](https://github.com/01-ai/Yi) series: yi-vl-6b-chat, yi-vl-34b-chat.
- [internlm-xcomposer2](https://github.com/InternLM/InternLM-XComposer) series: internlm-xcomposer2-7b-chat.
1 change: 1 addition & 0 deletions README_CN.md
@@ -222,6 +222,7 @@ app_ui_main(infer_args)
- Multi-Modal:
- [qwen-vl](https://github.com/QwenLM/Qwen-VL) series: qwen-vl, qwen-vl-chat, qwen-vl-chat-int4.
- [qwen-audio](https://github.com/QwenLM/Qwen-Audio) series: qwen-audio, qwen-audio-chat.
- [llava](https://github.com/haotian-liu/LLaVA) series: llava1d6-mistral-7b-chat.
- [deepseek-vl](https://github.com/deepseek-ai/DeepSeek-VL) series: deepseek-vl-1_3b-chat, deepseek-vl-7b-chat.
- [yi-vl](https://github.com/01-ai/Yi) series: yi-vl-6b-chat, yi-vl-34b-chat.
- [internlm-xcomposer2](https://github.com/InternLM/InternLM-XComposer) series: internlm-xcomposer2-7b-chat.
9 changes: 1 addition & 8 deletions docs/source/LLM/index.md
@@ -8,14 +8,7 @@


### Multi-Modal Best Practice Series

1. [Qwen-VL Best Practice](../Multi-Modal/qwen-vl最佳实践.md)
2. [Qwen-Audio Best Practice](../Multi-Modal/qwen-auidio最佳实践.md)
3. [Deepseek-VL Best Practice](../Multi-Modal/deepseek-vl最佳实践.md)
4. [Yi-VL Best Practice](../Multi-Modal/yi-vl最佳实践.md)
5. [InternLM-XComposer2 Best Practice](../Multi-Modal/internlm-xcomposer2最佳实践.md)
6. [MiniCPM-V Best Practice](../Multi-Modal/minicpm-v最佳实践.md)
7. [CogVLM Best Practice](../Multi-Modal/cogvlm最佳实践.md)
See here: [Multi-Modal Best Practice Series](../Multi-Modal/index.md)


### Tutorials
5 changes: 3 additions & 2 deletions docs/source/LLM/支持的模型和数据集.md
@@ -78,15 +78,16 @@
|llama2-70b|[modelscope/Llama-2-70b-ms](https://modelscope.cn/models/modelscope/Llama-2-70b-ms/summary)|q_proj, k_proj, v_proj|default-generation-bos|✔|✔||-|
|llama2-70b-chat|[modelscope/Llama-2-70b-chat-ms](https://modelscope.cn/models/modelscope/Llama-2-70b-chat-ms/summary)|q_proj, k_proj, v_proj|llama|✔|✔||-|
|llama2-7b-aqlm-2bit-1x16|[AI-ModelScope/Llama-2-7b-AQLM-2Bit-1x16-hf](https://modelscope.cn/models/AI-ModelScope/Llama-2-7b-AQLM-2Bit-1x16-hf/summary)|q_proj, k_proj, v_proj|default-generation-bos|✔|✘|transformers>=4.38, aqlm, torch>=2.2.0|-|
|llava1d6-mistral-7b-chat|[AI-ModelScope/llava-v1.6-mistral-7b](https://modelscope.cn/models/AI-ModelScope/llava-v1.6-mistral-7b/summary)|q_proj, k_proj, v_proj|llava-mistral|✔|✘|transformers>=4.34|multi-modal, vision|
|yi-6b|[01ai/Yi-6B](https://modelscope.cn/models/01ai/Yi-6B/summary)|q_proj, k_proj, v_proj|default-generation|✔|✔||-|
|yi-6b-200k|[01ai/Yi-6B-200K](https://modelscope.cn/models/01ai/Yi-6B-200K/summary)|q_proj, k_proj, v_proj|default-generation|✔|✔||-|
|yi-6b-chat|[01ai/Yi-6B-Chat](https://modelscope.cn/models/01ai/Yi-6B-Chat/summary)|q_proj, k_proj, v_proj|yi|✔|✔||-|
|yi-9b|[01ai/Yi-9B](https://modelscope.cn/models/01ai/Yi-9B/summary)|q_proj, k_proj, v_proj|default-generation|✔|✔||-|
|yi-34b|[01ai/Yi-34B](https://modelscope.cn/models/01ai/Yi-34B/summary)|q_proj, k_proj, v_proj|default-generation|✔|✔||-|
|yi-34b-200k|[01ai/Yi-34B-200K](https://modelscope.cn/models/01ai/Yi-34B-200K/summary)|q_proj, k_proj, v_proj|default-generation|✔|✔||-|
|yi-34b-chat|[01ai/Yi-34B-Chat](https://modelscope.cn/models/01ai/Yi-34B-Chat/summary)|q_proj, k_proj, v_proj|yi|✔|✔||-|
|yi-vl-6b-chat|[01ai/Yi-VL-6B](https://modelscope.cn/models/01ai/Yi-VL-6B/summary)|q_proj, k_proj, v_proj|yi-vl|✘|✘|transformers>=4.34|multi-modal, vision|
|yi-vl-34b-chat|[01ai/Yi-VL-34B](https://modelscope.cn/models/01ai/Yi-VL-34B/summary)|q_proj, k_proj, v_proj|yi-vl|✘|✘|transformers>=4.34|multi-modal, vision|
|yi-vl-6b-chat|[01ai/Yi-VL-6B](https://modelscope.cn/models/01ai/Yi-VL-6B/summary)|q_proj, k_proj, v_proj|yi-vl|✔|✘|transformers>=4.34|multi-modal, vision|
|yi-vl-34b-chat|[01ai/Yi-VL-34B](https://modelscope.cn/models/01ai/Yi-VL-34B/summary)|q_proj, k_proj, v_proj|yi-vl|✔|✘|transformers>=4.34|multi-modal, vision|
|internlm-7b|[Shanghai_AI_Laboratory/internlm-7b](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm-7b/summary)|q_proj, k_proj, v_proj|default-generation-bos|✘|✔||-|
|internlm-7b-chat|[Shanghai_AI_Laboratory/internlm-chat-7b-v1_1](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm-chat-7b-v1_1/summary)|q_proj, k_proj, v_proj|internlm|✘|✔||-|
|internlm-7b-chat-8k|[Shanghai_AI_Laboratory/internlm-chat-7b-8k](https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm-chat-7b-8k/summary)|q_proj, k_proj, v_proj|internlm|✘|✔||-|
15 changes: 8 additions & 7 deletions docs/source/Multi-Modal/index.md
@@ -2,10 +2,11 @@

### Multi-Modal Best Practice Series

1. [Qwen-VL Best Practice](../Multi-Modal/qwen-vl最佳实践.md)
2. [Qwen-Audio Best Practice](../Multi-Modal/qwen-auidio最佳实践.md)
3. [Deepseek-VL Best Practice](../Multi-Modal/deepseek-vl最佳实践.md)
4. [Yi-VL Best Practice](../Multi-Modal/yi-vl最佳实践.md)
5. [InternLM-XComposer2 Best Practice](../Multi-Modal/internlm-xcomposer2最佳实践.md)
6. [MiniCPM-V Best Practice](../Multi-Modal/minicpm-v最佳实践.md)
7. [CogVLM Best Practice](../Multi-Modal/cogvlm最佳实践.md)
1. [Qwen-VL Best Practice](qwen-vl最佳实践.md)
2. [Qwen-Audio Best Practice](qwen-auidio最佳实践.md)
3. [Llava Best Practice](llava最佳实践.md)
4. [Deepseek-VL Best Practice](deepseek-vl最佳实践.md)
5. [Yi-VL Best Practice](yi-vl最佳实践.md)
6. [InternLM-XComposer2 Best Practice](internlm-xcomposer2最佳实践.md)
7. [MiniCPM-V Best Practice](minicpm-v最佳实践.md)
8. [CogVLM Best Practice](cogvlm最佳实践.md)
209 changes: 209 additions & 0 deletions docs/source/Multi-Modal/llava最佳实践.md
@@ -0,0 +1,209 @@

# Llava Best Practice

## Table of Contents
- [Environment Setup](#environment-setup)
- [Inference](#inference)
- [Fine-tuning](#fine-tuning)
- [Inference After Fine-tuning](#inference-after-fine-tuning)


## Environment Setup
```shell
git clone https://github.com/modelscope/swift.git
cd swift
pip install -e .[llm]
```

## Inference

Inference with [llava1d6-mistral-7b-chat](https://modelscope.cn/models/AI-ModelScope/llava-v1.6-mistral-7b/summary):
```shell
# Experimental environment: A10, 3090, V100...
# 20GB GPU memory
CUDA_VISIBLE_DEVICES=0 swift infer --model_type llava1d6-mistral-7b-chat
```

Output: (local paths or URLs can both be passed in)
```python
"""
<<< Describe this image.
Input a media path or URL <<< http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/cat.png
The image shows a close-up of a kitten with a soft, blurred background that suggests a natural, outdoor setting. The kitten has a mix of white and gray fur with darker stripes, typical of a tabby pattern. Its eyes are wide open, with a striking blue color that contrasts with the kitten's fur. The kitten's nose is small and pink, and its whiskers are long and white, adding to the kitten's cute and innocent appearance. The lighting in the image is soft and diffused, creating a gentle and warm atmosphere. The focus is sharp on the kitten's face, while the rest of the image is slightly out of focus, which draws attention to the kitten's features.
--------------------------------------------------
<<< How many sheep are in the picture?
Input a media path or URL <<< http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/animal.png
There are four sheep in the picture.
--------------------------------------------------
<<< What is the calculation result?
Input a media path or URL <<< http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/math.png
The calculation result is 14352 + 45304 = 145304.
--------------------------------------------------
<<< Write a poem based on the content of the picture.
Input a media path or URL <<< http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/poem.png
In the quiet of the night,
A solitary boat takes flight,
Across the water's gentle swell,
Underneath the stars that softly fell.

The boat, a vessel of the night,
Carries but one, a lone delight,
A solitary figure, lost in thought,
In the tranquil calm, they find a wraith.

The stars above, like diamonds bright,
Reflect upon the water's surface light,
Creating a path for the boat's journey,
Guiding through the night with a gentle purity.

The boat, a silent sentinel,
In the stillness, it gently swells,
A vessel of peace and calm,
In the quiet of the night, it carries on.

The figure on board, a soul at ease,
In the serene embrace of nature's peace,
They sail through the night,
Under the watchful eyes of the stars' light.

The boat, a symbol of solitude,
In the vast expanse of the universe's beauty,
A lone journey, a solitary quest,
In the quiet of the night, it finds its rest.
"""
```

The sample images are shown below:

cat:

<img src="http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/cat.png" width="250" style="display: inline-block;">

animal:

<img src="http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/animal.png" width="250" style="display: inline-block;">

math:

<img src="http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/math.png" width="250" style="display: inline-block;">

poem:

<img src="http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/poem.png" width="250" style="display: inline-block;">

**Single-Sample Inference**

```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

from swift.llm import (
    get_model_tokenizer, get_template, inference, ModelType,
    get_default_template_type, inference_stream
)
from swift.utils import seed_everything
import torch

model_type = ModelType.llava1d6_mistral_7b_chat
template_type = get_default_template_type(model_type)
print(f'template_type: {template_type}')

model, tokenizer = get_model_tokenizer(model_type, torch.float16,
                                       model_kwargs={'device_map': 'auto'})
model.generation_config.max_new_tokens = 256
template = get_template(template_type, tokenizer)
seed_everything(42)

images = ['http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/road.png']
query = 'How far is it from each city?'
response, _ = inference(model, template, query, images=images)
print(f'query: {query}')
print(f'response: {response}')

# Streaming
query = 'Which city is the farthest?'
gen = inference_stream(model, template, query, images=images)
print_idx = 0
print(f'query: {query}\nresponse: ', end='')
for response, _ in gen:
    delta = response[print_idx:]
    print(delta, end='', flush=True)
    print_idx = len(response)
print()
"""
query: How far is it from each city?
response: The image shows a road sign indicating the distances to three cities: Mata, Yangjiang, and Guangzhou. The distances are given in kilometers.

- Mata is 14 kilometers away.
- Yangjiang is 62 kilometers away.
- Guangzhou is 293 kilometers away.

Please note that these distances are as the crow flies and do not take into account the actual driving distance due to road conditions, traffic, or other factors.
query: Which city is the farthest?
response: The farthest city listed on the sign is Mata, which is 14 kilometers away.
"""
```

The sample image is shown below:

road:

<img src="http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/road.png" width="250" style="display: inline-block;">


## Fine-tuning
Multimodal model fine-tuning usually uses a **custom dataset**. Here is a demo that can be run directly:

LoRA fine-tuning:

(By default, only the qkv projections of the LLM part are fine-tuned with LoRA. If you want to fine-tune all linear layers, including the vision model part, specify `--lora_target_modules ALL`; see the sketch after the command below.)
```shell
# Experimental environment: A10, 3090, V100...
# 21GB GPU memory
CUDA_VISIBLE_DEVICES=0 swift sft \
    --model_type llava1d6-mistral-7b-chat \
    --dataset coco-mini-en-2
```
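A minimal variant that also adapts the vision tower, assuming the same single-GPU setup as the demo above; the only change is the `--lora_target_modules ALL` flag mentioned in the note, so expect somewhat higher memory usage:
```shell
# Sketch: LoRA on all linear layers (LLM and vision parts), assuming the same
# environment as the demo above; memory usage will be somewhat higher.
CUDA_VISIBLE_DEVICES=0 swift sft \
    --model_type llava1d6-mistral-7b-chat \
    --dataset coco-mini-en-2 \
    --lora_target_modules ALL
```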

Full-parameter fine-tuning:
```shell
# Experimental environment: 4 * A100
# 4 * 70 GPU memory
NPROC_PER_NODE=4 CUDA_VISIBLE_DEVICES=0,1,2,3 swift sft \
    --model_type llava1d6-mistral-7b-chat \
    --dataset coco-mini-en-2 \
    --train_dataset_sample -1 \
    --sft_type full \
    --deepspeed default-zero2
```


[Custom datasets](../LLM/自定义与拓展.md#-推荐命令行参数的形式) support the json and jsonl formats. Below is an example of a custom dataset; a runnable sketch follows it.

(Only single-turn dialogue is supported; each dialogue must contain exactly one image; local paths or URLs can be passed in.)

```jsonl
{"query": "55555", "response": "66666", "images": ["image_path"]}
{"query": "eeeee", "response": "fffff", "images": ["image_path"]}
{"query": "EEEEE", "response": "FFFFF", "images": ["image_path"]}
```
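
To train on such a file, the linked custom-dataset doc recommends passing it via command-line arguments. A minimal sketch, assuming the `--custom_train_dataset_path` flag described in that doc and a hypothetical file name `my_data.jsonl`:
```shell
# Sketch: LoRA fine-tuning on a custom jsonl dataset; my_data.jsonl is a
# hypothetical file in the format shown above, and --custom_train_dataset_path
# is the flag described in the linked custom-dataset doc.
CUDA_VISIBLE_DEVICES=0 swift sft \
    --model_type llava1d6-mistral-7b-chat \
    --custom_train_dataset_path my_data.jsonl
```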


## Inference After Fine-tuning
Direct inference:
```shell
CUDA_VISIBLE_DEVICES=0 swift infer \
    --ckpt_dir output/llava1d6-mistral-7b-chat/vx-xxx/checkpoint-xxx \
    --load_dataset_config true
```

**merge-lora** and inference:
```shell
CUDA_VISIBLE_DEVICES=0 swift export \
    --ckpt_dir output/llava1d6-mistral-7b-chat/vx-xxx/checkpoint-xxx \
    --merge_lora true

CUDA_VISIBLE_DEVICES=0 swift infer \
    --ckpt_dir output/llava1d6-mistral-7b-chat/vx-xxx/checkpoint-xxx-merged \
    --load_dataset_config true
```
2 changes: 1 addition & 1 deletion docs/source/Multi-Modal/qwen-audio最佳实践.md
@@ -121,7 +121,7 @@ CUDA_VISIBLE_DEVICES=0,1 swift sft \

# ZeRO2
# Experimental environment: 4 * A100
# 2 * 80 GPU memory
# 4 * 80 GPU memory
NPROC_PER_NODE=4 CUDA_VISIBLE_DEVICES=0,1,2,3 swift sft \
    --model_type qwen-audio-chat \
    --dataset aishell1-mini-zh \
9 changes: 5 additions & 4 deletions docs/source/Multi-Modal/qwen-vl最佳实践.md
@@ -45,9 +45,10 @@ Picture 2:<img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/cat.pn
What is the calculation result?#
1452 + 45304 = 46756
--------------------------------------------------
<<< clear
<<<[M] Picture 1:<img>http://modelscope-open.oss-cn-hangzhou.aliyuncs.com/images/poem.png</img>
Write a poem based on the content of the picture.#
Starlight glimmers on the lake; a lone boat's shadow rests as if asleep. A man raises a lantern to light the valley, a little cat keeping him company.
Moonlight like water, the boat like a star; sitting alone at the bow in the night wind. The deep forest's reflection lies on the water; dots of firefly light guide the boat along.
"""
```

@@ -142,9 +143,9 @@ CUDA_VISIBLE_DEVICES=0 swift sft \

Full-parameter fine-tuning:
```shell
# Experimental environment: 2 * A100
# 2 * 55 GPU memory
CUDA_VISIBLE_DEVICES=0,1 swift sft \
# Experimental environment: 4 * A100
# 4 * 70 GPU memory
NPROC_PER_NODE=4 CUDA_VISIBLE_DEVICES=0,1,2,3 swift sft \
    --model_type qwen-vl-chat \
    --dataset coco-mini-en \
    --train_dataset_sample -1 \
4 changes: 4 additions & 0 deletions swift/llm/infer.py
@@ -478,6 +478,10 @@ def llm_infer(args: InferArguments) -> None:
    print('-' * 50)
    if args.save_result and args.ckpt_dir is not None:
        logger.info(f'save_result_path: {jsonl_path}')
    if args.val_dataset_sample == 10:  # is default
        logger.info(
            'You can set `--val_dataset_sample -1` to perform inference on the entire dataset.'
        )
    return {'result': result}


5 changes: 4 additions & 1 deletion swift/llm/utils/argument.py
@@ -407,7 +407,10 @@ def __post_init__(self) -> None:
            self.max_length = None

        if self.deepspeed is not None:
            assert not is_mp(), 'DeepSpeed is not compatible with MP.'
            if is_mp():
                raise ValueError('DeepSpeed is not compatible with MP. '
                                 f'n_gpu: {torch.cuda.device_count()}, '
                                 f'local_world_size: {get_dist_setting()[3]}.')
            require_version('deepspeed')
            if self.deepspeed.endswith('.json') or os.path.isfile(
                    self.deepspeed):