
AMD #23

Closed
taikai-zz opened this issue Jul 11, 2023 · 2 comments
Comments

@taikai-zz

Does this support AMD GPUs?

@jameswu2014
Collaborator

Not supported.

@MachineGunLin

I got Baichuan2 running on an AMD MI210; the steps below may serve as a reference:

docker pull rocm/deepspeed:rocm5.7_ubuntu20.04_py3.9_pytorch_2.0.1_DeepSpeed
# dfac3db401df is the ID of the image pulled above; check yours with `docker images`
docker run -it --device /dev/kfd --device /dev/dri dfac3db401df /bin/bash

# Don't run pip install -r requirements.txt directly; those are the NVIDIA (CUDA) dependencies
pip install numpy
pip install transformers
pip install sentencepiece
pip install tokenizers
pip install accelerate

# Install the ROCm build of bitsandbytes
cd /home
git clone https://github.com/Lzy17/bitsandbytes-rocm
cd bitsandbytes-rocm
make hip
python setup.py install

cd /home
git clone https://github.com/baichuan-inc/Baichuan2.git
cd Baichuan2/fine-tune

# The GPU doesn't support bf16 or tf32, so train with fp16
hostfile=""
deepspeed --hostfile=$hostfile fine-tune.py  \
    --report_to "none" \
    --data_path "data/belle_chat_ramdon_10k.json" \
    --model_name_or_path "baichuan-inc/Baichuan2-7B-Base" \
    --output_dir "output" \
    --model_max_length 512 \
    --num_train_epochs 4 \
    --per_device_train_batch_size 16 \
    --gradient_accumulation_steps 1 \
    --save_strategy epoch \
    --learning_rate 2e-5 \
    --lr_scheduler_type constant \
    --adam_beta1 0.9 \
    --adam_beta2 0.98 \
    --adam_epsilon 1e-8 \
    --max_grad_norm 1.0 \
    --weight_decay 1e-4 \
    --warmup_ratio 0.0 \
    --logging_steps 1 \
    --gradient_checkpointing True \
    --deepspeed ds_config.json \
    --fp16 True
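For run planning, the flags above determine the total number of optimizer steps. A small sketch, assuming the roughly 10,000-example dataset the data file name suggests and a hypothetical single node with 8 GPUs (adjust both to your setup):

```python
import math

# Assumed values: ~10k examples (from the data file name) and 8 GPUs (hypothetical)
num_examples = 10_000
per_device_train_batch_size = 16   # from the command above
gradient_accumulation_steps = 1    # from the command above
world_size = 8                     # assumed GPU count
num_train_epochs = 4               # from the command above

# Global batch = per-device batch * accumulation steps * number of GPUs
global_batch = per_device_train_batch_size * gradient_accumulation_steps * world_size
steps_per_epoch = math.ceil(num_examples / global_batch)
total_steps = steps_per_epoch * num_train_epochs
print(global_batch, steps_per_epoch, total_steps)
```

With these assumptions the run takes 316 optimizer steps (79 per epoch at a global batch of 128), which is useful when choosing `--logging_steps` or estimating wall-clock time.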

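The command passes `--deepspeed ds_config.json`, which comes from the Baichuan2 repo and is not shown in this thread. For fp16 training as above, the config must have fp16 enabled; a minimal illustrative fragment (an assumption for reference, not the repo's actual file) could look like:

```json
{
  "fp16": {
    "enabled": true
  },
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "zero_optimization": {
    "stage": 2
  }
}
```

The `"auto"` values let the Hugging Face Trainer fill in the matching command-line arguments; the ZeRO stage here is a placeholder and should follow whatever the repo's ds_config.json specifies.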