
load_in_8bit can default to False #26

Open
weberrr opened this issue Apr 1, 2023 · 2 comments
Comments

@weberrr

weberrr commented Apr 1, 2023

Changing the load_in_8bit parameter means you don't have to install the latest bitsandbytes and peft (which can be very costly, especially when dealing with CUDA and other environment dependencies).
I recommend load_in_8bit=False: it does not affect model training or loading, and it lets everyone get up and running faster.
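A minimal sketch of what this toggle might look like when assembling the `from_pretrained` arguments (the helper name and exact kwargs are illustrative, not taken from this repo's code):

```python
def build_load_kwargs(load_in_8bit: bool) -> dict:
    """Assemble illustrative kwargs for model loading.

    With load_in_8bit=True, a recent bitsandbytes (and a compatible CUDA
    setup) is required; with False, a plain half-precision load avoids
    that dependency entirely.
    """
    if load_in_8bit:
        # 8-bit quantized load: needs up-to-date bitsandbytes/peft.
        return {"load_in_8bit": True, "device_map": "auto"}
    # fp16 load: no bitsandbytes version constraints.
    # ("float16" stands in for torch.float16 to keep this sketch
    # dependency-free.)
    return {"torch_dtype": "float16"}
```

These kwargs would then be passed to `AutoModelForCausalLM.from_pretrained(...)`; the point of the suggestion above is that the `False` branch has no bitsandbytes requirement.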

@PhoebusSi PhoebusSi reopened this Apr 3, 2023
@snippetzero

I tested alpaca-lora fine-tuning with both load_in_8bit=True and load_in_8bit=False. With load_in_8bit=True, training was more than twice as slow as with False. What might be the reason for this?

@PhoebusSi
Owner

I haven't compared speed, but with load_in_8bit=True the GPU memory requirement drops substantially, which is friendlier to people who only have 24/32 GB cards.
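As a rough back-of-the-envelope check (my own arithmetic, not a measurement from this thread): a 7B-parameter model's weights alone take about 14 GB in fp16 but only about 7 GB in int8, which is why 8-bit loading matters on 24/32 GB cards even before counting activations and optimizer state.

```python
def weight_memory_gb(n_params: float, bytes_per_param: int) -> float:
    """Approximate memory for model weights only (ignores activations,
    gradients, and optimizer state, which add substantially more)."""
    return n_params * bytes_per_param / 1e9

fp16_gb = weight_memory_gb(7e9, 2)  # fp16: 2 bytes/param -> 14.0 GB
int8_gb = weight_memory_gb(7e9, 1)  # int8: 1 byte/param  ->  7.0 GB
```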
