Change the load_in_8bit parameter so that installing the latest bitsandbytes and peft is not required (sorting out CUDA and other environment dependencies in particular can be very costly). I recommend load_in_8bit=False: it does not affect model training or loading, and it lets everyone get up and running faster.
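The suggested change amounts to toggling a single keyword argument at model-load time. A minimal sketch of the two loading configurations, assuming a Hugging Face `transformers`-style `from_pretrained` call as used in alpaca-lora; the helper name and the exact kwargs here are illustrative, not the project's actual code:

```python
def model_load_kwargs(load_in_8bit: bool) -> dict:
    """Illustrative: build kwargs for AutoModelForCausalLM.from_pretrained()."""
    kwargs = {"device_map": "auto"}
    if load_in_8bit:
        # 8-bit weights: requires a recent bitsandbytes built against
        # your local CUDA, which is the dependency cost discussed above.
        kwargs["load_in_8bit"] = True
    else:
        # Half-precision load: no bitsandbytes/peft version constraint,
        # at the cost of roughly double the weight memory vs. int8.
        kwargs["torch_dtype"] = "float16"
    return kwargs

# e.g. AutoModelForCausalLM.from_pretrained(model_id, **model_load_kwargs(False))
print(model_load_kwargs(True))
print(model_load_kwargs(False))
```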
I tested finetuning with alpaca-lora under both load_in_8bit=True and load_in_8bit=False. With load_in_8bit=True, training was more than twice as slow as with False. What could be causing this?
I haven't compared speed, but load_in_8bit=True greatly reduces GPU memory usage, which is friendlier to people with only 24/32 GB cards.
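The memory trade-off mentioned above can be made concrete with back-of-the-envelope arithmetic for a 7B-parameter LLaMA-class model (weights only; optimizer states, activations, and LoRA adapters are extra, so these are lower bounds, not exact footprints):

```python
# Rough weight-memory estimate: bytes per parameter times parameter count.
params = 7_000_000_000          # ~7B parameters, as in LLaMA-7B
fp16_gib = params * 2 / 1024**3  # fp16: 2 bytes/param -> ~13 GiB
int8_gib = params * 1 / 1024**3  # int8: 1 byte/param  -> ~6.5 GiB

print(f"fp16 weights: {fp16_gib:.1f} GiB")
print(f"int8 weights: {int8_gib:.1f} GiB")
```

This is why 8-bit loading makes a visible difference on 24/32 GiB cards: the weights alone drop by roughly half, leaving headroom for activations and gradients during finetuning.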