fix transformer version
骁灵 committed Sep 18, 2023
1 parent 9e48841 commit 7833961
Showing 3 changed files with 9 additions and 3 deletions.
8 changes: 6 additions & 2 deletions cn_clip/training/main.py
@@ -261,8 +261,12 @@ def main():
     try:
         teacher_model = Model.from_pretrained(args.teacher_model_name)
     except Exception as e:
-        print("An error occurred while loading the model:", e)
-        print("Maybe the transformer version is not compatible, recommend to use transformers >= 4.10.0 and <= 4.30.2")
+        error_message = (
+            "An error occurred while loading the model: {}\n"
+            "Maybe the transformer version is not compatible, "
+            "recommend to use transformers >= 4.10.0 and <= 4.30.2".format(e)
+        )
+        raise RuntimeError(error_message)
 
     for k, v in teacher_model.state_dict().items():
         v.requires_grad = False
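The commit above only reports the incompatibility after loading has already failed. A caller could check the installed version up front instead; a minimal sketch (the helper name and the simplified three-part version parsing are assumptions, not part of the commit):

```python
def is_supported_transformers(ver: str) -> bool:
    """Return True if `ver` lies in the closed range [4.10.0, 4.30.2]
    recommended by this commit's error message.

    Simplified sketch: assumes a plain "X.Y.Z" version string; real
    code would use packaging.version for pre-release handling.
    """
    parts = tuple(int(p) for p in ver.split(".")[:3])
    return (4, 10, 0) <= parts <= (4, 30, 2)
```

Calling this with `transformers.__version__` before `Model.from_pretrained` would surface the version mismatch as an explicit error rather than an opaque loading failure.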
1 change: 1 addition & 0 deletions distillation.md
@@ -11,6 +11,7 @@
 + PyTorch 1.12 and above.
 + Other dependencies required by [requirements.txt](requirements.txt)
 + **ModelScope**: Install ModelScope by executing `pip install modelscope`.
++ To work with **ModelScope**, the **transformers** version is best kept between 4.10.0 and 4.30.2.

## Use it in Chinese-CLIP!

3 changes: 2 additions & 1 deletion distillation_En.md
@@ -9,8 +9,9 @@ Here we provide an example of knowledge distillation for Chinese-CLIP fine-tuning
 + Nvidia GPUs **with Turing, Ampere, Ada or Hopper architecture** (such as H100, A100, RTX 3090, T4, and RTX 2080). Please refer to [this document](https://en.wikipedia.org/wiki/CUDA#GPUs_supported) for the corresponding GPUs of each Nvidia architecture.
 + CUDA 11.4 and above.
 + PyTorch 1.12 and above.
-+ **ModelScope**: Install ModelScope by executing `pip install modelscope`.
 + Other dependencies as required in [requirements.txt](requirements.txt).
++ **ModelScope**: Install ModelScope by executing `pip install modelscope`.
++ To work with **ModelScope**, the **transformers** version is best kept between 4.10.0 and 4.30.2.
 
 ## Use it in Chinese-CLIP!
 It is not complicated to apply knowledge distillation to the image side in Chinese-CLIP finetune. Just add the `--distllation` configuration item to the sh script of finetune.
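The ModelScope install and the transformers version range recommended above can be pinned in a single command; a minimal sketch using standard pip requirement specifiers (the closed range matches the one in this commit's error message):

```shell
# Install ModelScope together with a transformers version inside the
# recommended closed range [4.10.0, 4.30.2].
pip install modelscope "transformers>=4.10.0,<=4.30.2"
```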
