[Hackathon 4th No.102] Add AutoConverter support for a new model architecture: CLIPModel #5595
Conversation
Thanks for your contribution!
Codecov Report
@@            Coverage Diff             @@
##           develop    #5595      +/-   ##
===========================================
+ Coverage    59.70%   59.91%    +0.20%
===========================================
  Files          482      481        -1
  Lines        68138    68113       -25
===========================================
+ Hits         40685    40809      +124
+ Misses       27453    27304      -149
===========================================
... and 6 files with indirect coverage changes
Currently, paddlenlp's load_torch loading is incompatible with some CLIP models on the HF Hub. You don't need to try loading the full openai/clip-vit-base-patch32 model for now; being compatible with hf-internal-testing/tiny-random-CLIPModel is enough.
Of course, if you want to test locally, you can adjust the paddlenlp.utils.serialization.load_torch method:
def load_torch(path: str, **pickle_load_args):
    from paddlenlp.utils.import_utils import import_module

    # Fall back to torch itself for checkpoints that the pure-python loader
    # cannot parse; return the state dict as numpy arrays.
    torch_module = import_module("torch")
    if torch_module is not None:
        state_dict = torch_module.load(path, map_location="cpu", **pickle_load_args)
        return {key: value.cpu().numpy() for key, value in state_dict.items()}
["text_model.embeddings.position_embedding.weight", "text_model.positional_embedding.weight"], | ||
["text_model.final_layer_norm.weight", "text_model.ln_final.weight"], | ||
["text_model.final_layer_norm.bias", "text_model.ln_final.bias"], | ||
["text_projection.weight", "text_projection", "transpose"], |
This needs a check via cls
on whether the model has a text_projection
layer before adding the text_projection
mapping; if the model does not have that layer, it should not be added to this configuration.
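A rough sketch of what such a conditional mapping could look like (the helper function and the class-name check are hypothetical; the real AutoConverter API may differ):

```python
def build_name_mappings(cls):
    """Hypothetical helper: build torch->paddle weight-name mappings for cls."""
    mappings = [
        ["text_model.final_layer_norm.weight", "text_model.ln_final.weight"],
        ["text_model.final_layer_norm.bias", "text_model.ln_final.bias"],
    ]
    # Only architectures that actually define a text_projection layer
    # (e.g. CLIPModel, CLIPTextModelWithProjection) get this mapping;
    # here we guess from the class name as a stand-in for a real check.
    if "Projection" in cls.__name__ or cls.__name__ == "CLIPModel":
        mappings.append(["text_projection.weight", "text_projection", "transpose"])
    return mappings


class CLIPTextModel:  # dummy stand-ins for the real model classes
    pass


class CLIPTextModelWithProjection:
    pass


print(len(build_name_mappings(CLIPTextModel)))                # 2
print(len(build_name_mappings(CLIPTextModelWithProjection)))  # 3
```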
Added the text/vision projection check, please review, thanks!
lgtm. Please also delete clip/converter.py
Deleted.
lgtm
PR types
New features
PR changes
APIs
Description
[Hackathon 4th No.102] Add AutoConverter support for a new model architecture: CLIPModel
Hackathon 4th No.102 covers five models; I plan to submit a separate PR for each one, and this PR handles the clip model. Reporting a few issues encountered with hf-internal-testing/tiny-random-CLIPModel:
1. For the hf-internal-testing/tiny-random-CLIPModel model, converting the transformers CLIPTextModelWithProjection and CLIPVisionModelWithProjection models raises an error. With ignore_mismatched_sizes=True initialization succeeds, but the final torch_logit.text_embeds.shape does not match. Is there a good way to handle this?
2. Testing against the openai/clip-vit-base-patch32 model reports an error.
Since this is my first time doing this kind of task, I'd appreciate some guidance, thanks! :)
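The text_embeds shape mismatch described above is exactly what an output-equivalence check between the converted models would catch. A minimal sketch (function name and tolerance are hypothetical, not the actual AutoConverter test code):

```python
import numpy as np


def outputs_match(torch_out, paddle_out, atol=1e-4):
    """Hypothetical check: True only if shapes agree and values are close."""
    torch_out = np.asarray(torch_out)
    paddle_out = np.asarray(paddle_out)
    # A shape mismatch like the text_embeds one reported above fails here
    # before any value comparison happens.
    if torch_out.shape != paddle_out.shape:
        return False
    return bool(np.allclose(torch_out, paddle_out, atol=atol))


print(outputs_match([[1.0, 2.0]], [[1.0, 2.0]]))  # True
print(outputs_match([[1.0, 2.0]], [1.0, 2.0]))    # False (shape mismatch)
```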