
Why does the ViT-B/16 demo on Hugging Face produce different outputs from the model run locally? #48

Closed
zhh9211 opened this issue Feb 4, 2023 · 2 comments

Comments


zhh9211 commented Feb 4, 2023

Aren't they using the same set of weights? Or is it not even the same model? The behavior differs far too much.

@zhh9211 zhh9211 closed this as completed Feb 4, 2023
yangapku (Member) commented Feb 4, 2023

Hello, it is the same model. You can use torch.load to locate the corresponding parameters and check whether their values match. In addition, in the HF transformers codebase we also provide a script that converts the checkpoint format from the GitHub release to HF. We have run the same case, and within a small margin of error the outputs can be considered identical.

If you have more questions, feel free to keep commenting. If you find the Chinese-CLIP codebase helpful, please give us a star⭐️ and recommend it to your friends!
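The parameter check suggested above can be sketched as follows. This is a minimal illustration, not the repository's own tooling: the checkpoint filename `clip_cn_vit-b-16.pt` and the `"state_dict"` key are assumptions about the GitHub-release format, and the synthetic tensors at the bottom stand in for real weights.

```python
# Hedged sketch: compare two state dicts tensor-by-tensor to confirm that
# a GitHub-release checkpoint and its HF-converted counterpart hold the
# same weights within a small numerical tolerance.
import torch

def state_dicts_match(sd_a, sd_b, atol=1e-6):
    """True iff both dicts have identical keys and all tensors agree within atol."""
    if sd_a.keys() != sd_b.keys():
        return False
    return all(
        torch.allclose(sd_a[k].float(), sd_b[k].float(), atol=atol)
        for k in sd_a
    )

# Real usage (filenames/keys are hypothetical):
#   sd_github = torch.load("clip_cn_vit-b-16.pt", map_location="cpu")["state_dict"]
#   sd_hf = hf_model.state_dict()
#   print(state_dicts_match(sd_github, sd_hf))

# Demonstration with synthetic tensors:
sd_a = {"visual.proj": torch.ones(2, 2)}
sd_b = {"visual.proj": torch.ones(2, 2)}
print(state_dicts_match(sd_a, sd_b))  # True for identical weights
```

If the per-tensor values match but the demo outputs still diverge, the difference usually lies in preprocessing (image resizing/normalization or tokenization) rather than the weights themselves.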

songge25 commented Apr 9, 2024

Where is the JSON file that the script requires? I don't see it after training the model.
