
About CLIP training on noised images #44

Open
yufeng9819 opened this issue Sep 24, 2022 · 0 comments

yufeng9819 commented Sep 24, 2022

Hey! I think GLIDE is a wonderful piece of work, but I have a question about CLIP training on noised images.

I want to know why CLIP can be trained on noised images at all. If t (ranging from 0 to 1000) is large (say 500 or more), the noised images hardly contain any semantic information. In that case, how can the CLIP model encode similar features for a noised image and its text? I also suspect this could keep the model from converging, since it seems hard to align features between heavily noised images and text.
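For reference, here is a quick sketch of how much clean-image signal actually survives at a given t. It assumes the linear beta schedule from the original DDPM paper (beta from 1e-4 to 0.02 over 1000 steps); GLIDE's noised CLIP may use a different schedule, so treat the numbers as illustrative only:

```python
import math

# Assumed linear beta schedule from DDPM (Ho et al., 2020) -- GLIDE's
# actual schedule for the noised CLIP may differ.
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * i / (T - 1) for i in range(T)]

# Forward process: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps.
# sqrt(alpha_bar_t) is the fraction of the clean image that survives at step t.
alpha_bar = 1.0
signal = []
for beta in betas:
    alpha_bar *= 1.0 - beta
    signal.append(math.sqrt(alpha_bar))

# Even at t = 500 the image coefficient is non-trivial (roughly 0.28 under
# this schedule), so the noised input is not pure noise; only near t = 1000
# does the signal essentially vanish.
print(f"t=500: signal coeff = {signal[499]:.3f}")
print(f"t=999: signal coeff = {signal[998]:.3f}")
```

So at intermediate timesteps there is still a meaningful (if weak) image signal for the noised CLIP to latch onto, which may be part of why training can converge despite the noise.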
