Really inspirational work, guys! But the results from the published code and models are not even remotely comparable to those shown in the paper. Is there anything we can do to get closer to the original work?
For example, could we train on a different (perhaps larger and more diverse) dataset?
Or do we need a bigger model?
Or could tweaking the parameters a bit help?
Image from the paper for the prompt: "a surrealist dream-like oil painting by salvador dalí of a cat playing checkers"
Image from the code for the same text prompt "a surrealist dream-like oil painting by salvador..."
It's almost like that meme: "You vs. the guy she told you not to worry about" 🤣
Anyway, if you can give us some advice on this matter, it would be greatly appreciated! 👍
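On the "tweaking the params" point: one parameter that strongly affects sample quality in GLIDE is the classifier-free guidance scale. As a minimal sketch (not the repo's actual API — function and variable names here are made up for illustration), the guidance scale just pushes the text-conditional noise prediction away from the unconditional one:

```python
import numpy as np

def guided_eps(eps_cond, eps_uncond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one by `guidance_scale`.
    scale=1.0 recovers the plain conditional prediction; larger
    values trade diversity for stronger adherence to the prompt."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy vectors standing in for the model's noise predictions.
eps_c = np.array([1.0, 2.0])
eps_u = np.array([0.5, 1.0])

print(guided_eps(eps_c, eps_u, 1.0))  # -> [1. 2.], the conditional prediction
print(guided_eps(eps_c, eps_u, 3.0))  # -> [2. 4.], pushed further from unconditional
```

Raising this scale at sampling time (the paper's best samples used fairly aggressive guidance) is probably the cheapest knob to try before reaching for more data or a bigger model.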
We have not released the full GLIDE model, only GLIDE (filtered), which is 10x smaller than the original model and was trained on a much more restricted dataset. We hope this model is still useful for future research, but because of these limitations it won't be able to reproduce the best images in the paper.