ModuleNotFoundError: No module named 'clip' #4
Comments
Sorry I haven't made a file stating the dependencies; it requires the original CLIP repository's pip installation protocol. Simply enter the following into your terminal and it will install the clip module:
$ pip install git+https://github.com/openai/CLIP.git
Thank you very much! Yes, I figured it out after I found the openai CLIP repo and made it run. I trained on 9,000 image-text pairs (each image has 5 pieces of text), but I got accuracy close to 0 in the first five epochs :/
This is an important point. Our repo starts from random inits, and CLIP used 400M image-text pairs. They don't even state what their accuracy is during a batch... One thing that could be helpful is using a pretrained initialization and seeing what happens from there. I'll make sure I double-check that the accuracy is correct, though.
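A minimal sketch of what starting from a pretrained initialization could look like, assuming the openai/CLIP pip package installed above ("ViT-B/32" is just one of the released model names):

```python
import clip
import torch

# Load OpenAI's pretrained weights instead of a random init.
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Fine-tune from here rather than training from scratch; with only
# ~9k image-text pairs, a random init has little chance of learning
# the joint embedding space that CLIP built from 400M pairs.
```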
Thanks! I went through the paper and found the accuracy curve. I think near-zero accuracy is reasonable :D
[image: image.png]
Looks like the accuracy calculation was incorrect! I'll update the repo.
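For reference, per-batch contrastive accuracy in CLIP-style training is typically measured by checking whether each image's most similar text embedding is its own caption, i.e. the diagonal of the image-text logit matrix. A minimal sketch, not the repo's actual code (function and variable names are illustrative):

```python
import torch

def batch_accuracy(image_embeds: torch.Tensor, text_embeds: torch.Tensor) -> float:
    """Fraction of images whose most similar text is their own caption."""
    # Normalize so the dot product is cosine similarity.
    image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
    text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)

    # logits[i, j] = similarity between image i and text j.
    logits = image_embeds @ text_embeds.t()

    # The correct text for image i sits on the diagonal (index i).
    targets = torch.arange(logits.size(0), device=logits.device)
    return (logits.argmax(dim=-1) == targets).float().mean().item()
```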
Hi, many thanks for the update! I trained my model and now I get accuracy up to 85%. But when I load the model using the function build_model, I encounter another error, shown in the picture. Do you know what the issue is?
Hi @Jiang15, the photo you sent over sadly isn't visible. Try using the GitHub website or app to insert it. Although I can't see your error, I do have some useful feedback on reloading the weights. Lightning requires you to actually use the checkpoints that the training run spits out. There's likely a folder called lightning_logs that will have your model saved. Check out the documentation and use the class CLIPWrapper to load the checkpoint.
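A minimal sketch of the Lightning reload pattern described above; the import path and checkpoint filename are illustrative, not taken from the repo:

```python
from models import CLIPWrapper  # hypothetical import path

# By default Lightning writes checkpoints under
# lightning_logs/version_*/checkpoints/; the filename here is an example.
model = CLIPWrapper.load_from_checkpoint(
    "lightning_logs/version_0/checkpoints/epoch=4-step=999.ckpt"
)
model.eval()
```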
Hi, what I tried is to use CLIPWrapper.load_from_checkpoint(myPath.ckpt), but it shows me "__init__() missing 3 required positional arguments: 'model_name', 'config', and 'minibatch_size'". I searched around online, and my guess is that the hyperparameters were not saved? Have a nice day!
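For what it's worth, PyTorch Lightning's load_from_checkpoint accepts any extra keyword arguments a module's __init__ needs, and calling self.save_hyperparameters() in __init__ makes future checkpoints restore without them. A sketch under those assumptions, with placeholder values (use whatever you trained with):

```python
import pytorch_lightning as pl

# Option 1: supply the missing constructor arguments explicitly.
# All values below are placeholders.
model = CLIPWrapper.load_from_checkpoint(
    "path/to/my.ckpt",
    model_name="ViT-B/32",
    config=my_config,        # the same model config used at training time
    minibatch_size=64,
)

# Option 2: record hyperparameters inside the module so future
# checkpoints restore without repeating the arguments.
class CLIPWrapper(pl.LightningModule):
    def __init__(self, model_name, config, minibatch_size):
        super().__init__()
        self.save_hyperparameters()  # stores the init args in the ckpt
        ...
```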
Thanks for your great work! When I call train.py with my dataset, I get ModuleNotFoundError: No module named 'clip'. Am I missing something?