How to use l2l.vision.models.ResNet12? #389
Comments
Hello @Jeong-Bin, Method 3 is correct. Try using … How much GPU memory do you have? If you have more than 1 GPU, you can use …
@seba-1511 Additionally, I was referring to the 'adaptation steps' in the MAML paper.
Yes, gradient steps are adaptation steps.
All right, thank you for your help! Have a nice day 😊
Hi, I'm using l2l to create a large MAML model. However, I have a question about the usage of l2l.vision.models.ResNet12 or WRN28. I tried the following three methods.
In Method 1, the error RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of size: [5] occurred. Also, when I modified the code to lambda x: x.view(-1, 256) and torch.nn.Linear(256, ways), the error RuntimeError: mat1 and mat2 shapes cannot be multiplied (1260x84 and 256x5) occurred.
Method 2 worked well, but its test accuracy was lower than the basic MAML model's. I used the following code for the basic MAML.
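On the mat1 and mat2 mismatch in Method 1 above: that error typically means the hard-coded flatten size (256 here) doesn't match what the backbone actually outputs. A generic PyTorch sketch, using a small hypothetical stand-in backbone rather than l2l's actual ResNet12, that measures the flattened feature size with a dry run instead of guessing it:

```python
import torch
from torch import nn

# Hypothetical stand-in for a headless backbone; the point is the
# dry run below, which measures the flattened feature size instead
# of hard-coding a guess like 256.
backbone = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
)

# Run one zero batch through the backbone to get the true feature size.
with torch.no_grad():
    feat_dim = backbone(torch.zeros(1, 3, 84, 84)).flatten(1).shape[1]

ways = 5
model = nn.Sequential(backbone, nn.Flatten(), nn.Linear(feat_dim, ways))
logits = model(torch.zeros(7, 3, 84, 84))  # logits shape: (batch, ways)
```

Sizing the linear head this way keeps the classifier consistent if the backbone (or the input resolution) changes later.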
Method 3 worked well during training, but I encountered OutOfMemoryError during testing. (Actually, the training was very slow.)
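On the OutOfMemoryError in Method 3: second-order MAML retains the entire adaptation graph, which large backbones like ResNet12 may not fit at test time. A common mitigation is first-order adaptation; in learn2learn this corresponds, per its docs, to passing first_order=True to MAML. A plain-PyTorch sketch of the underlying idea, using a tiny hypothetical model:

```python
import torch
from torch import nn

# Tiny hypothetical model; with first-order adaptation the graph of
# the inner gradient step is not retained (create_graph=False),
# which is what saves memory at evaluation time.
model = nn.Linear(10, 5)
x, y = torch.randn(8, 10), torch.randint(0, 5, (8,))

loss = nn.functional.cross_entropy(model(x), y)
grads = torch.autograd.grad(loss, model.parameters(), create_graph=False)

# One manual inner-loop step: params minus lr times their gradients.
inner_lr = 0.01
weight, bias = (p - inner_lr * g for p, g in zip(model.parameters(), grads))
logits = nn.functional.linear(x, weight, bias)  # post-adaptation predictions
```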
What is the right way, and what should I modify?
Or is there any other way to make a large MAML model?
I set the training and testing configurations as follows: