Declining test results #4
Comments
What is the transform of the baseline? What is the complete diff to the baseline?
My training set is a flower classification dataset; the model is MobileNetV2, using PyTorch's pretrained weights, before adding this line of code.
Is it wrong to use it in the following way?
Are you talking about training or validation loss? Training loss is supposed to go up, as classifying augmented images is generally harder. You should turn off TrivialAugment during validation as well. For reference on how to use the torchvision version, see their references https://github.com/pytorch/vision/tree/main/references/classification which has TrivialAugment as an option. It is tested there and works for ImageNet. It is generally really hard for me to help you out with so little information, so if the above does not help, I could have a quick glance over your codebase if you put it online somewhere.
```python
import os
if __name__ == '__main__':
```
This is my code. After using TA's method, the loss value does go up compared to before.
Ok, it indeed is the problem I mentioned above. You use TrivialAugment during validation as well, but it is only to be used during training. |
Yes, I've experimented with the method you mentioned. I just hadn't removed TA from the validation process when I sent the code, but even after removing TA from validation, the loss value still goes up compared to the method without TA.
One more thing: You do fine-tuning. This was not part of our experiments, we always trained from scratch. Generally, fine-tuning is known to need different and usually less augmentations. If you still want to try out standard augmentation methods, it might be good to start from something like the references of torchvision and then try multiple augmentation methods there. |
Hello, the following code was used directly in my model:

```python
transforms.Compose([
    transforms.TrivialAugmentWide(),
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
```

After that, I found that both validation loss and training loss increased on MobileNetV2 and ResNet50. Am I using the wrong method? The training set is the flower dataset.