[Improvement] Set RandAugment as Imgaug default transforms. #585
Conversation
Codecov Report
@@ Coverage Diff @@
## master #585 +/- ##
==========================================
+ Coverage 85.44% 85.49% +0.04%
==========================================
Files 130 130
Lines 9336 9339 +3
Branches 1564 1564
==========================================
+ Hits 7977 7984 +7
+ Misses 962 959 -3
+ Partials 397 396 -1
@innerlee This PR is ready for review.
@congee524 would you like to run a comparison with this augmentation on/off for TSN & I3D?
Oh, I forgot about it. (ノ´д`) Because of a shortage of compute resources, it may not start until this Saturday.
I did some tests myself with tsn_r50_video_1x1x8_100e_kinetics400_rgb.py (tested with 256 ThreeCrop). The training loss at epoch 100 is still quite large. Maybe more epochs and tuning the lr could yield better results.
φ(* ̄0 ̄) I'm working on I3D.
Hope RandAugment helps.
@irvingzhang0512 Hi, when I was training the I3D ckpt with imgaug, I found that training was very slow because the CPU was running at full load. Is this method CPU-intensive?
Yes. It took me 3 days to train TSN + 1x1x8 + RandAugment with 8 V100 GPUs.
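Whether the augmentation really is the bottleneck is easy to check by timing the transform in isolation on one worker. A minimal stdlib-only sketch (the harness and `heavy_transform` below are illustrative stand-ins, not the actual imgaug pipeline):

```python
import time

def time_augmentation(augment, sample, iters=10):
    """Return the average seconds per call for a CPU-side transform."""
    start = time.perf_counter()
    for _ in range(iters):
        augment(sample)
    return (time.perf_counter() - start) / iters

# A deliberately heavy pure-Python transform standing in for an
# imgaug pipeline running inside the dataloader workers.
def heavy_transform(img):
    return [(p * 2) % 256 for p in img]

avg = time_augmentation(heavy_transform, list(range(10000)))
# If avg * batch_size exceeds the GPU step time, the CPU side is the
# bottleneck; more dataloader workers or cheaper transforms can help.
```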
Update on training results: RandAugment doesn't improve accuracy for I3D. Training for more epochs may help.
My bet is that movie transition fx (#436 (comment)) will be more helpful.
Could you provide your training log? I have trained 66 epochs; maybe the performance will improve with more.
I3D_Vanilla.log FYI. There is a huge gap between our Kinetics-400...
It looks similar. Maybe I can stop the program :-P
Since our datasets are quite different, it would be better to upload ckpts trained on your dataset.
OK, about 4 days left.
@irvingzhang0512 My result is 72.82%/90.64%. Maybe your ckpt should be uploaded.
Worse than the model without RandAugment?
Yes, from 73.27 to 72.82.
I use short-side 256, but our training & val samples (videos) are quite different.
I'll try RandAugment on other datasets (sth) this month.
Kindly ping @congee524. Training TSM-R50 on sthv1 with RandAugment gets very good results; the ckpt/json/log is here. Maybe this ckpt is enough for this PR?
@irvingzhang0512 Sorry for my late reply... When I used imgaug with the default transforms, the training time increased a lot (about 18 days on 8 V100s), which may be due to weak CPU performance. But the performance didn't improve (72.49/90.65).
Maybe Kinetics is large enough that the gain from extra image augmentation is very small. For sthv1 and sthv2, I have found that the results of imgaug and the flip-with-label transform on sthv1 are similar.
Maybe we can add the sthv1/TSM-R50/RandAugment ckpt/log/json to the model zoo instead of I3D-R50/Kinetics/RandAugment. BTW, training sthv1 & TSM-R50 with flip & RandAugment gets 47.85/50.31 top-1 accuracy and 76.78/78.18 top-5.
I'll test your ckpt on our sthv1 now. (ง •_•)ง
BTW, could you provide the ckpt of tsm_r50_flip_randaugment?
tsm_r50_randaugment on our dataset: 47.07/48.90 top-1, 75.81/77.64 top-5.
I know there is a gap between our sthv1 datasets, which is odd. We can directly download sthv1 rawframes from the official website, so we are supposed to have exactly the same dataset...
@dreamerlin may help to upload the ckpt/log/json :p
It seems not to be tsm_r50_flip_randaugment.
My mistake.
tsm_r50_flip_randaugment on our dataset:
configs/recognition/tsm/tsm_r50_flip_randaugment_1x1x8_50e_sthv1_rgb.py
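For context on the config file referenced above, here is a hedged sketch of how such a config typically wires the Imgaug step into an mmaction2-style training pipeline. The surrounding steps and exact parameter values are assumptions for illustration; consult the actual file in the repo:

```python
# Hypothetical excerpt of an mmaction2-style train pipeline.
# transforms='default' reflects what this PR proposes: the default
# Imgaug transform set becomes RandAugment.
train_pipeline = [
    dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=8),
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(-1, 256)),
    dict(type='Imgaug', transforms='default'),  # default -> RandAugment
    dict(type='Flip', flip_ratio=0.5),
    dict(type='FormatShape', input_format='NCHW'),
]
```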
Uploaded.
@innerlee This PR is ready.
Thanks!
Use imgaug to reimplement RandAugment.
According to VideoMix, RandAugment helps a little.
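The core RandAugment scheme being wrapped here is simple: uniformly sample N operations from a fixed pool and apply each at a shared magnitude M. A stdlib-only sketch with a toy op pool (the ops and the flat pixel-list image model are illustrative, not imgaug's implementation):

```python
import random

# Toy op pool: each op takes (pixels, magnitude) and returns new pixels.
# Real pools contain rotate, shear, posterize, color jitter, etc.
def brighten(img, m):
    return [min(255, p + 10 * m) for p in img]

def darken(img, m):
    return [max(0, p - 10 * m) for p in img]

def identity(img, m):
    return list(img)

OPS = [brighten, darken, identity]

def rand_augment(img, n=2, m=9, rng=random):
    """Apply n randomly chosen ops, each at the shared magnitude m."""
    for _ in range(n):
        img = rng.choice(OPS)(img, m)
    return img
```

With imgaug >= 0.4.0, the off-the-shelf equivalent is `iaa.RandAugment(n=2, m=9)` (assuming an installed version that ships RandAugment).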
Results