
Update performance of video models #256

Merged
Merged 1 commit into open-mmlab:master from kennymckormick:fix_video_performance on Oct 16, 2020

Conversation

kennymckormick (Member)

Update the performance numbers of the video models.
This PR validates that models trained on video clips and models trained on raw frames show no significant performance difference.


codecov bot commented Oct 15, 2020

Codecov Report

Merging #256 into master will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff           @@
##           master     #256   +/-   ##
=======================================
  Coverage   85.08%   85.08%           
=======================================
  Files          81       81           
  Lines        5276     5276           
  Branches      849      849           
=======================================
  Hits         4489     4489           
  Misses        648      648           
  Partials      139      139           
Flag Coverage Δ
#unittests 85.08% <ø> (ø)

Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@@ -93,7 +93,7 @@
         pipeline=test_pipeline))
 # optimizer
 optimizer = dict(
-    type='SGD', lr=0.1, momentum=0.9,
+    type='SGD', lr=0.3, momentum=0.9,
Contributor

Double-check that this lr is for 8 GPUs.

Member Author

Yes, that is for 8 GPUs (since we use a batch size of 24 on each GPU).
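The lr change in this diff (0.1 → 0.3) is consistent with the linear scaling rule commonly used in mmaction2-style configs: the learning rate grows in proportion to the total batch size. Below is a minimal sketch (not code from this PR; the function name and base values are illustrative assumptions) of that rule, assuming the old lr of 0.1 was tuned for 8 GPUs x 8 clips:

```python
def scale_lr(base_lr, base_batch, num_gpus, samples_per_gpu):
    """Scale the learning rate linearly with the total batch size.

    Hypothetical helper illustrating the linear scaling rule;
    not part of the mmaction2 codebase.
    """
    total_batch = num_gpus * samples_per_gpu
    return base_lr * total_batch / base_batch

# Base lr 0.1 assumed tuned for 8 GPUs x 8 clips (batch 64);
# rescaled for 8 GPUs x 24 clips (batch 192) -> approximately 0.3,
# matching the new lr in the diff above.
new_lr = scale_lr(0.1, base_batch=64, num_gpus=8, samples_per_gpu=24)
```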

@@ -74,7 +74,7 @@
     dict(type='ToTensor', keys=['imgs'])
 ]
 data = dict(
-    videos_per_gpu=8,
+    videos_per_gpu=24,
Contributor

V100?

Member Author

No, slowfast_4x16 doesn't consume much memory; we can fit 24 samples on a 1080 Ti.

-        scales=(1, 0.875, 0.75, 0.66),
-        random_crop=False,
-        max_wh_scale_gap=1),
+    dict(type='RandomResizedCrop'),
Contributor

Does this mean that RandomResizedCrop is better than MultiScaleCrop in this case?

Member Author

Yes, we have already validated that RandomResizedCrop can outperform MultiScaleCrop. The contribution of this PR is to show that training with videos doesn't lead to any performance drop.
Before renaming, this config was named tsn_r50_video_1x1x3_100e_kinetics400_rgb.py, and there was no performance score for that config in the README.
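To make the augmentation swap concrete, here is a hedged sketch (not the actual config file; surrounding pipeline steps and argument values are assumptions) of how the train pipeline changes: the MultiScaleCrop step with its scale arguments is replaced by a single RandomResizedCrop step, as in the diff discussed above.

```python
# Old pipeline fragment: multi-scale cropping with explicit scale ratios.
train_pipeline_old = [
    dict(type='MultiScaleCrop',
         input_size=224,                  # assumed value, for illustration
         scales=(1, 0.875, 0.75, 0.66),
         random_crop=False,
         max_wh_scale_gap=1),
]

# New pipeline fragment: RandomResizedCrop with its default behaviour.
train_pipeline_new = [
    dict(type='RandomResizedCrop'),
]

step_types = [step['type'] for step in train_pipeline_new]
print(step_types)  # ['RandomResizedCrop']
```

Since mmaction2 configs are plain Python, the swap is a one-line change in the pipeline list; the registry resolves the `type` string to the corresponding transform class.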

@innerlee innerlee merged commit 7158362 into open-mmlab:master Oct 16, 2020
@kennymckormick kennymckormick deleted the fix_video_performance branch October 16, 2020 06:26
kennymckormick pushed a commit to kennymckormick/mmaction2 that referenced this pull request Oct 16, 2020