
Conversation

ShrillShrestha
Contributor

Resolves #3915

Hi @NicolasHug. Can you please review it? Please let me know if I need to change anything.

@facebook-github-bot
Contributor

Hi @ShrillShrestha!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@facebook-github-bot
Contributor

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Facebook open source project. Thanks!

Collaborator

@pmeier left a comment


Hey @ShrillShrestha, thanks a lot for the PR! Overall looking good. I left a few comments below.

test/test_io.py Outdated
def test_probe_video_from_file():
    with temp_video(10, 300, 300, 5) as (f_name, data):
        video_info = io._probe_video_from_file(f_name)
        assert pytest.approx(video_info.video_duration, 0.1) == 2
Collaborator

@pmeier commented May 26, 2021


  1. By convention, pytest.approx should always be applied to the expected value, i.e. 2 here. While this has no functional difference, it makes the intent clearer: "The computed value should be in proximity of the expected value" and not the other way around (see the sketch after this list).
  2. Although not required, rtol and atol should always be passed as keyword arguments. I needed to look up the definition to see whether 0.1 is rtol or atol, and that should not be required when reading a test.
  3. Shouldn't it be rtol=0.0, atol=0.1? At least that is what I get from the documentation of self.assertAlmostEqual.
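
As a sketch of that suggestion (my illustration, not necessarily the exact form that landed in the PR; pytest.approx names its tolerance keywords rel and abs, and the snippet assumes the test module's existing imports):

    # apply approx to the expected value and pass the tolerance by keyword;
    # only an absolute tolerance is wanted here, so rel is left at its default
    assert video_info.video_duration == pytest.approx(2, abs=0.1)
    assert video_info.video_fps == pytest.approx(5, abs=0.1)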

test/test_io.py Outdated
with temp_video(10, 300, 300, 5) as (f_name, data):
    video_info = io._probe_video_from_file(f_name)
    assert pytest.approx(video_info.video_duration, 0.1) == 2
    assert pytest.approx(video_info.video_fps, 0.1) == 5
Collaborator


Same as above.

test/test_io.py Outdated
with open(f_name, "rb") as fp:
    filebuffer = fp.read()
video_info = io._probe_video_from_memory(filebuffer)
assert pytest.approx(video_info.video_duration, 0.1) == 2
Collaborator


Same as above. Please apply this to all other occurrences of the same pattern.

test/test_io.py Outdated
expected_pts = [i * pts_step for i in range(num_frames)]

assert pts == expected_pts
container.close()
Collaborator


This will never be hit if the above assert fails. Is it possible to open the container within a context manager so we don't need to handle proper closing?

Since this already existed before, this could be fixed in a follow-up PR.
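
A rough sketch of the context-manager idea (an assumption on my part that the container is a PyAV container opened via av.open, which supports the with statement; the body is only illustrative, not the actual test code):

    with av.open(f_name) as container:
        stream = container.streams.video[0]
        pts = [packet.pts for packet in container.demux(stream) if packet.pts is not None]
    # the container is closed on leaving the with block, even if the assertion below fails
    assert pts == expected_pts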

Member


Good catch. A simple fix here would probably be to call close before the assert.
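
In other words, just reorder the two existing lines so the cleanup happens unconditionally:

    container.close()
    assert pts == expected_pts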

test/test_io.py Outdated
lv, _, _ = io.read_video(f_name, pts[start], pts[start + offset - 1])
s_data = data[start:(start + offset)]
assert len(lv) == offset
assert_equal(s_data, lv, rtol=0.0, atol=TOLERANCE)
Collaborator


@NicolasHug that seems like a porting issue I made. It should be torch.testing.assert_close, shouldn't it?

Member


lol I thought you had done this on purpose; I assumed the logic was that we're comparing int tensors, so assert_equal makes more sense... which I didn't disagree with.

No strong opinion here, we can keep it as-is IMHO

Collaborator


Hmm, from my standpoint assert_equal should only be used without tolerances (or better, with the default rtol = atol = 0.0), because otherwise we wouldn't compare for equality.
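
To illustrate the two options being weighed (a sketch on my part, assuming the test module's existing imports; torch.testing.assert_close takes explicit rtol/atol, while the assert_equal helper compares for exact equality):

    # exact comparison, no tolerances
    assert_equal(s_data, lv)

    # approximate comparison with explicit tolerances
    torch.testing.assert_close(s_data, lv, rtol=0.0, atol=TOLERANCE)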

Member

@NicolasHug left a comment


Thanks @ShrillShrestha, congrats on your first PR here! I made a few additional comments, but this looks pretty good already.

test/test_io.py Outdated
reason="video_reader backend not available")
@pytest.mark.skipif(av is None, reason="PyAV unavailable")
def test_write_read_video():
    with temp_video(10, 300, 300, 5, lossless=True) as (f_name, data):
Member


In the future, we could change temp_video into a fixture. But let's keep this for another PR
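
For reference, one way such a fixture could look (purely a sketch on my side: it wraps the existing temp_video helper, uses a hypothetical video_file name, and reads request.param so it could later be parametrized indirectly):

    @pytest.fixture
    def video_file(request):
        # fall back to default parameters when the fixture is not parametrized
        num_frames, height, width, fps = getattr(request, "param", (10, 300, 300, 5))
        with temp_video(num_frames, height, width, fps) as (f_name, data):
            yield f_name, data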

Member


@NicolasHug can you open an issue to track this? Making this into a fixture will make tests faster

Member


I'll keep this in the back of my mind, but I'd like to focus on porting the rest of the tests for now. The fixture will need to be parametrized somehow, and while I think it's possible, I might be missing something and I'm not 100% sure.

There won't be any significant time gain; temp_video is extremely cheap. I created this test:

def test_lol():
    for _ in range(1000):
        options = {'bframes': '16', 'keyint': '10', 'min-keyint': '4'}
        temp_video(10, 300, 300, 5, lossless=True, options=options)

(with and without options, it was the same)

and it ran in less than 5ms. For ref, the slowest tests are:

3.52s call     test/test_io.py::TestVideo::test_write_video_with_audio
0.43s call     test/test_io.py::TestVideo::test_read_partial_video_bframes[1-60]
0.41s call     test/test_io.py::TestVideo::test_read_partial_video_bframes[2-60]
0.41s call     test/test_io.py::TestVideo::test_read_partial_video_bframes[2-20]
0.40s call     test/test_io.py::TestVideo::test_read_partial_video_bframes[3-0]
0.40s call     test/test_io.py::TestVideo::test_read_partial_video_bframes[2-40]
0.40s call     test/test_io.py::TestVideo::test_read_partial_video_bframes[3-20]
0.39s call     test/test_io.py::TestVideo::test_read_partial_video_bframes[3-60]
0.39s call     test/test_io.py::TestVideo::test_read_partial_video_bframes[2-0]
0.39s call     test/test_io.py::TestVideo::test_read_partial_video_bframes[3-40]
0.37s call     test/test_io.py::TestVideo::test_read_partial_video_bframes[1-40]
0.37s call     test/test_io.py::TestVideo::test_read_partial_video_bframes[1-0]
0.36s call     test/test_io.py::TestVideo::test_read_partial_video_bframes[1-20]
0.11s call     test/test_io.py::TestVideo::test_read_partial_video_pts_unit_sec[1-1]
0.10s call     test/test_io.py::TestVideo::test_read_partial_video_pts_unit_sec[2-1]
0.10s call     test/test_io.py::TestVideo::test_read_partial_video_pts_unit_sec[3-1]
0.10s call     test/test_io.py::TestVideo::test_read_partial_video[2-3]
0.10s call     test/test_io.py::TestVideo::test_read_partial_video[1-3]
0.09s call     test/test_io.py::TestVideo::test_read_partial_video[3-3]
0.09s call     test/test_io.py::TestVideo::test_read_partial_video[1-2]

Member


OK, sounds good. If the video creation is cheap, we don't need to bother about it. Thanks for double-checking!


test/test_io.py Outdated
Comment on lines 131 to 132
for start in range(5):
    for offset in range(1, 4):
Member


How long do these 2 lines take?

    with temp_video(10, 300, 300, 5, lossless=True) as (f_name, data):
        pts, _ = io.read_video_timestamps(f_name)


If they're very short, like less than a few hundred ms, then we can afford to repeat those across calls and turn the 2 nested for loops into a parametrization, like so:

@pytest.mark.parametrize('start', range(5))
@pytest.mark.parametrize('offset', range(1, 4))
def test_read_partial_video(start, offset):

I'd suggest writing the parametrization and running this test; if the difference between the parametrized and the non-parametrized versions is small, then let's go for it.
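
Put together, the parametrized version might look roughly like this (a sketch combining the decorators above with the loop body quoted earlier in this thread; it assumes the module's existing imports and helpers):

    @pytest.mark.parametrize('start', range(5))
    @pytest.mark.parametrize('offset', range(1, 4))
    def test_read_partial_video(start, offset):
        with temp_video(10, 300, 300, 5, lossless=True) as (f_name, data):
            pts, _ = io.read_video_timestamps(f_name)
            lv, _, _ = io.read_video(f_name, pts[start], pts[start + offset - 1])
            s_data = data[start:(start + offset)]
            assert len(lv) == offset
            assert_equal(s_data, lv, rtol=0.0, atol=TOLERANCE)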

test/test_io.py Outdated
Comment on lines 154 to 155
for start in range(0, 80, 20):
    for offset in range(1, 4):
Member


Same here about parametrization; there are more occurrences below.


test/test_io.py Outdated
expected_pts = [i * pts_step for i in range(num_frames)]

assert pts == expected_pts
container.close()
Member


Same note about calling close before the assert (and in other places as well).

@ShrillShrestha
Contributor Author

ShrillShrestha commented May 26, 2021

Thanks for the comments @NicolasHug and @pmeier. I have made the suggested changes. Can you please review it?

Collaborator

@pmeier left a comment


LGTM @ShrillShrestha. Thanks for the awesome work! I have two non-blocking suggestions below. Plus, a more general suggestion: you can mark review comments that you addressed with the "Resolve conversation" button below them. That makes it easier for a reviewer to see which comments still need to be addressed and which have already been dealt with.

Member

@NicolasHug left a comment


Thanks @ShrillShrestha!

- add skip decorators to TestVideo class
- add absolute and relative tolerance to pytest.approx
- add parametrization instead of nested loops
- close container before assert statements
@ShrillShrestha
Contributor Author

Thanks, @pmeier and @NicolasHug. I learned some important things from this issue. I really appreciate your feedback.

@NicolasHug NicolasHug merged commit 882e11d into pytorch:master May 27, 2021
@NicolasHug
Member

Merging, thanks again @ShrillShrestha!

facebook-github-bot pushed a commit that referenced this pull request Jun 10, 2021
Reviewed By: NicolasHug

Differential Revision: D29027327

fbshipit-source-id: a7572b5c8bb7098803a4d754c7a661438fcbc81a

Successfully merging this pull request may close these issues.

Port test/test_io.py to pytest