
Feature proposal: human-comprehensible display of quality assessment #127

Open
dHannasch opened this issue Jan 5, 2020 · 5 comments · May be fixed by #131

Comments

@dHannasch
Contributor

The tools for measuring video quality based on a reference video http://www.scikit-video.org/stable/measure.html are very useful.
As a convenience feature, it would be nice to easily get a human-viewable video showing the quality assessment measure at each frame and an image showing the actual diff.
[screenshot of the proof-of-concept comparison display]
I'm picturing this as taking the quality-assessment measure (for example, skvideo.measure.ssim) and the image-differencing function (for example, PIL.ImageChops.difference) as separate parameters, so that they could be trivially swapped out. Not sure whether that's a good idea or not.

The image above shows the pristine video, which seems like the clearest way to communicate the idea, but I'm guessing in practice most people won't need or want to see the pristine video again, so up top we'll only need the distorted video and the diff. (Or, in an ideal world, just the distorted video, with the changes somehow highlighted with an overlay. But I don't know of a good way to turn an image diff into an overlay without obscuring the actual video.)

The proof of concept that produced the above image is at https://github.com/dHannasch/scikit-video/blob/compare-two-videos/skvideo/tests/test_compare.py, very crude right now.
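For concreteness, here is a rough sketch of the kind of signature I'm picturing. The helper name `show_frame_comparison` and the three-panel layout are hypothetical, and it assumes the measure returns one score per frame when given `(T, H, W, C)` uint8 arrays (e.g. from `skvideo.io.vread`):

```python
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image, ImageChops
import skvideo.measure

def show_frame_comparison(reference, distorted, frame_index,
                          measure=skvideo.measure.ssim,
                          diff=ImageChops.difference):
    # reference and distorted are (T, H, W, C) uint8 arrays.
    # Assumes the measure yields one score per frame.
    scores = np.asarray(measure(reference, distorted)).ravel()
    ref_img = Image.fromarray(reference[frame_index])
    dist_img = Image.fromarray(distorted[frame_index])
    diff_img = diff(ref_img, dist_img)

    fig, (ax_video, ax_diff, ax_plot) = plt.subplots(1, 3, figsize=(12, 4))
    ax_video.imshow(dist_img)
    ax_video.set_title("distorted")
    ax_diff.imshow(diff_img)
    ax_diff.set_title("diff")
    ax_plot.plot(scores)
    ax_plot.axvline(frame_index, color="red")  # mark the frame being shown
    ax_plot.set_xlabel("frame")
    ax_plot.set_ylabel(getattr(measure, "__name__", "quality"))
    plt.show()
```

Both the measure and the diff are plain parameters, so swapping in another metric or another differencing function is a one-argument change.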

@beyondmetis
Member

Thanks @dHannasch for this idea and great work! I think it would make a great addition to the examples section as well. Maybe we can build out some visualization components for scikit-video that at least include this GUI to demonstrate the quality assessment tools.

As for the diff, we need to be general enough to allow more than one type of diff, as is the case with spatio-temporal metrics like STRRED and MOVIE. Alternatively, maybe we can specialize the visualization for each quality assessment method. I agree that we should keep the diff separate from the video to reduce confusion.

Several years ago, I put together an instructional package of videos that we could use as further examples. Here is a link to that package if you (or anyone else) are interested: https://live.ece.utexas.edu/research/VIP/VIP_materials.html

@dHannasch dHannasch linked a pull request Jan 12, 2020 that will close this issue
@dHannasch
Contributor Author

dHannasch commented Jan 12, 2020

What is MOVIE? I don't see that in http://www.scikit-video.org/stable/measure.html.

http://www.scikit-video.org/stable/modules/generated/skvideo.measure.strred.html#skvideo.measure.strred is tricky. We could still have a plot on the bottom and just map each frame index (from 0 to T-1) to an appropriate-seeming index in the ST-RRED array, possibly just dividing by 2. It obviously doesn't make complete sense to match a single frame to one of the ST-RRED results, but the overall idea of how a video is more or less distorted at different points in time still applies. (Though if the user paused the video, they might find the "current" score didn't appear to be justified by the actual distortion in that particular frame... we'd need to be careful to make clear what the plot was actually showing.) But I don't know whether we'd want to show the spatial score or the temporal score... we could show multiple scores, but it seems like the display would get too "busy".
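Something like this is what I mean by mapping frame indices onto the ST-RRED array; the helper is purely hypothetical and assumes only that the score array is shorter than the frame count:

```python
def score_index_for_frame(frame_index, num_frames, num_scores):
    """Map a frame index onto an entry of a shorter score array.

    ST-RRED-style metrics give roughly one value per frame pair, so a
    T-frame video yields about T/2 scores; scaling and clamping avoids
    hard-coding a divide-by-2 for any particular metric.
    """
    scaled = frame_index * num_scores // num_frames
    return min(scaled, num_scores - 1)
```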

Another complicating factor is that I think it would be best to avoid referencing specific differencing methods in the code of the display function itself (checking specifically for strred, and so forth). That seems like it would be unsustainable in the long term.

The videos you linked aren't currently in skvideo.datasets, are they? Should they be?

@beyondmetis
Member

MOVIE is a full-reference video quality metric that requires a large amount of compute, but it seems to perform best on a number of datasets. A significant improvement on this method is FS-MOVIE, but it requires even more compute. It's not implemented in skvideo yet due to the amount of compute required (the large number and size of the convolutions). I can link the relevant papers if you are curious.

That seems like it would be unsustainable in the long term.

I agree! Another feature that'd be nice is multiple quality lines on the same plot, for example measurements for a collection of distorted videos. Maybe we should create some wrappers for matplotlib that make it easier to lay out video plots in a grid. Of course, this could easily get complicated fast.
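For instance, a thin wrapper along these lines (names are placeholders, just a sketch of the idea) could overlay per-frame scores for several distorted versions of the same reference:

```python
import matplotlib.pyplot as plt

def plot_quality_lines(score_sets, ax=None):
    """Overlay per-frame quality curves, one line per distorted video.

    score_sets maps a label (e.g. the distorted video's name) to a 1-D
    sequence of per-frame scores.
    """
    ax = ax or plt.gca()
    for label, scores in score_sets.items():
        ax.plot(scores, label=label)
    ax.set_xlabel("frame")
    ax.set_ylabel("quality score")
    ax.legend()
    return ax
```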

The videos you linked aren't currently in skvideo.datasets, are they? Should they be?

The videos I linked are educational videos showing how methods work. The videos provided in skvideo.datasets allow repeatable real-world test cases. I'd like to avoid adding more, since that makes the package size larger.

@dHannasch
Contributor Author

Not sure what to do about multiple quality-plots just yet, but there's a minimal feature set at #131 --- just skipping over the plot if we have nothing for it, as in the case of ST-RRED.

@dHannasch
Contributor Author

dHannasch commented May 7, 2020

Do you think it would be worth merging in with just the single plot as shown?
Or would you want the function to take a list of image-differencing functions rather than only one, in case we want to have multiple quality-plots later? (Honestly I'm skeptical that someone would want to clutter their window with multiple different measurements rather than pick whichever one they happen to like, but I can certainly see that if we might eventually want that then it would be good to set the function signature to take multiple image-differencing functions now.)
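If we did go that way, the change could be as small as accepting a sequence of differencing functions and defaulting it to a one-element tuple, so the current single-diff behaviour stays the same. A rough sketch (names are placeholders, not the actual #131 code):

```python
import matplotlib.pyplot as plt
from PIL import Image, ImageChops

def show_diffs(reference_frame, distorted_frame,
               diff_functions=(ImageChops.difference,)):
    # One panel for the distorted frame, plus one per differencing function.
    ref_img = Image.fromarray(reference_frame)
    dist_img = Image.fromarray(distorted_frame)
    fig, axes = plt.subplots(1, 1 + len(diff_functions), squeeze=False)
    axes = axes.ravel()
    axes[0].imshow(dist_img)
    axes[0].set_title("distorted")
    for ax, diff in zip(axes[1:], diff_functions):
        ax.imshow(diff(ref_img, dist_img))
        ax.set_title(getattr(diff, "__name__", "diff"))
    plt.show()
```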
