R1: For the quantitative results, we uniformly sample 10 frames from each video, as described in the paper, to cover as many of each video's scenarios as possible.
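Uniform sampling of 10 frames per clip can be sketched as below; the function name and use of NumPy are my own choices, not from the repository.

```python
import numpy as np

def evenly_spaced_frame_indices(num_frames: int, k: int = 10) -> list[int]:
    """Return k evenly spaced frame indices covering [0, num_frames - 1].

    A minimal sketch: the paper only states that 10 frames are
    extracted evenly; the exact rounding scheme is an assumption.
    """
    return np.linspace(0, num_frames - 1, k).round().astype(int).tolist()

# e.g. a 100-frame video: indices span the whole clip
print(evenly_spaced_frame_indices(100))  # [0, 11, 22, ..., 99]
```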
R2: The source-to-target pairs in all FF++ experiments follow the standard protocol of the FF++ dataset; you can find more information here.
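In the FF++ dataset, each manipulated video's filename encodes the IDs of the two pristine videos it was built from (e.g. `000_003.mp4`). A minimal sketch of recovering such an ID pair follows; which ID plays the source and which the target is defined by the FF++ naming convention documented in the dataset repository, and the helper name here is hypothetical.

```python
from pathlib import Path

def id_pair(filename: str) -> tuple[str, str]:
    """Split an FF++ manipulated-video filename like '000_003.mp4'
    into its two pristine-video IDs.

    Note: which ID is the source and which is the target follows
    the FF++ convention; consult the dataset repository to confirm.
    """
    stem = Path(filename).stem  # '000_003'
    first, second = stem.split("_")
    return first, second

print(id_pair("000_003.mp4"))  # ('000', '003')
```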
Hi, thanks for your great work!
Regarding the quantitative results in your work, I have questions about how the frame pairs from the source and target videos correspond.
(1) Did you randomly select 10 frames from each video, or use the same pairs as FaceShifter?
(2) Could you provide the source-to-target pair numbers for a further fair comparison?
Thanks!