Hello, this is a great project; many thanks to the authors for open-sourcing it.
I'm not a professional in artificial intelligence, but I'm very interested in it.
I have a question: can I simply understand this method as an upgraded version of the earlier Ebsynth method? My understanding may be superficial, so please forgive me.
Ebsynth is indeed a commendable propagation-based work, and our method shares some similarities with it, but there are also significant differences. One key difference lies in the canonical image: in our method, the canonical image is learned during optimization, which allows it to aggregate information from the whole sequence, whereas Ebsynth directly uses a single frame of the video as the canonical image. Our method also shines in challenging cases such as water or smoke, where the motion cannot be accurately represented by optical flow; thanks to the use of implicit representations, it can reconstruct these complex motions effectively.
While Ebsynth excels in scenarios involving rigid motion, our method offers a broader range of applications. I would suggest trying out both tools and deciding based on your specific requirements and the nature of the video sequences you're working with.
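To make the distinction concrete, here is a minimal, hypothetical sketch (not the project's actual code) of the idea that a canonical image can be *learned* by minimizing reconstruction error across all frames, rather than copied from one frame as Ebsynth does. The toy "frames" are shifted copies of a base image with noise, the per-frame motion is assumed known, and plain gradient descent recovers a canonical image that aggregates every frame:

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.random((8, 8))            # hypothetical underlying "scene"
shifts = [0, 1, 2]                   # assumed known per-frame motion
frames = [np.roll(base, s, axis=1) + 0.01 * rng.standard_normal((8, 8))
          for s in shifts]

# Learn the canonical image C by gradient descent on the reconstruction
# loss sum_t ||warp(C, s_t) - frame_t||^2, instead of picking one frame.
C = np.zeros((8, 8))
lr = 0.1
for _ in range(200):
    grad = np.zeros_like(C)
    for s, f in zip(shifts, frames):
        # residual in frame space, warped back into canonical space
        grad += np.roll(np.roll(C, s, axis=1) - f, -s, axis=1)
    C -= lr * grad / len(frames)

print(np.abs(C - base).max())        # small: C averages out per-frame noise
```

In this toy setting the learned canonical image converges to the noise-averaged scene, which is exactly why a learned canonical can encode more of the sequence than any single frame; the real method additionally learns the deformation itself with implicit representations rather than assuming it.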
Ebsynth's site is https://ebsynth.com/