How is TapNet different from omnimotion? #23

Closed
yhyu13 opened this issue Jun 17, 2023 · 1 comment

yhyu13 commented Jun 17, 2023

https://github.com/qianqianwang68/omnimotion

Google has a similar work that appears to use the same test images as yours. How is your work fundamentally different from it?

I am a hobbyist, so I would be grateful if you could briefly explain the differences in purpose, approach, and results.

Thanks!

yangyi02 (Collaborator) commented

One notable distinction between OmniMotion and TAPIR is whether they rely on test-time optimization. As stated in the OmniMotion paper's abstract, it introduces a method for estimating dense, long-range motion from a video sequence through test-time optimization.

OmniMotion achieves impressive results by constructing a scene model for each individual video. Point tracking is a byproduct of this scene model, which can also generate pseudo-depth information and track occluded points as outputs. However, the trade-off is that OmniMotion requires training a model for each video during the inference stage.

On the other hand, TAP-Net, PIPs, and TAPIR primarily focus on point tracking. They employ a pretraining strategy using large-scale synthetic datasets such as FlyingThings and Kubric. The advantage of this approach is that it allows for direct inference on new videos without the need for training on each specific video (zero-shot inference).

When it comes to evaluating their performance, both OmniMotion and TAPIR are assessed on the TAP-Vid benchmark. TAPIR has so far shown superior results on DAVIS and Kinetics, with an AJ score of 61.3 compared to OmniMotion's 51.7. However, OmniMotion outperforms TAPIR on the textureless RGB-Stacking dataset, achieving an AJ of 77.5 versus TAPIR's 62.7.

In summary, the key distinction can be seen as OmniMotion's per-video (offline) optimization versus TAPIR's zero-shot (online) inference approach.
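
To make the contrast concrete, here is a minimal, purely illustrative Python sketch of the two inference patterns. The classes, shapes, and step counts are hypothetical placeholders for illustration only, not the actual APIs of the tapnet or omnimotion repositories.

```python
import numpy as np


class PretrainedTracker:
    """TAPIR-style tracker (hypothetical stand-in): weights are trained once
    on synthetic data (e.g. Kubric), so a new video only needs a single
    forward pass -- no training happens at inference time."""

    def track(self, video: np.ndarray, queries: np.ndarray) -> np.ndarray:
        # Placeholder for one forward pass of a pretrained network.
        num_frames = video.shape[0]
        # Dummy output of shape (num_queries, num_frames, 2): each query's
        # (y, x) position repeated over time.
        return np.repeat(queries[:, None, 1:], num_frames, axis=1)


class PerVideoTracker:
    """OmniMotion-style tracker (hypothetical stand-in): a scene
    representation is optimized for *this* video at test time, and point
    tracks are read out of the fitted model as a byproduct."""

    def __init__(self, video: np.ndarray, num_steps: int = 1000):
        self.video = video
        for _ in range(num_steps):
            pass  # placeholder for per-video gradient-descent optimization

    def track(self, queries: np.ndarray) -> np.ndarray:
        num_frames = self.video.shape[0]
        return np.repeat(queries[:, None, 1:], num_frames, axis=1)


video = np.zeros((8, 64, 64, 3), dtype=np.float32)        # (frames, H, W, C)
queries = np.array([[0, 10.0, 20.0]], dtype=np.float32)   # (frame, y, x)

tapir_like_tracks = PretrainedTracker().track(video, queries)           # zero-shot
omni_like_tracks = PerVideoTracker(video, num_steps=10).track(queries)  # test-time fit
```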
