ValueError: All hk.Modules must be initialized inside an hk.transform #80
Comments
I'm facing the same issue. Have you fixed it?
Yup, I forgot that even the TAPIR constructor needs to be called inside a Haiku transform. I've attached a working version of this script, but it's a bit ugly; we may need to work on simplifying this further.
And you, my friend, are a true hero.
Hi, your work is pretty good, but when I tried to modify your code, I found that performance decreased a lot.
Here is the video I tried to track. The first frame is the query frame; the other frames are the tracked frames. Performance is poor in the first few frames and the last few frames. (I plot all points, visible and occluded, into the frames without removing the occluded ones.) gen.mp4
Yes, if you want to track across videos, we recommend that you have separate forward passes to extract features and to perform tracking; the model hasn't seen cuts during training, so it tends to get confused by them. That said, these results look extremely jittery. What points are you trying to track? The model will always struggle to track textureless regions, but it should do OK for points on the objects. I suspect there's something wrong with the way you're using the causal state, but it's hard to say without seeing the code. (FYI, we're hoping to release a better version of this interface in the next few days, as soon as I get around to updating our colabs.)
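The recommended structure can be sketched as two passes: extract query features from the query frame once, initialize the causal state once, then thread that state through a per-frame loop. None of the function names below are real tapnet APIs; they are toy stand-ins (with dummy numpy math) purely to show the data flow:

```python
import numpy as np

def extract_query_features(query_frame, query_points):
    # Stand-in: sample "features" at the query points.
    return np.stack([query_frame[y, x] for y, x in query_points])

def init_causal_state(num_points):
    # Stand-in for the model's per-point recurrent state.
    return np.zeros((num_points, 4))

def track_frame(frame, query_features, state):
    # Stand-in per-frame update; the real model returns tracks,
    # visibility, and an updated causal state.
    new_state = 0.9 * state + 0.1  # toy recurrent update
    tracks = np.zeros((len(query_features), 2))
    return tracks, new_state

video = np.zeros((10, 32, 32, 3))   # 10 dummy frames
query_points = [(5, 5), (10, 20)]   # (row, col) in the query frame

# Pass 1: features come from the query frame only.
query_features = extract_query_features(video[0], query_points)

# Pass 2: online tracking, carrying the causal state across frames.
# Resetting the state mid-video, or feeding frames from a different
# video, is what confuses the model.
state = init_causal_state(len(query_points))
all_tracks = []
for frame in video[1:]:
    tracks, state = track_frame(frame, query_features, state)
    all_tracks.append(tracks)
```

The key point is that `state` is created once and updated in place across the whole clip, never re-initialized between frames of the same video.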
The points I'm trying to track are seven active points obtained from two demos (six of them are on the red cube that is going to be grasped). As you say, the problem with my video is probably the causal state, so I tried separating the two parts. Below are the details; do you think my approach is right? (By the way, the video I showed above had a bug in the pixel indexing: OpenCV uses a different index order than I expected. The video I show below fixes this bug.) I look forward to seeing your update of the online version.
This is a demo video with the seven active points (tracked by TAPIR): active.mp4. Here is my video with the two parts separated (the query-feature part and the online-tracking part). The blue points are the active demo points, and the red points are the active points being tracked online. You can ignore the green and blue arrows (I use them for debugging): imitation.mp4
The tracks on your demo look pretty reasonable to me, which suggests you're using the code correctly. It looks like the objects are oriented differently between the demo video and the test-time video. This is a known weakness of TAPIR: there's relatively little in-plane rotation in Kubric, so the model doesn't have very good invariance to it. Also, are you re-using textures across different objects? This may cause problems as well (TAPIR may have spurious matches on the wrong object). In real videos, stochastic textures like wood grain are unlikely to repeat exactly. I expect BootsTAP will improve on both of these; we hope to release a causal BootsTAPIR model sometime in the next few weeks. However, it may not completely solve these problems. Also, are you plotting occluded points in the test-time video? I'd like to see a version where you don't do this; TAPIR shouldn't be marking those points as visible, since they're obviously wrong.
Thanks for your reply. imitation.mp4 |
What's more, I find that TAPIR spends more than half an hour running inference on a video with only 500 frames (the online version is much faster). I don't know the reason (maybe because I can't use parallel compilation, but I'm not sure). Could you tell me how to solve this problem? Thanks!
The original bug about "All hk.Modules must be initialized inside an hk.transform" in live_demo.py should be fixed with the latest push. Unfortunately, this push also includes the update that replaces the deprecated jax.tree_map with the very recently introduced jax.tree.map, so the codebase now requires a very recent version of jax in order to run. It should be safe to do a find/replace of jax.tree.map back to jax.tree_map to stay compatible with older versions; we aren't using any other new jax features.
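The suggested find/replace can be done in one small stdlib-only script. This is a sketch under the assumption that you want to patch every .py file in your checkout; point it at wherever you cloned tapnet:

```python
from pathlib import Path

def downgrade_tree_map(root):
    """Replace jax.tree.map with jax.tree_map in every .py file under root,
    so the code runs on jax versions that predate jax.tree.map."""
    changed = 0
    for path in Path(root).rglob("*.py"):
        text = path.read_text()
        if "jax.tree.map" in text:
            path.write_text(text.replace("jax.tree.map", "jax.tree_map"))
            changed += 1
    return changed  # number of files modified

# Example usage (path is an assumption about your checkout layout):
# downgrade_tree_map("tapnet")
```

Note that jax.tree_map still works on recent jax versions (it only emits a deprecation warning), so this direction of the replacement is safe either way.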
I followed all of the instructions in the "Live Demo" section of README.md, including installing the dependencies and updating the PYTHONPATH.
However, I get the following error when I run
python3 ./tapnet/live_demo.py