An important issue. #17
We always (have to) condition the generative model on some initial trajectory, even at test time. Is this what you are referring to?
The code in question, in `modules` at line 638: `if dynamic_graph and step >= burn_in_steps:`
Yes, we are conditioning the generative model on an initial sub-sequence (a few steps of 'real' data), from which our model then predicts future time steps. Of course, we only evaluate predictive quality on these 'future' time steps and discard the conditioning. This can be seen as a 'burn-in' phase for the recurrent generative model, as outlined in the code you mentioned.
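For illustration, here is a minimal sketch of the burn-in rollout described above. The function names (`rollout`, `model_step`) and the 1-D toy data are hypothetical, not the repo's actual API; the point is only the control flow: ground truth during burn-in, free-running prediction afterwards, and evaluation only on the post-burn-in steps.

```python
import numpy as np

def rollout(model_step, data, burn_in_steps):
    """Hypothetical sketch of burn-in conditioning.

    data: ground-truth trajectory, shape [timesteps] (toy 1-D case).
    model_step: maps the current state to a one-step prediction.
    For the first `burn_in_steps` steps the model is conditioned on
    ground truth; afterwards it is fed its own predictions.
    """
    preds = []
    state = data[0]
    for step in range(1, len(data)):
        if step <= burn_in_steps:
            state = data[step]           # burn-in: condition on ground truth
        else:
            state = model_step(state)    # free-running: use own prediction
        preds.append(state)
    # discard the burn-in phase; only evaluate 'future' predictions
    return np.stack(preds[burn_in_steps:])

# toy usage: dynamics that exactly add 1 per step
preds = rollout(lambda x: x + 1.0, np.arange(10.0), burn_in_steps=3)
```

With these exact toy dynamics the free-running predictions match the ground truth, which makes the bookkeeping (which steps are conditioned vs. predicted vs. evaluated) easy to check.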
Yes. However, when employing the dynamic graph in the test phase, after the burn-in steps the encoder still uses the ground-truth data, as in `data[:, :, step - burn_in_steps:step, :]`, for inference, so it sees ground-truth data that it should not see. Is that right? I mean, after the burn-in steps, shouldn't the encoder use the predicted data for inference?
Thank you for your kind and timely response, and thank you very much for your contribution to the community! I am not being malicious; I just want to exchange ideas, discuss the issue, and learn.
I see what you're saying. Yes, this is indeed a setting equivalent to teacher forcing for re-estimating the discrete latent graph (in the setting denoted "dynamic graph"), but the decoder is only conditioned on its own past predictions plus the discrete latent graph, which can communicate only very little information about the (past) ground-truth trajectory. If you would like to deploy this in an online setting, you would have to use the model's own predictions even for dynamic-graph re-estimation. I agree that the green line in Figure 6a (motion capture data), which is the only result we show that uses this setting, might be slightly affected by this and should be interpreted with care.
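To make the proposed online variant concrete, here is a toy sketch in which the latent graph is re-estimated from the model's own past predictions rather than from the ground-truth window. The names (`online_rollout`, `encode`, `decode`) and the scalar "graph" are purely illustrative assumptions, not the repo's actual interfaces.

```python
import numpy as np

def online_rollout(encode, decode, data, burn_in_steps):
    """Hypothetical online variant of dynamic-graph re-estimation.

    encode: maps a window of past states to a latent 'graph'.
    decode: maps (graph, current state) to the next state.
    After burn-in, the encoder only ever sees the model's own
    predictions, never the ground-truth future.
    """
    history = list(data[:burn_in_steps])        # seed with the conditioning window
    graph = encode(np.stack(history))
    preds = []
    for step in range(burn_in_steps, len(data)):
        nxt = decode(graph, history[-1])        # decoder uses own past prediction
        preds.append(nxt)
        history.append(nxt)                     # feed prediction back into history
        # re-estimate the graph from predicted (not ground-truth) states
        graph = encode(np.stack(history[-burn_in_steps:]))
    return np.stack(preds)

# toy usage: the 'graph' is the window mean; the decoder just emits it
preds = online_rollout(lambda w: float(w.mean()),
                       lambda g, s: g,
                       np.ones(6), burn_in_steps=2)
```

The key design difference from the teacher-forced setting discussed above is the window passed to `encode`: it is sliced from `history`, which after burn-in contains only predictions, instead of from the ground-truth `data` tensor.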
In the test phase, the encoder sees ground-truth data that it should not see, resulting in higher accuracy. May I ask for an explanation?