
An important issue. #17

Closed
kingwmk opened this issue Jul 3, 2019 · 6 comments

@kingwmk

kingwmk commented Jul 3, 2019

In the test phase, the encoder sees ground-truth data that it should not see, resulting in artificially high precision. Could you explain this?

@tkipf
Collaborator

tkipf commented Jul 3, 2019

We always (have to) condition the generative model on some initial trajectory, even at test time. Is this what you are referring to?

@kingwmk
Author

kingwmk commented Jul 3, 2019

The code in modules:

```python
if dynamic_graph and step >= burn_in_steps:
    # NOTE: Assumes burn_in_steps = args.timesteps
    logits = encoder(
        data[:, :, step - burn_in_steps:step, :].contiguous(),
        rel_rec, rel_send)
```

@tkipf
Collaborator

tkipf commented Jul 3, 2019

Yes, we are conditioning the generative model on an initial sub-sequence (a few steps of ‘real’ data) from which our model predicts future time steps. Of course we only evaluate the predictive quality on these ‘future’ time steps and discard the conditioning. This can be seen as a ‘burn-in’ phase for the recurrent generative model, as outlined in the code that you mentioned.
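Roughly, the pattern looks like this (a minimal sketch; `decoder`, its signature, and the tensor shapes are placeholders rather than this repo's exact API):

```python
import torch

def rollout(decoder, data, edges, burn_in_steps, pred_steps):
    # data: [batch, num_atoms, num_timesteps, num_dims]
    # `decoder` is a placeholder: (input, edges, hidden) -> (pred, hidden).
    preds = []
    inp = data[:, :, 0, :]
    hidden = None
    for step in range(burn_in_steps + pred_steps - 1):
        pred, hidden = decoder(inp, edges, hidden)
        preds.append(pred)
        if step + 1 < burn_in_steps:
            # Burn-in phase: teacher-force with the ground-truth frame.
            inp = data[:, :, step + 1, :]
        else:
            # Prediction phase: free-run on the model's own output.
            inp = pred
    # Score only the predictions made after the burn-in phase.
    return torch.stack(preds, dim=2)[:, :, burn_in_steps - 1:, :]
```

During burn-in the next input is taken from ground truth; afterwards the model feeds back its own outputs, and only those free-running steps count towards evaluation.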

@tkipf tkipf closed this as completed Jul 3, 2019
@kingwmk
Author

kingwmk commented Jul 3, 2019

Yes. However, when employing the dynamic graph at test time, even after the burn-in steps the encoder still uses ground-truth data (`data[:, :, step - burn_in_steps:step, :]`) for inference, so it sees ground truth that it should not see. Is that right? I mean, after the burn-in steps, shouldn't the encoder use the predicted data for its reasoning?

@kingwmk
Author

kingwmk commented Jul 3, 2019

Thank you for your kind and timely response, and thank you very much for your contribution to the community! I don't mean this as criticism; I just want to discuss the issue and learn.

@tkipf
Collaborator

tkipf commented Jul 3, 2019

I see what you're saying. Yes, this setting is indeed equivalent to teacher forcing for re-estimating the discrete latent graph (in the setting denoted by "dynamic graph"). The decoder, however, is only conditioned on its own past predictions plus the discrete latent graph, which can communicate only very little information about the (past) ground-truth trajectory.

If you would like to deploy this in an online setting, you would have to use the model's own predictions even for dynamic-graph re-estimation.
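A minimal sketch of that change (`sample_edges` and the `decoder` signature are placeholders, not this repo's exact API; the point is that the encoder window fills up with the model's own outputs after burn-in):

```python
import torch

def online_rollout(encoder, decoder, data, rel_rec, rel_send,
                   sample_edges, burn_in_steps, pred_steps):
    # Seed the history with the ground-truth burn-in frames only.
    history = [data[:, :, t, :] for t in range(burn_in_steps)]
    inp, hidden, preds = history[-1], None, []
    for _ in range(pred_steps):
        # Encoder window: the most recent burn_in_steps frames. After a few
        # iterations these are entirely the model's own predictions.
        window = torch.stack(history[-burn_in_steps:], dim=2)
        logits = encoder(window.contiguous(), rel_rec, rel_send)
        edges = sample_edges(logits)  # e.g. a Gumbel-softmax sample
        pred, hidden = decoder(inp, edges, hidden)
        history.append(pred)
        preds.append(pred)
        inp = pred
    return torch.stack(preds, dim=2)
```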

I agree that the green line in Figure 6a (Motion capture data), which is the only result we show that uses this setting, might be slightly affected by this and should be interpreted with care.
