
Some questions about future_prediction.py #14

Closed

okay-okay opened this issue Oct 20, 2021 · 3 comments

Comments


okay-okay commented Oct 20, 2021

Hi,

I'm trying to re-implement the model from the paper. For a time horizon of 2 seconds I'm able to reach the same recall; however, increasing that time horizon didn't result in any increase in accuracy or recall. I noticed that here:

`self.assign_to_centroids = assign_to_centroids`

there's a block of code on k-means/centroids that isn't described in the paper, so I'm unsure whether this is the insight I'm missing for longer horizons. Would you be able to shed some light on what this code is doing? Additionally, do you have any tips on how to get an increase in accuracy/recall when increasing the horizon time?

@rohitgirdhar
Contributor

Sorry for the confusion: this portion of the code is not actually used in the paper; it's from some initial experiments I tried with quantized features. You can just ignore that part from the point of view of the paper.
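
(Editor's note: for readers wondering what such a block typically does, below is a minimal, hypothetical sketch of assigning features to pre-computed k-means centroids, i.e. vector quantization. The function name, tensor shapes, and example values are assumptions for illustration only, not the actual code in this repository.)

```python
import torch

def assign_to_centroids(features: torch.Tensor, centroids: torch.Tensor) -> torch.Tensor:
    """Return, for each feature vector, the index of its nearest centroid.

    features:  (N, D) float tensor of feature vectors.
    centroids: (K, D) float tensor of k-means centroids.
    """
    # Pairwise Euclidean distances between features and centroids: (N, K)
    dists = torch.cdist(features, centroids, p=2)
    # Quantize each feature to the id of its closest centroid: (N,)
    return dists.argmin(dim=1)

# Illustration: quantize 8 random 256-d features against 16 centroids
feats = torch.randn(8, 256)
cents = torch.randn(16, 256)
codes = assign_to_centroids(feats, cents)  # tensor of 8 centroid indices
```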

@okay-okay
Author

I see. Do you have any insight into how you were able to reach higher accuracy/recall at longer horizon times? For longer horizons the re-implemented model overfits and there's no performance boost from using a longer horizon; are we missing anything when training for different horizons?

@rohitgirdhar
Contributor

Hi, I'm not sure what might be causing the overfitting. You should be able to experiment with different horizons in this codebase by setting `data_train.num_frames` and `data_eval.num_frames`.
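
(Editor's note: a hedged sketch of what adjusting those settings might look like, assuming an OmegaConf/Hydra-style nested config with `data_train.num_frames` and `data_eval.num_frames` keys; the exact config layout and default values of this codebase may differ.)

```python
from omegaconf import OmegaConf

# Hypothetical config layout; the key names follow the comment above, while
# the surrounding structure and values are assumptions for illustration.
cfg = OmegaConf.create({
    "data_train": {"num_frames": 10},  # frames sampled per training clip
    "data_eval": {"num_frames": 10},   # frames sampled per evaluation clip
})

# Sample longer clips, e.g. to experiment with a longer anticipation horizon
OmegaConf.update(cfg, "data_train.num_frames", 20)
OmegaConf.update(cfg, "data_eval.num_frames", 20)

print(OmegaConf.to_yaml(cfg))
```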
