Implement imitation learning baseline #31
Comments
Hi! First off, just to confirm: when you say "action shortest path", would I be correct in interpreting this to mean that the expert will be an instance of the ShortestPathFollower class? A really high-level view of what I hope to do is:
Does this seem okay?
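(For concreteness, here is a minimal sketch of driving ShortestPathFollower as the expert, assuming habitat-api's standard Env API; the config path and goal radius below are illustrative, not from this thread:)

```python
import habitat
from habitat.tasks.nav.shortest_path_follower import ShortestPathFollower

# Illustrative config path; any PointNav task config should work.
config = habitat.get_config("configs/tasks/pointnav.yaml")
env = habitat.Env(config=config)

# goal_radius is illustrative; it is typically tied to the task's success distance.
follower = ShortestPathFollower(env.sim, goal_radius=0.5, return_one_hot=False)

observations = env.reset()
trajectory = []  # (observation, expert_action) pairs for behavioral cloning
while not env.episode_over:
    action = follower.get_next_action(env.current_episode.goals[0].position)
    if action is None:  # some versions return None once the goal is reached
        break
    trajectory.append((observations, action))
    observations = env.step(action)
```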
Sorry for the delayed response.
I believe so, but someone like @jacobkrantz can confirm.
Unless I'm missing something, this sounds like a bad idea. Writing heavy observations (images, etc.) to disk from a simulator feels wrong as a design. Why can't the observation tensors be fed directly from the simulator to the model?
One additional note, re "(both the modes: "geodesic" and "greedy")": there isn't a difference between the two modes; both result in an extremely similar expert (the names are just bad; going to change them now).
There are some caveats with generating the dataset on-the-fly -- namely, there is a cost to switching scenes, so your episodes won't really be IID. For PointNav, I agree that on-the-fly is a must -- train has 5 million episodes, and that is just way too large to ever fit on disk (the images alone would take over 20 TB, even with excellent image compression).
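(A rough back-of-envelope check of that figure, under illustrative assumptions not stated in the thread: at ~50 steps per episode and ~80 KB per compressed frame, 5 × 10⁶ episodes × 50 frames × 80 KB ≈ 2 × 10¹³ bytes = 20 TB.)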
The reason behind storing an explicit reference to this "expert dataset" was to ensure that these episodes could be IID when being fed to train the model, but the memory blowup concern is the more pressing one. Does a strategy of alternating cycles of training the policy and generating expert trajectories sound good enough? To elaborate: suppose I wish to use 100 expert trajectories to train the model. I could generate 5 expert trajectories at a time, temporarily store them, train the model on them for some epochs, and then generate the next 5 trajectories while deleting the previously stored ones. Taking this idea to its logical extreme would be to take one (observation, action) pair from the expert at a time and train the model on it; however, this would break the IID assumption used in behavioral cloning, since consecutive pairs are correlated with each other. In my opinion, alternating cycles of training and generating allow for a compromise between memory consumption and keeping the training data IID. A sketch of this loop is below.
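(A minimal sketch of the alternating collect/train/delete loop described above, reusing the `env` and `follower` from the earlier snippet; `model`, `train_step`, and `num_epochs` are hypothetical placeholders for whatever behavioral-cloning setup is used:)

```python
import random

def collect_expert_trajectories(env, follower, num_episodes):
    """Roll out the expert and return a flat list of (observation, action) pairs."""
    pairs = []
    for _ in range(num_episodes):
        observations = env.reset()
        while not env.episode_over:
            action = follower.get_next_action(env.current_episode.goals[0].position)
            if action is None:
                break
            pairs.append((observations, action))
            observations = env.step(action)
    return pairs

# 20 cycles x 5 episodes = 100 expert trajectories overall.
for _ in range(20):
    buffer = collect_expert_trajectories(env, follower, num_episodes=5)
    for _ in range(num_epochs):          # a few BC epochs per cycle
        random.shuffle(buffer)           # shuffling de-correlates pairs within the buffer
        for observations, action in buffer:
            train_step(model, observations, action)  # hypothetical BC update
    del buffer                           # drop the trajectories before the next cycle
```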
Yeah, collecting a set of trajectories from a set of environments, learning on them, deleting them, and repeating makes sense to me.
This PR provides a much faster and more reliable greedy follower. While it doesn't generate the shortest path in action space, the paths it produces still tend to be very good.
Thank you @mukulkhanna, @erikwijmans, and @Skylion007 for the reviews. Closing this issue and opening another one for a DAgger baseline.
Implement an imitation learning baseline that uses the action shortest path of each episode for training.
Place it in https://github.com/facebookresearch/habitat-api/tree/master/baselines.