Onur Beker, Mohammad Mohammadi, Amir Zamir
TL;DR: We introduce PALMER, a long-horizon planning method that directly operates on high-dimensional sensory input observable by an agent on its own (e.g., images from an onboard camera). Our key idea is to retrieve previously observed trajectory segments from a replay buffer and restitch them into approximately optimal paths that connect any given pair of start and goal states. This is achieved by combining classical sampling-based planning algorithms (e.g., PRM, RRT) with learning-based perceptual representations that are informed of actions and their consequences.
To achieve autonomy in a priori unknown real-world scenarios, agents should be able to:
- act directly from their own sensory observations, without assuming auxiliary instrumentation in their environment (e.g., a precomputed map, or an external mechanism to compute rewards).
- learn from past experience to continually adapt and improve after deployment.
- be capable of long-horizon planning.
Classical planning algorithms (e.g., PRM, RRT) are proficient at handling long-horizon planning. Deep learning based methods, in turn, can provide the necessary representations to address the other two requirements, by modeling statistical contingencies between sensory observations.
In this direction, we introduce a general-purpose planning algorithm called PALMER that combines classical sampling-based planning algorithms with learning-based perceptual representations.
- For training these representations, we combine Q-learning with contrastive representation learning to create a latent space where the distance between the embeddings of two states captures how easily an optimal policy can traverse between them (a minimal sketch of this idea follows this list).
- For planning with these perceptual representations, we re-purpose classical sampling-based planning algorithms to retrieve previously observed trajectory segments from a replay buffer and restitch them into approximately optimal paths that connect any given pair of start and goal states.
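As a rough illustration of the first point above, here is a minimal PyTorch sketch in which an encoder is trained so that the Euclidean distance between two state embeddings regresses toward the number of steps separating those states in a stored trajectory, used here as a stand-in for the Q-derived reachability estimate. The network sizes, the step-count target, and the clipping horizon are illustrative assumptions, not the exact training recipe from the paper.

```python
# Hedged sketch: train an encoder so that latent distance between two states
# approximates how many steps an optimal policy needs to traverse between them.
# The in-trajectory step count stands in for the Q-derived reachability estimate.
import torch
import torch.nn as nn

class StateEncoder(nn.Module):
    """Maps raw observations (flattened images here) to a latent embedding."""
    def __init__(self, obs_dim: int, latent_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def reachability_loss(encoder, obs_a, obs_b, step_gap, max_horizon=20.0):
    """Pull the embedding distance toward the (clipped) number of steps
    separating two states sampled from the same stored trajectory."""
    d = torch.norm(encoder(obs_a) - encoder(obs_b), dim=-1)
    target = torch.clamp(step_gap, max=max_horizon)
    return ((d - target) ** 2).mean()

# Toy usage with random tensors standing in for replay-buffer observations.
if __name__ == "__main__":
    obs_dim, batch = 64, 128
    enc = StateEncoder(obs_dim)
    opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
    obs_a = torch.randn(batch, obs_dim)
    obs_b = torch.randn(batch, obs_dim)
    step_gap = torch.randint(1, 40, (batch,)).float()  # steps between each pair
    for _ in range(10):
        loss = reachability_loss(enc, obs_a, obs_b, step_gap)
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"final loss: {loss.item():.3f}")
```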
This creates a tight feedback loop between representation learning, memory, reinforcement learning, and sampling-based planning. The end result is an experiential framework for long-horizon planning that is more robust and sample-efficient than existing methods.
- How to retrieve past trajectory segments from a replay buffer / memory? → by using offline reinforcement learning for contrastive representation learning.
- How to restitch these trajectory segments into a new path? → by repurposing the main subroutines of classical sampling-based planning algorithms (a planning sketch follows this list).
- What makes PALMER robust and sample-efficient? → it explicitly checks back with a memory / training dataset whenever it makes test-time decisions.
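The restitching step can be pictured as a PRM-style shortest-path search over replay-buffer states. The sketch below builds a graph whose nodes are (already embedded) buffer states, connects pairs whose latent distance falls under a connection radius, and runs Dijkstra to recover an approximately shortest chain between start and goal. The connection radius, the random stand-in embeddings, and the plain Dijkstra routine are illustrative assumptions rather than the exact subroutines used in PALMER.

```python
# Hedged sketch: a PRM-style "retrieve and restitch" planner over a replay buffer.
# Nodes are stored observations in the learned latent space; edges connect pairs
# whose latent distance (a proxy for traversal cost) is below a threshold, and
# Dijkstra recovers an approximately shortest chain of buffer states.
import heapq
import numpy as np

def build_graph(embeddings: np.ndarray, connect_radius: float):
    """Connect every pair of buffer states whose embedding distance is small
    enough that a stored trajectory segment plausibly links them."""
    n = len(embeddings)
    dists = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    graph = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(n):
            if i != j and dists[i, j] <= connect_radius:
                graph[i].append((j, dists[i, j]))
    return graph

def dijkstra(graph, start: int, goal: int):
    """Shortest chain of replay-buffer states from start to goal."""
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph[node]:
            if nxt not in visited:
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return None, float("inf")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    latents = rng.normal(size=(50, 2))  # stand-in for encoded buffer states
    graph = build_graph(latents, connect_radius=2.0)
    path, cost = dijkstra(graph, start=0, goal=49)
    if path is None:
        print("no path found; try a larger connect_radius")
    else:
        print("restitched path through buffer indices:", path, "cost:", round(cost, 2))
```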
Please see SETUP.md for instructions.
@article{beker2022palmer,
  author  = {Onur Beker and Mohammad Mohammadi and Amir Zamir},
  title   = {{PALMER}: Perception-Action Loop with Memory for Long-Horizon Planning},
  journal = {arXiv preprint arXiv:coming soon!},
  year    = {2022},
}