Code and model for "Peeking into the Future: Predicting Future Person Activities and Locations in Videos", Liang et al., CVPR 2019

Next

This repository contains the code and models for the following paper:

Peeking into the Future: Predicting Future Person Activities and Locations in Videos
Junwei Liang, Lu Jiang, Juan Carlos Niebles, Alexander Hauptmann, Li Fei-Fei
CVPR 2019

You can find more information at our Project Page.
Please note that this is not an officially supported Google product.

If you find this code useful in your research then please cite

@inproceedings{liang2019peekingfuture,
  title={Peeking into the Future: Predicting Future Person Activities and Locations in Videos},
  author={Junwei Liang and Lu Jiang and Juan Carlos Niebles and Alexander G. Hauptmann and Li Fei-Fei},
  booktitle={CVPR},
  year={2019}
}

Introduction

In applications such as self-driving cars and smart robot assistants, it is important for a system to be able to predict a person's future locations and activities. In this paper, we present an end-to-end neural network model that deciphers human behaviors to jointly predict their future paths/trajectories and their future activities from videos.

Below we show an example of the task. The green and yellow lines show two possible future trajectories, and two possible activities are shown in the green and yellow boxes. Depending on the future activity, the target person (top right) may take different paths, e.g. the yellow path for “loading” and the green path for “object transfer”.

Model

Given a sequence of video frames containing the person of interest, our model uses a person behavior module and a person interaction module to encode rich visual semantics into a feature tensor. We propose a novel person interaction module that takes into account both person-scene and person-object relations for joint activity and location prediction.
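The joint-prediction idea above can be illustrated with a toy NumPy schematic. This is purely illustrative and not the authors' model: the shapes, variable names, and the simple linear heads are all assumptions, standing in for the two feature modules whose fused output feeds both a trajectory head and an activity head.

```python
import numpy as np

# Toy schematic of joint activity/location prediction (not the paper's model):
# two module outputs are fused into one feature tensor, shared by two heads.
rng = np.random.RandomState(0)

T, D = 8, 32                        # observed timesteps, per-module feature size
behavior_feat = rng.randn(T, D)     # stand-in for person behavior module output
interaction_feat = rng.randn(T, D)  # stand-in for person interaction module output

# Fuse the two modules into a single feature tensor of shape (T, 2D).
features = np.concatenate([behavior_feat, interaction_feat], axis=-1)

# Two linear heads share the fused features (hypothetical sizes).
W_traj = rng.randn(2 * D, 2)        # per-step (x, y) location offset
W_act = rng.randn(2 * D, 5)         # scores over 5 hypothetical activity classes

future_offsets = features @ W_traj               # (T, 2) trajectory prediction
activity_scores = features.mean(axis=0) @ W_act  # (5,) activity prediction
print(future_offsets.shape, activity_scores.shape)  # → (8, 2) (5,)
```

The point of the sketch is only the shared feature tensor: because both heads read the same fused representation, the trajectory and activity predictions are coupled, which is what lets the predicted activity inform the predicted path.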

Dependencies

  • Python 2.7; TensorFlow == 1.10.0
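A minimal environment-setup sketch for these pinned versions (assuming `virtualenv` is installed; the environment name `next-env` is arbitrary) might look like:

```shell
# Create an isolated Python 2.7 environment and pin the TensorFlow version.
virtualenv -p python2.7 next-env
source next-env/bin/activate

# CPU build; use tensorflow-gpu==1.10.0 instead for GPU support.
pip install tensorflow==1.10.0

# Sanity-check the installed version (should print 1.10.0).
python -c "import tensorflow as tf; print(tf.__version__)"
```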

Pretrained Models

You can download the pretrained models by running `bash scripts/download_single_models.sh`. This will download the following models and requires about 5.8 GB of disk space:

  • next-models/actev_single_model/: This folder includes the single model for the ActEV experiment.
  • next-models/ethucy_single_model/: This folder includes five single models for the ETH/UCY leave-one-scene-out experiment.

Testing

Instructions for testing pretrained models can be found here.

Training new models

Instructions for training new models can be found here.
