Forecasting Human Trajectory from Scene History

This repository contains the official implementation of our paper: Forecasting Human Trajectory from Scene History. Mancheng Meng, Ziyan Wu, Terrence Chen, Xiran Cai, Xiang Sean Zhou, Fan Yang*, Dinggang Shen. NeurIPS 2022. * Corresponding author. paper

(Figure: SHENet architecture overview)

Abstract: Predicting the future trajectory of a person remains a challenging problem, due to randomness and subjectivity. However, the moving patterns of humans in a constrained scenario typically conform, to a certain extent, to a limited number of regularities, because of the scenario restrictions (e.g., floor plan, roads, and obstacles) and person-person or person-object interactivity. Thus, an individual person in this scenario should follow one of these regularities as well. In other words, a person's subsequent trajectory has likely been traveled by others. Based on this hypothesis, we propose to forecast a person's future trajectory by learning from the implicit scene regularities. We call these regularities, inherently derived from the past dynamics of the people and the environment in the scene, scene history. We categorize scene history information into two types: historical group trajectory and individual-surroundings interaction. To exploit this information for trajectory prediction, we propose a novel framework, the Scene History Excavating Network (SHENet), where the scene history is leveraged in a simple yet effective way. In particular, we design two components: a group trajectory bank module to extract representative group trajectories as candidates for the future path, and a cross-modal interaction module to model the interaction between an individual's past trajectory and its surroundings for trajectory refinement. In addition, to mitigate the uncertainty in the ground truth trajectory, caused by the aforementioned randomness and subjectivity, we propose to include smoothness in the training process and evaluation metrics. We conduct extensive evaluations to validate the efficacy of the proposed framework on ETH and UCY, as well as on a new, challenging benchmark dataset PAV, demonstrating superior performance compared to state-of-the-art methods.
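As a rough intuition for the group trajectory bank idea described above, the sketch below clusters previously observed trajectories and retrieves the bank entry whose prefix best matches a new observation, treating its remainder as the candidate future path. This is our own toy illustration under simplifying assumptions (flattened 2D trajectories, plain k-means), not the paper's actual implementation; all names are hypothetical.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two flattened trajectories."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_bank(trajs, k, iters=20, seed=0):
    """Toy k-means over trajectories flattened to [x0, y0, x1, y1, ...].
    Stands in for the paper's trajectory clustering, which we do not
    reproduce here."""
    rng = random.Random(seed)
    centroids = rng.sample(trajs, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for t in trajs:
            clusters[min(range(k), key=lambda c: dist2(t, centroids[c]))].append(t)
        for c, members in enumerate(clusters):
            if members:  # keep old centroid if a cluster empties out
                centroids[c] = [sum(v) / len(members) for v in zip(*members)]
    return centroids

def candidate_future(bank, observed):
    """Pick the bank trajectory whose prefix best matches the observed
    part, and return its remainder as the candidate future path."""
    n = len(observed)
    best = min(bank, key=lambda t: dist2(t[:n], observed))
    return best[n:]
```

For example, with a bank built from rightward and upward toy trajectories, an observation moving rightward retrieves the rightward candidate; in SHENet this candidate would then be refined by the cross-modal interaction module.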

Installation

Environment Setup

We can create a conda environment using the following commands.

conda create --name SHENet
source activate SHENet
pip install -r requirements.txt

Data Setup

pretrained models

We provide a set of pretrained models (password: 7k7k) as follows:

Model training and evaluation

Code structure

The general structure of the project is as follows.

  1. utils/parser.py: training and testing options, default settings.
  2. datasets: scripts to process the original data.
  3. demo: cluster examples and visualization scripts.
  4. model: our model structure.
  5. tools: scripts to train/test our model.

Command line arguments

--data_dir (path to the dataset directories)
--input_n (number of input frames)
--output_n (number of output frames)
--origin_d_tra (dimension of the original trajectory features)
--origin_d_scene (dimension of the scene features)
--input_dim (dimension of the input coordinates)
--output_dim (dimension of the output coordinates)
--embed_dim (embedding dimension)
--vit_config_file (config file of the pretrained scene model)
--vit_checkpoint_file (checkpoint of the pretrained scene model)
--gpus (GPU ids)
--n_epochs (number of training epochs)

Usage

  1. train: python train.py --args
  2. test: python test.py --args
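Combining the flags above, a full invocation might look like the following. The path and values here are placeholders for illustration, not tested defaults of this repository.

```shell
python train.py --data_dir ./data/PAV --input_n 8 --output_n 12 --gpus 0 --n_epochs 50
```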

Results

  1. Tested on PAV:

     | Method | PETS        | ADL         | VENICE     | AVG         |
     |--------|-------------|-------------|------------|-------------|
     | Ours   | 34.49/78.40 | 14.42/38.67 | 7.76/18.31 | 18.89/45.13 |

  2. Tested on ETH/UCY:

     | Method | ETH       | HOTEL     | UNIV      | ZARA1     | ZARA2     | AVG       |
     |--------|-----------|-----------|-----------|-----------|-----------|-----------|
     | Ours   | 0.41/0.61 | 0.13/0.20 | 0.25/0.43 | 0.21/0.32 | 0.15/0.26 | 0.23/0.36 |

  3. Qualitative Results on PAV

(Figure: qualitative results on PAV)
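The paired numbers in the tables above appear to follow the usual trajectory-forecasting convention of ADE/FDE (average/final displacement error); that reading is our assumption, as this README does not name the metrics. A minimal sketch of the two:

```python
import math

def ade_fde(pred, gt):
    """ADE: mean pointwise displacement over the prediction horizon.
    FDE: displacement at the final predicted frame.
    pred and gt are equal-length lists of (x, y) positions."""
    dists = [math.hypot(px - gx, py - gy)
             for (px, py), (gx, gy) in zip(pred, gt)]
    return sum(dists) / len(dists), dists[-1]
```

For instance, a prediction that drifts linearly off a straight ground truth accumulates a small ADE but a larger FDE, since the final-frame error dominates.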

Acknowledgement

We thank the authors of ynet for the preprocessed data and code.

Citation

If you find our work useful, please cite our paper.