Adversarial Robustness Analysis of Trajectory Prediction

Requirements

Install the necessary packages.

pip install -r requirements.txt

The requirements.txt file includes the packages required by Trajectron++ plus a few tools, e.g., matplotlib for visualization and pyswarm for the PSO implementation.

Directory Structure and Definitions

Parameters:

Directories:

Format of the JSON-format input trajectory data

{
    "observe_length": int,
    "predict_length": int,
    "time_step": float,
    "feature_dimension": int, // extra features other than x-y location coordinates
    "objects": {
        "string object id": {
            "type": int,  // 1: small vehicle 2: large vehicle 3: pedestrian 4: unknown
            "complete": bool, // all time frames are filled
            "visible": bool, // the last frame of history is filled
            "observe_trace": [observe_length * 2],
            "observe_feature": [observe_length * feature_dimension],
            "observe_mask": [observe_length],
            "future_trace": [predict_length * 2],
            "future_feature": [predict_length * feature_dimension],
            "future_mask": [predict_length],
            "predict_trace": [predict_length * 2] // Empty before inference
        }, ...
    }
}
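The shape notation above (e.g., [observe_length * 2]) can be read as a nested list of shape observe_length x 2. Under that reading, which is an assumption, a test case can be sanity-checked with a small helper; none of these names come from the codebase:

```python
# Hypothetical shape check for the JSON-format input trajectory data.
# Assumption: "[observe_length * 2]" denotes a nested list of shape
# (observe_length, 2); validate_case is illustrative, not a repo API.

def validate_case(case):
    t_obs, t_pred = case["observe_length"], case["predict_length"]
    for obj in case["objects"].values():
        assert len(obj["observe_trace"]) == t_obs
        assert all(len(point) == 2 for point in obj["observe_trace"])
        assert len(obj["observe_mask"]) == t_obs
        assert len(obj["future_trace"]) == t_pred
        assert len(obj["future_mask"]) == t_pred
    return True

# Minimal example case following the schema above.
case = {
    "observe_length": 2, "predict_length": 1,
    "time_step": 0.5, "feature_dimension": 0,
    "objects": {
        "1": {
            "type": 1, "complete": True, "visible": True,
            "observe_trace": [[0.0, 0.0], [1.0, 1.0]],
            "observe_feature": [], "observe_mask": [1, 1],
            "future_trace": [[2.0, 2.0]],
            "future_feature": [], "future_mask": [1],
            "predict_trace": [],  # empty before inference
        }
    },
}
print(validate_case(case))  # True
```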

Format of the JSON-format output result data

{
    "perturbation": [observe_length+attack_length-1 * 2];
    "loss": number or dict,
    "obj_id": string,
    "attack_opts": dict, // other configuration or options of the attack
    "output_data": {
        "0": { // ID of the time frame (string)
            // The content is the same as the input trajectory data
        }, ...
    }
}
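A result file in this format can be loaded and summarized as sketched below; summarize_result is illustrative, not part of the codebase:

```python
# Hypothetical summary of a JSON-format output result file.
import json
import os
import tempfile

def summarize_result(path):
    with open(path) as f:
        result = json.load(f)
    # Frame IDs are strings, so sort them numerically.
    frames = sorted(result["output_data"], key=int)
    return {"obj_id": result["obj_id"],
            "loss": result["loss"],
            "frames": frames}

# Minimal example file following the schema above.
result = {"perturbation": [], "loss": 1.5, "obj_id": "1",
          "attack_opts": {}, "output_data": {"0": {}, "1": {}}}
path = os.path.join(tempfile.mkdtemp(), "result.json")
with open(path, "w") as f:
    json.dump(result, f)
print(summarize_result(path))
# {'obj_id': '1', 'loss': 1.5, 'frames': ['0', '1']}
```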

Steps to reproduce

Prepare datasets

First of all, we provide pre-generated test cases via Google Drive. Download the ZIP file and unzip it into the directory test/data. By doing so, you can skip the remaining steps in this subsection (except for the trajectron_map model on nuScenes, which still needs the map data from the nuScenes dataset).

First, place the datasets in directory /data following the README.md files in data/apolloscape, data/NGSIM, and data/nuScenes.

Second, this codebase translates the raw datasets into JSON-format test data using APIs we provide. Here we show a code sample for the Apolloscape dataset. The translation for the various datasets is implemented in /prediction/dataset.
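The translation loop can be sketched as follows. The real generator comes from a dataset class in /prediction/dataset (e.g., the Apolloscape one); a stub stands in here so the pattern is runnable on its own, and the output directory is a temporary one rather than test/data:

```python
# Sketch of the raw-dataset-to-JSON translation step. format_data below is
# a stand-in for a dataset class's format_data() generator.
import json
import os
import tempfile

def format_data():
    # Stand-in generator; the real one yields JSON-format test cases
    # built from the raw dataset files.
    for _ in range(2):
        yield {"observe_length": 6, "predict_length": 6,
               "time_step": 0.5, "feature_dimension": 0, "objects": {}}

out_dir = tempfile.mkdtemp()
for i, case in enumerate(format_data()):
    with open(os.path.join(out_dir, "{}.json".format(i)), "w") as f:
        json.dump(case, f)
print(sorted(os.listdir(out_dir)))  # ['0.json', '1.json']
```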

To quickly generate the JSON-format test cases, run scripts in directory test:

python generate_data.py ${dataset_name}

Prepare models

The models are trained separately for each dataset following the instructions from the model authors. The trained models should be placed in /test/data/${model_name}_${dataset_name}/model/${attack_mode}.
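For instance, the directory for a given combination can be assembled as below; the concrete values "grip", "apolloscape", and "original" are placeholders, not names taken from the repo:

```python
# Assemble the model directory path described above from its parameters.
# The example arguments are placeholders.
def model_dir(model_name, dataset_name, attack_mode, root="test/data"):
    return "{}/{}_{}/model/{}".format(root, model_name, dataset_name, attack_mode)

print(model_dir("grip", "apolloscape", "original"))
# test/data/grip_apolloscape/model/original
```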

The training code is not in this repo, but we provide pretrained models via Google Drive. Download the ZIP file and unzip it into the directory test/data.

Run normal prediction as well as the attack

Normal prediction, adversarial attack, and evaluation are done through the APIs normal_test, adv_attack, and evaluate_loss implemented in test_utils.py. As a quick start, execute test.py to run the whole pipeline.

python test.py --help

The script accepts the following parameters:

For executing normal tests or attacks on a specific test case, see the functions normal_sample and attack_sample in test/test_utils.py.

For developer

Add custom datasets

Similar to prediction/dataset/apolloscape.py, the developer should write a class inheriting prediction.dataset.base.BaseDataset and implement the interface format_data. format_data should be a generator that uses yield to output test cases in the JSON format defined above.
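A minimal sketch of such a class is shown below. BaseDataset here is a local stand-in for prediction.dataset.base.BaseDataset, whose real constructor signature may differ:

```python
# Minimal custom dataset sketch. BaseDataset is a stand-in for
# prediction.dataset.base.BaseDataset; its constructor here is assumed.
class BaseDataset:
    def __init__(self, obs_length, pred_length, time_step):
        self.obs_length = obs_length
        self.pred_length = pred_length
        self.time_step = time_step

class MyDataset(BaseDataset):
    def format_data(self):
        # Yield one JSON-format test case per time frame; a real
        # implementation would read the raw dataset files here.
        yield {
            "observe_length": self.obs_length,
            "predict_length": self.pred_length,
            "time_step": self.time_step,
            "feature_dimension": 0,
            "objects": {},
        }

cases = list(MyDataset(6, 6, 0.5).format_data())
print(len(cases))  # 1
```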

Update test/config.py.

Add custom prediction models

Similar to prediction/model/GRIP/interface.py, the developer should write a class inheriting prediction.model.base.interface.Interface. The developer should implement the run interface, which accepts a JSON-format test case (defined above) and a dictionary called perturbation.
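A sketch of such a wrapper follows. Interface is a local stand-in for prediction.model.base.interface.Interface, and the exact run() signature is an assumption based on the description above; a constant-position model stands in for a real predictor:

```python
# Sketch of a custom model interface. Interface and the run() signature
# are assumptions; the "model" simply repeats the last observed point.
class Interface:
    def __init__(self, obs_length, pred_length):
        self.obs_length = obs_length
        self.pred_length = pred_length

class MyModelInterface(Interface):
    def run(self, input_data, perturbation=None):
        # A real implementation would add the perturbation to the
        # observed traces, run the model, and fill predict_trace.
        for obj in input_data["objects"].values():
            last = obj["observe_trace"][-1]
            obj["predict_trace"] = [last] * input_data["predict_length"]
        return input_data

data = {"predict_length": 2, "objects": {
    "1": {"observe_trace": [[0.0, 0.0], [1.0, 1.0]], "predict_trace": []}}}
out = MyModelInterface(2, 2).run(data)
print(out["objects"]["1"]["predict_trace"])  # [[1.0, 1.0], [1.0, 1.0]]
```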

The perturbation structure is defined as follows. For more details, please see the implementation of prediction.attack.gradient.GradientAttacker.

{
  "obj_id": str - the vehicle id whose trajectory is to be perturbed,
  "loss": function instance, e.g., prediction.attack.loss.attack_loss,
  "value": {obj_id: torch tensor of perturbation},
  "ready_value": {obj_id: torch tensor of perturbation after the constraint checking},
  "attack_opts": {
    "type": str - evalaution metric in ["ade", "fde", "left", "right", "front", "rear"],
    ... other parameters used in loss function
  }
}
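Assembling this structure might look like the sketch below. Plain lists stand in for the torch tensors used in the real code, and dummy_loss is a placeholder for prediction.attack.loss.attack_loss:

```python
# Illustration of building the perturbation dictionary. Plain lists stand
# in for torch tensors; dummy_loss is a placeholder loss function.
def dummy_loss(output, obj_id):
    return 0.0

obj_id = "1"
perturbation = {
    "obj_id": obj_id,
    "loss": dummy_loss,
    # 6 frames of (dx, dy) offsets; a torch tensor in the real code.
    "value": {obj_id: [[0.0, 0.0]] * 6},
    # Same shape, after constraint checking.
    "ready_value": {obj_id: [[0.0, 0.0]] * 6},
    "attack_opts": {"type": "ade"},
}
print(sorted(perturbation))
# ['attack_opts', 'loss', 'obj_id', 'ready_value', 'value']
```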

Update test/config.py.

References