# Leveraging Trajectory Prediction for Pedestrian Video Anomaly Detection
Asiegbu Miracle Kanu-Asiegbu, Ram Vasudevan, and Xiaoxiao Du
## Clone Repo

```
git clone --recurse-submodules https://github.com/akanuasiegbu/Leveraging-Trajectory-Prediction-for-Pedestrian-Video-Anomaly-Detection.git
```
## Installation
- scipy==1.4.1
- matplotlib==3.3.1
- Pillow==7.2.0
- scikit_learn==0.23.2
- opencv-python==4.4.0.42
- jupyter
- jupyterthemes==0.20.0
- hyperas==0.4.1
- pandas==1.1.2
- seaborn==0.11.0
- tensorflow_addons==0.11.2
- tensorflow_datasets
- wandb==0.10.12
- more_itertools==8.8.0
You can also use Docker with `docker/Dockerfile`. Note that the `PYTHONPATH` is set inside the Dockerfile, so you will need to adjust that path for your setup: `ENV PYTHONPATH "/mnt/roahm/users/akanu/projects/anomalous_pred/custom_functions:/home/akanu"`.
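As an illustration of the note above, a minimal Dockerfile fragment that overrides `PYTHONPATH` might look like the following. This is a sketch only: the base image and the paths are placeholders, not the repo's exact Dockerfile, and should point at your own checkout.

```dockerfile
# Sketch only: base image and paths are placeholders, not the repo's actual Dockerfile.
FROM tensorflow/tensorflow:2.3.0-gpu
# Point PYTHONPATH at your own clone instead of the author's machine-specific paths.
ENV PYTHONPATH "/path/to/anomalous_pred/custom_functions:/path/to/home"
WORKDIR /workspace
```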
## Step 1: Download Dataset
- The extracted bounding box trajectories for Avenue and ShanghaiTech, with the anomaly labels appended, can be found here.
- To recreate the input bounding box trajectories:
  - Download the Avenue and ShanghaiTech datasets
  - Use Deep-SORT-YOLOv4 at commit a4b7d2e
## Step 2: Training
We used two models in our experiments: a Long Short-Term Memory (LSTM) model and the BiTrap model.
### Training LSTM Models
- Users can train their own LSTM models on Avenue and ShanghaiTech.
- Training on Avenue:
  ```
  python models.py
  ```
  - In `config.py`, change `hyparams['input_seq']` and `hyparams['pred_seq']` to match the input/output trajectory lengths
- Training on ShanghaiTech:
  ```
  python models.py
  ```
  - In `config.py`, change `hyparams['input_seq']` and `hyparams['pred_seq']` to match the input/output trajectory lengths
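For illustration, the relevant trajectory-length entries in `config.py` might look like the following. This is a hypothetical sketch; the actual dictionary in the repo likely contains many more keys.

```python
# Sketch of the trajectory-length settings referenced above (hypothetical values).
# 'input_seq' is the number of observed frames fed to the LSTM,
# 'pred_seq' is the number of future frames it predicts.
hyparams = {
    'input_seq': 3,  # observed trajectory length
    'pred_seq': 3,   # predicted trajectory length
}
```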
### Training BiTrap Model
- To train BiTrap models, refer to the forked repo here.
Train on the Avenue dataset:
```
cd bitrap_leveraging
python tools/train.py --config_file configs/avenue.yml
```
Train on the ShanghaiTech dataset:
```
cd bitrap_leveraging
python tools/train.py --config_file configs/st.yml
```
To train/inference on the CPU or GPU, simply add `DEVICE='cpu'` or `DEVICE='cuda'` to the command. By default, we use the GPU for both training and inference.

Note that you must set the input and output lengths to the same values in the YML file used (`INPUT_LEN` and `PRED_LEN`) and in `bitrap_leveraging/datasets/config_for_my_data.py` (`input_seq` and `pred_seq`).
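Since the YML lengths and the Python config lengths must agree, a small sanity check can catch mismatches early. The helper below is hypothetical (not part of the repo); it assumes both configs have been loaded into plain dictionaries.

```python
# Hypothetical sanity check: the BiTrap YML lengths (INPUT_LEN / PRED_LEN)
# must match the Python config lengths (input_seq / pred_seq).
def lengths_consistent(yml_cfg: dict, py_cfg: dict) -> bool:
    return (yml_cfg['INPUT_LEN'] == py_cfg['input_seq']
            and yml_cfg['PRED_LEN'] == py_cfg['pred_seq'])

# Example: both configs use 3-in / 3-out trajectories, so the check passes.
yml = {'INPUT_LEN': 3, 'PRED_LEN': 3}
py = {'input_seq': 3, 'pred_seq': 3}
assert lengths_consistent(yml, py)
```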
## Step 3: Inference
**Pretrained BiTrap Models:** Trained BiTrap models for Avenue and ShanghaiTech can be found here.

**Pretrained LSTM Models:** Trained LSTM models for Avenue and ShanghaiTech can be found here.
### LSTM Inference
We do not explicitly save the LSTM trajectory outputs to a file (such as a pkl). Therefore, the inference and AUC calculation steps for the LSTM model are performed simultaneously. Please refer to the LSTM AUC Calculation section below.
### BiTrap Inference
To obtain the BiTrap PKL files containing the pedestrian trajectories, use the commands below.

Test on the Avenue dataset:
```
cd bitrap_leveraging
python tools/test.py --config_file configs/avenue.yml CKPT_DIR **DIR_TO_CKPT**
```
Test on the ShanghaiTech dataset:
```
cd bitrap_leveraging
python tools/test.py --config_file configs/st.yml CKPT_DIR **DIR_TO_CKPT**
```
### PKL Files
BiTrap pkl files can be found here.
- Download the `output_bitrap` folder, which contains the pkl file folders for the Avenue and ShanghaiTech datasets.
- Naming convention: `in_3_out_3_K_1` means the input and output trajectory lengths are both set to 3, and K=1 means BiTrap is used as a unimodal model.
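The naming convention above is regular enough to parse programmatically. The helper below is a hypothetical illustration (not part of the repo) that extracts the input length, output length, and number of modes K from a folder name.

```python
import re

# Hypothetical helper: parse a pkl folder name like 'in_3_out_3_K_1'
# into its input length, output length, and number of modes K.
def parse_pkl_folder(name: str) -> dict:
    m = re.fullmatch(r'in_(\d+)_out_(\d+)_K_(\d+)', name)
    if m is None:
        raise ValueError(f'unexpected folder name: {name}')
    input_len, output_len, k = map(int, m.groups())
    return {'input_seq': input_len, 'pred_seq': output_len, 'K': k}

# K == 1 means BiTrap was run as a unimodal predictor.
print(parse_pkl_folder('in_3_out_3_K_1'))
# -> {'input_seq': 3, 'pred_seq': 3, 'K': 1}
```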
## Step 4: AUC Calculation
### BiTrap AUC Calculation
- In `experiments_code/run_bitrap_auc.py`, make sure `exp['model_name']='bitrap'`
- Then set `hyparams['input_seq']` and `hyparams['pred_seq']` to the desired lengths
- Set `hyparams['metric']` to either `'giou'`, `'l2'`, or `'iou'`
- Set `hyparams['errortype']` to either `'error_summed'` or `'error_flattened'`
- Change the `load_pkl_file` variable located in `run_bitrap_auc.py` to the desired pkl file location
- Then run:
  ```
  python run_bitrap_auc.py
  ```
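The metric options above compare predicted and ground-truth bounding boxes. As a rough sketch (not the repo's exact implementation), IoU and GIoU for axis-aligned boxes in `(x1, y1, x2, y2)` form can be computed as:

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes, in [0, 1].
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def giou(a, b):
    # Generalized IoU: IoU minus the fraction of the smallest enclosing box
    # not covered by the union; ranges over (-1, 1] and penalizes disjoint boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    enclose = (cx2 - cx1) * (cy2 - cy1)
    iou_val = inter / union if union > 0 else 0.0
    if enclose > 0:
        return iou_val - (enclose - union) / enclose
    return iou_val
```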
### LSTM AUC Calculation
- In `experiments_code/run_lstm_auc.py`, make sure `exp['model_name']='lstm_network'`
- Then set `hyparams['input_seq']` and `hyparams['pred_seq']` to the desired lengths
- Set `hyparams['metric']` to either `'giou'`, `'l2'`, or `'iou'`
- Set `hyparams['errortype']` to either `'error_summed'` or `'error_flattened'`
- Change the `pretrained_model_loc` variable located in `run_lstm_auc.py` to the desired location of the pretrained LSTM model
- Then run:
  ```
  python run_lstm_auc.py
  ```
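The two `hyparams['errortype']` options above differ in how per-frame prediction errors along a trajectory are aggregated into anomaly scores. The sketch below is illustrative only: the function and variable names mirror the config options but are not the repo's actual code.

```python
# Sketch: given per-frame errors for each predicted trajectory,
# 'error_summed' yields one score per trajectory, while
# 'error_flattened' keeps one score per frame.
def aggregate_errors(per_frame_errors, errortype):
    if errortype == 'error_summed':
        return [sum(traj) for traj in per_frame_errors]
    if errortype == 'error_flattened':
        return [e for traj in per_frame_errors for e in traj]
    raise ValueError(f'unknown errortype: {errortype}')

errors = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
print(aggregate_errors(errors, 'error_summed'))     # one score per trajectory
print(aggregate_errors(errors, 'error_flattened'))  # one score per frame
```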
If you want to run multiple LSTM/AUC experiments, refer to `run_quick.py`.
## Citation
If you found this repo useful, feel free to cite:
```
@INPROCEEDINGS{9660004,
  author={Kanu-Asiegbu, Asiegbu Miracle and Vasudevan, Ram and Du, Xiaoxiao},
  booktitle={2021 IEEE Symposium Series on Computational Intelligence (SSCI)},
  title={Leveraging Trajectory Prediction for Pedestrian Video Anomaly Detection},
  year={2021},
  volume={},
  number={},
  pages={01-08},
  doi={10.1109/SSCI50451.2021.9660004}}
```