AcinoSet: A 3D Pose Estimation Dataset and Baseline Models for Cheetahs in the Wild<img src="https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/1608473251355-R6MD2DPAGXD541O6KSPO/ke17ZwdGBToddI8pDm48kDJiRRinvyl0ibURJcD42oMUqsxRUqqbr1mOJYKfIPR7LoDQ9mXPOjoJoqy81S2I8N_N4V1vUb5AoIIIbLZhVYxCRW4BPu10St3TBAUQYVKcQRhUxETRWa-oq147TtIoC7IIYHcXSEvrmlBoYmbrKNZ_GGuik8tacc4P7_d_fn_0/cheetahTurn.png?format=2500w" width="375" title="AcinoSet" alt="Cheetah" align="right" vspace = "50">
Daniel Joska, Liam Clark, Naoya Muramatsu, Ricardo Jericevich, Fred Nicolls, Alexander Mathis, Mackenzie W. Mathis, Amir Patel
AcinoSet is a dataset of free-running cheetahs in the wild that contains 119,490 frames of multi-view, synchronized high-speed video footage, camera calibration files, and 7,588 human-annotated frames. We utilize markerless animal pose estimation with DeepLabCut to provide 2D keypoints in the 119K frames. Then, we use three methods that serve as strong baselines for 3D pose estimation tool development: traditional sparse bundle adjustment, an Extended Kalman Filter, and a trajectory optimization-based method we call Full Trajectory Estimation. The resulting 3D trajectories, human-checked 3D ground truth, and an interactive tool to inspect the data are also provided. We believe this dataset will be useful for a diverse range of fields, such as ecology, robotics, biomechanics, and computer vision.
AcinoSet code by:
Prerequisites
- Anaconda
- The dependencies defined in conda_envs/*.yml
What we provide:
- 7,588 ground truth 2D frames
- 119,490 processed frames with 2D keypoint estimation outputs (H5 files as in the DLC format, and raw video)
- this is currently organized by date > animal ID > "run/attempt"
- 3D files processed using our FTE baseline model. These can be used as 3D ground truth (see the loading sketch below).
  - These files are called `fte.pickle`, have a related `(n)_cam_scene_sba.json` file, and can be loaded in the GUI.
- A GUI to inspect the 3D dataset, which can be found here
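If you would rather inspect an FTE result programmatically than through the GUI, a minimal sketch is below. It only assumes the file is a standard Python pickle; the exact keys and array shapes may differ between releases, so print them before relying on any of them.

```python
import pickle

# Load an FTE result file (the path follows the date > animal ID > run layout above).
with open("2019_03_09/lily/run/fte.pickle", "rb") as f:
    fte_data = pickle.load(f)

# Inspect what the file actually contains before relying on specific keys.
print(type(fte_data))
if isinstance(fte_data, dict):
    for key, value in fte_data.items():
        print(key, getattr(value, "shape", type(value)))
```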
The following sections document how this dataset was created using the code in this repo:
Pre-trained DeepLabCut Model:
- You can use the `full_cheetah` model provided in the DLC Model Zoo to re-create the existing H5 files (or to run on new videos).
- We also already provide the videos and H5 outputs of all frames here.
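As a rough sketch of how one might apply the Model Zoo model from DeepLabCut's Python API (the exact arguments can differ between DLC versions, so check the DLC docs; the project name, experimenter, and video path are placeholders):

```python
import deeplabcut

# Create a project pre-loaded with the Model Zoo `full_cheetah` weights
# and analyze a video in one step.
config_path, _ = deeplabcut.create_pretrained_project(
    "cheetah_demo",            # project name (placeholder)
    "your_name",               # experimenter (placeholder)
    ["/path/to/cheetah.mp4"],  # video(s) to analyze (placeholder)
    model="full_cheetah",
    analyzevideo=True,
    createlabeledvideo=True,
)
```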
Labelling Cheetah Body Positions:
If you want to label more cheetah data, you can also do so within the DeepLabCut framework. We provide a conda file for an easy install, but please see the DeepLabCut repo for installation and usage instructions.
$ conda env create -f conda_envs/DLC.yml -n DLC
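Once the environment is active, the usual DeepLabCut labelling loop looks roughly like this (a sketch; the config path is a placeholder, and the DLC docs cover each step in detail):

```python
import deeplabcut

config_path = "/path/to/your/dlc_project/config.yaml"  # placeholder

# Extract frames from your videos, hand-label the cheetah keypoints,
# then visually verify the labels before training.
deeplabcut.extract_frames(config_path)
deeplabcut.label_frames(config_path)
deeplabcut.check_labels(config_path)
```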
AcinoSet Setup:
Navigate to the AcinoSet folder and build the environment:
$ conda env create -f conda_envs/acinoset.yml
Launch Jupyter Lab:
$ jupyter lab
Camera Calibration and 3D Reconstruction:
Intrinsic and Extrinsic Calibration:
Open `calib_with_gui.ipynb` and follow the instructions.
Alternatively, if the checkerboard points detected in `calib_with_gui.ipynb` are unsatisfactory, open `saveMatlabPointsForAcinoSet.m` in MATLAB and follow the instructions. Note that this requires MATLAB 2020b or later.
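For orientation, the intrinsic half of a checkerboard calibration can be sketched in plain OpenCV. This is an illustration of the idea, not the notebook's code; the board size and frame directory are placeholders:

```python
import glob
import cv2
import numpy as np

# Checkerboard geometry: number of inner corners (placeholder values).
board_size = (9, 6)
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_frames/*.png"):  # placeholder directory
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix, dist the lens distortion coefficients.
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None
)
print("RMS reprojection error:", rms)
```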
Optional: manually defining the shared points for extrinsic calibration:
You can manually define points on each video in a scene with Argus Clicker. A quick tutorial is found here.
Build the environment:
$ conda env create -f conda_envs/argus.yml
Launch Argus Clicker:
$ python
>>> import argus_gui as ag; ag.ClickerGUI()
Keyboard shortcuts (see documentation here for more):
- G ... to go to a specific frame
- X ... to switch the sync mode, setting the windows to the same frame
- O ... to bring up the options dialog
- S ... to bring up a save dialog
Then you must convert the output data from Argus to work with the rest of the pipeline (here is an example):
$ python argus_converter.py \
--data_dir ../data/2019_03_07/extrinsic_calib/argus_folder
3D Reconstruction:
To reconstruct a cheetah into 3D, we offer three different pose estimation options on top of standard triangulation (TRI; a minimal triangulation sketch follows this list):
- Sparse Bundle Adjustment (SBA)
- Extended Kalman Filter (EKF)
- Full Trajectory Estimation (FTE)
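For reference, the core operation of the TRI baseline, two-view triangulation from calibrated cameras, can be written in a few lines of OpenCV. This is a generic sketch rather than the repo's implementation; it assumes you already have 3x4 projection matrices (e.g. derived from the scene's `(n)_cam_scene_sba.json`) and matched 2D keypoints:

```python
import cv2
import numpy as np

def triangulate(P1, P2, pts1, pts2):
    """P1, P2: 3x4 camera projection matrices.
    pts1, pts2: matching 2D keypoints as 2xN float arrays."""
    pts_4d = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous
    return (pts_4d[:3] / pts_4d[3]).T                   # Nx3 world points
```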
You can run each option separately. For example, simply open `FTE.ipynb` and follow the instructions!
Otherwise, you can run all types of refinements in one go:
$ python all_optimizations.py --data_dir 2019_03_09/lily/run --start_frame 70 --end_frame 170 --dlc_thresh 0.5
NB: When running the FTE, we recommend that you use the MA86 solver. For details on how to set this up, see these instructions.
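If your FTE setup drives IPOPT through Pyomo, selecting MA86 is a one-line solver option. This is a sketch, and it assumes your IPOPT binary was built against the HSL routines:

```python
from pyomo.environ import SolverFactory

# Ask IPOPT to use the HSL MA86 linear solver (requires an HSL-enabled build).
solver = SolverFactory("ipopt")
solver.options["linear_solver"] = "ma86"
# results = solver.solve(model, tee=True)  # `model` is your FTE Pyomo model
```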
Citation
If you use our code or data, kindly cite us (note the paper is accepted to ICRA 2021, so please check back for an updated reference):
@misc{joska2021acinoset,
title={AcinoSet: A 3D Pose Estimation Dataset and Baseline Models for Cheetahs in the Wild},
author={Daniel Joska and Liam Clark and Naoya Muramatsu and Ricardo Jericevich and Fred Nicolls and Alexander Mathis and Mackenzie W. Mathis and Amir Patel},
year={2021},
eprint={2103.13282},
archivePrefix={arXiv},
primaryClass={cs.CV}
}