
Deep Learning for 3D Point Cloud Understanding: A Survey

Our survey paper [[arXiv](https://arxiv.org/abs/2009.08920)]:

@article{lu2020deep,
  title={Deep Learning for 3D Point Cloud Understanding: A Survey},
  author={Lu, Haoming and Shi, Humphrey},
  journal={arXiv preprint arXiv:2009.08920},
  year={2020}
}

Content

Datasets

Metrics

| Name | Formula | Explanation |
| --- | --- | --- |
| Accuracy | $\text{Acc} = \frac{TP + TN}{TP + TN + FP + FN}$ | Accuracy indicates how many predictions are correct over all predictions. "Overall accuracy (OA)" indicates the accuracy on the entire dataset. |
| mAcc | $\text{mAcc} = \frac{1}{C}\sum_{i=1}^{C} \text{Acc}_i$ | The mean of accuracy over the $C$ categories, useful when the categories are imbalanced. |
| Precision | $P = \frac{TP}{TP + FP}$ | The ratio of correct positive predictions over all positive predictions. |
| Recall | $R = \frac{TP}{TP + FN}$ | The ratio of correct positive predictions over all positive samples in the ground truth. |
| F1 Score | $F_1 = \frac{2 \cdot P \cdot R}{P + R}$ | The harmonic mean of precision and recall. |
| IoU | $\text{IoU}_i = \frac{TP_i}{TP_i + FP_i + FN_i}$ | Intersection over Union (of class/instance $i$). The intersection and union are calculated between the prediction and the ground truth. |
| mIoU | $\text{mIoU} = \frac{1}{C}\sum_{i=1}^{C} \text{IoU}_i$ | The mean of IoU over all classes/instances. |
| MOTA | $\text{MOTA} = 1 - \frac{FP + FN + IDS}{TP + FN}$ | Multi-object tracking accuracy (MOTA) synthesizes three error sources: false positives, missed targets, and identity switches (IDS); the number of ground-truth objects ($TP + FN$) is used for normalization. |
| MOTP | $\text{MOTP} = \frac{\sum_{i,t} d_t^i}{\sum_t c_t}$ | Multi-object tracking precision (MOTP) indicates the precision of localization. $c_t$ denotes the number of matches at time $t$, and $d_t^i$ denotes the error of the $i$-th matched pair at time $t$. |
| EPE | $\text{EPE} = \frac{1}{N}\sum_{j=1}^{N} \lVert \hat{f}_j - f_j \rVert_2$ | End point error (EPE) is used in scene flow estimation, also referred to as EPE2D/EPE3D for 2D/3D data respectively. $\hat{f}_j$ denotes the predicted scene flow vector of point $j$ and $f_j$ the ground truth. |
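As an illustration of the metrics above, here is a minimal NumPy sketch of per-class IoU, mIoU, and EPE (the function names and array layout are my own choices, not from any particular benchmark toolkit):

```python
import numpy as np

def per_class_iou(pred, gt, num_classes):
    """Per-class IoU_i = TP_i / (TP_i + FP_i + FN_i) for integer label arrays.

    Classes absent from both prediction and ground truth get NaN,
    so they can be excluded from the mean.
    """
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        denom = tp + fp + fn
        if denom > 0:
            ious[c] = tp / denom
    return ious

def mean_iou(pred, gt, num_classes):
    """mIoU: mean of per-class IoU, ignoring classes that never appear."""
    return np.nanmean(per_class_iou(pred, gt, num_classes))

def epe(pred_flow, gt_flow):
    """End point error: mean L2 distance between predicted and
    ground-truth flow vectors, arrays of shape (N, 2) or (N, 3)."""
    return np.mean(np.linalg.norm(pred_flow - gt_flow, axis=-1))
```

For example, `mean_iou(np.array([0, 0, 1, 1]), np.array([0, 1, 1, 1]), 2)` averages an IoU of 1/2 for class 0 with 2/3 for class 1.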

Papers (up to ECCV 2020)

3D Object Classification

Projection-based classification

Point-based classification

3D Segmentation

Semantic segmentation

Instance segmentation

Joint training

3D Object Detection

Projection-based detection

Point-based detection

Multi-view fusion

3D Object Tracking

3D Scene Flow Estimation

3D Point Registration and Matching

Point Cloud Augmentation and Completion

Discriminative methods

Generative methods