
Revisiting Point Cloud Classification: A New Benchmark Dataset and Classification Model on Real-World Data

Mikaela Angelina Uy, Quang-Hieu Pham, Binh-Son Hua, Duc Thanh Nguyen and Sai-Kit Yeung

ICCV 2019 Oral Presentation


Introduction

This work revisits the problem of point cloud classification, but on real-world scans as opposed to synthetic models such as ModelNet40 that were studied in other recent works. We introduce ScanObjectNN, a new benchmark dataset containing ~15,000 objects categorized into 15 categories, with 2,902 unique object instances. The raw objects are represented by a list of points with global and local coordinates, normals, color attributes, and semantic labels. We also provide part annotations, which to the best of our knowledge are the first on real-world data. From our comprehensive benchmark, we show that our dataset poses great challenges to existing point cloud classification techniques, as objects from real-world scans are often cluttered with background and/or partial due to occlusions. Our project page can be found here, and the arXiv version of our paper can be found here.

@inproceedings{uy-scanobjectnn-iccv19,
      title = {Revisiting Point Cloud Classification: A New Benchmark Dataset and Classification Model on Real-World Data},
      author = {Mikaela Angelina Uy and Quang-Hieu Pham and Binh-Son Hua and Duc Thanh Nguyen and Sai-Kit Yeung},
      booktitle = {International Conference on Computer Vision (ICCV)},
      year = {2019}
  }

ScanObjectNN Dataset


We provide different variants of our scan dataset, namely OBJ_BG, PB_T25, PB_T25_R, PB_T50_R, and PB_T50_RS, as described in our paper. We release both the processed .h5 files and the raw .bin objects, as described below.

h5 files
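
Once downloaded, the processed files can be inspected with h5py. Below is a minimal loading sketch; the file name and the data/label keys are assumptions based on common PointNet-style .h5 layouts, so check f.keys() against the files you actually downloaded.

import h5py
import numpy as np

# Hypothetical path; point this at one of the downloaded .h5 splits.
filename = 'path/to/scanobjectnn_training_split.h5'

with h5py.File(filename, 'r') as f:
    print(list(f.keys()))            # inspect which keys the split provides
    points = np.array(f['data'])     # assumed: (num_objects, num_points, 3) coordinates
    labels = np.array(f['label'])    # assumed: (num_objects,) category indices

print(points.shape, labels.shape)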

Raw files

We release all the raw object files of our ScanObjectNN dataset including all its variants.

Parts:

Code

Installation

Pre-requisites:

This code has been tested with Python 3.5, TensorFlow 1.10, and CUDA 9.0 on Ubuntu 16.04. Please follow the instructions in PointNet++ to compile tf_ops in the pointnet2/ and SpiderCNN/ subfolders.
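
As a quick sanity check of the environment (a sketch, not part of the released code), you can verify the TensorFlow version and GPU visibility before compiling the custom ops; the tf_ops path in the comment is an assumed location following the PointNet++ layout.

import tensorflow as tf

print(tf.__version__)               # expected: 1.10.x
print(tf.test.is_gpu_available())   # requires a working CUDA 9.0 install

# After compiling tf_ops, the resulting shared libraries (e.g. an assumed
# pointnet2/tf_ops/sampling/tf_sampling_so.so) should load without errors:
# tf.load_op_library('pointnet2/tf_ops/sampling/tf_sampling_so.so')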

Usage

Training

To train the benchmark classification models, run the following commands:

cd [method_folder]
python train.py

To see optional arguments, run:

cd [method_folder]
python train.py -h

To train using our BGA models, run:

cd [dgcnn or pointnet2]
python train_seg.py

The model files are pointnet2_cls_bga.py and dgcnn_bga.py.

Evaluation

To evaluate the benchmark classification models, run the following commands:

cd [method_folder]
python evaluate_scenennobjects.py

To evaluate our BGA models, run:

cd [dgcnn or pointnet2]
python evaluate_seg_scenennobjects.py

Generalization of real vs synthetic

To evaluate on ScanObjectNN when trained on ModelNet, run:

cd [method_folder]
python evaluate_real_trained_on_synthetic.py

To evaluate on ModelNet when trained on ScanObjectNN, run:

cd [method_folder]
python evaluate_synthetic_trained_on_real.py

The class mapping file can be found at mapping2.py; details can be found in our supplementary material. Before running these experiments, please make sure you have the trained model files and a single .h5 file for the ModelNet data, and specify the arguments accordingly.
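
For illustration, the cross-dataset evaluation boils down to remapping class indices from one label set to the other and keeping only the shared classes before computing accuracy. The sketch below is hypothetical; the dictionary entries are placeholders, and the actual correspondence is defined in mapping2.py.

import numpy as np

# Illustrative mapping from source-dataset class indices to target-dataset
# indices; the real correspondence lives in mapping2.py.
SOURCE_TO_TARGET = {0: 3, 1: 7, 2: 0}  # hypothetical entries

def remap_labels(labels, mapping):
    # Keep only samples whose class exists in both label sets, then remap them.
    labels = np.asarray(labels)
    keep = np.array([l in mapping for l in labels])
    remapped = np.array([mapping[l] for l in labels[keep]])
    return keep, remapped

# Example: restrict evaluation to the shared classes.
keep, target_labels = remap_labels([0, 2, 5, 1], SOURCE_TO_TARGET)
print(keep, target_labels)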

Pre-trained Models

Pre-trained models can be downloaded here.

FAQ

Some commonly asked questions regarding our dataset and project can be found here. For any other inquiries, feel free to open a GitHub issue.

References

Our released code is heavily based on each method's original repository, as cited below:

License

This repository is released under the MIT License (see the LICENSE file for details).