PointASNL

This repository is for PointASNL, introduced in the following paper:

Xu Yan, Chaoda Zheng, Zhen Li*, Sheng Wang and Shuguang Cui, "PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling", CVPR 2020 [arxiv].

If you find our work useful in your research, please consider citing:

@inproceedings{yan2020pointasnl,
  title={Pointasnl: Robust point clouds processing using nonlocal neural networks with adaptive sampling},
  author={Yan, Xu and Zheng, Chaoda and Li, Zhen and Wang, Sheng and Cui, Shuguang},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5589--5598},
  year={2020}
}

Getting Started

(1) Set up

Clone the repository:

git clone https://github.com/yanx27/PointASNL.git

The code has been tested on Ubuntu 16.04 with CUDA 10.

(2) ModelNet40 Classification

The aligned ModelNet40 dataset can be found here. Due to the randomness of data augmentation, the results of this code may differ slightly from those reported in the paper, but accuracy should be around 93%.
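
Most of this run-to-run variance comes from the random augmentation applied to each training batch. Below is a minimal sketch of typical point-cloud augmentation (random rotation about the up axis, global scaling, and clipped Gaussian jitter); the axis convention and parameter values are illustrative and not necessarily those used in this repository.

import numpy as np

def augment_point_cloud(points, scale_low=0.8, scale_high=1.25,
                        jitter_sigma=0.01, jitter_clip=0.05):
    # points: (N, 3) array of xyz coordinates
    # Random rotation around the up axis (z here; purely illustrative).
    theta = np.random.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    points = points @ rot.T
    # Random global scaling.
    points = points * np.random.uniform(scale_low, scale_high)
    # Small Gaussian jitter, clipped so no point moves too far.
    jitter = np.clip(jitter_sigma * np.random.randn(*points.shape),
                     -jitter_clip, jitter_clip)
    return points + jitter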

Data without Noise

The first epoch takes a relatively long time because of cache construction.

# Training 
$ python train.py --data [MODELNET40 PATH] --exp_dir PointASNL_without_noise

# Evaluation 
$ python test.py --data [MODELNET40 PATH] --model_path log/PointASNL_without_noise/best_model.ckpt

Data with Noise

The model with the AS (adaptive sampling) module is highly robust to noisy data. You can enable adaptive sampling with the --AS flag.

# Training 
$ python train.py --data [MODELNET40 PATH] --exp_dir PointASNL_with_noise --AS

# Evaluation on noisy data 
$ python test.py --data [MODELNET40 PATH]  --model_path log/PointASNL_with_noise/best_model.ckpt --AS --noise
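
For intuition on why adaptive sampling helps with noisy data: each initially sampled point is shifted toward a weighted combination of its local neighbors, so an isolated outlier gets pulled back toward the underlying surface. The rough sketch below uses uniform neighbor weights as a stand-in for the learned attention weights of the AS module; it is a simplification, not the repository's implementation.

import numpy as np

def adaptive_resample(points, sampled_idx, k=8):
    # points: (N, 3); sampled_idx: indices chosen by e.g. farthest point sampling.
    # Replace each sampled point with a weighted average of its k nearest neighbors.
    sampled = points[sampled_idx]                                   # (S, 3)
    d2 = ((sampled[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # (S, N) squared distances
    knn_idx = np.argsort(d2, axis=1)[:, :k]                         # (S, k) nearest neighbors
    neighbors = points[knn_idx]                                     # (S, k, 3)
    # Uniform weights stand in for the AS module's learned, normalized weights.
    weights = np.full((sampled.shape[0], k, 1), 1.0 / k)
    return (weights * neighbors).sum(axis=1)                        # (S, 3) adjusted coordinates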

(3) ScanNet Segmentation

We provide two options for training on the ScanNet dataset (with or without pre/post-processing). With grid-sampling pre-processing, more input points, and a deeper network structure, PointASNL achieves 66.6% mIoU on the ScanNet benchmark.

Data Preparation

The official ScanNet dataset can be downloaded here. If you choose to train without grid sampling, you first need to run ScanNet/prepare_scannet.py; otherwise you can skip straight to the training step.

Data without Processing

This method converges more slowly and achieves results around 63% mIoU.

# Training 
$ cd ScanNet/
$ python train_scannet.py --data [SCANNET PATH] --log_dir PointASNL

# Evaluation 
$ cd ScanNet/
$ python test_scannet.py --data [SCANNET PATH]  --model_path log/PointASNL/latest_model.ckpt 

Data with Grid Sampling

We highly recommend training with this method. Although processing the raw data takes a long time, it achieves results around 66% mIoU and converges faster. Grid-sampling pre-processing is conducted automatically before training.
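
For reference, grid sampling keeps roughly one point per voxel of a fixed-size 3D grid, which bounds the point density of large indoor scenes. A minimal numpy sketch is shown below; the voxel size and the centroid-averaging strategy are illustrative, and this is not the repository's pre-processing code.

import numpy as np

def grid_sample(points, voxel_size=0.04):
    # points: (N, 3). Keep the centroid of every occupied voxel.
    voxel_coords = np.floor(points / voxel_size).astype(np.int64)   # integer voxel ids per point
    _, inverse, counts = np.unique(voxel_coords, axis=0,
                                   return_inverse=True, return_counts=True)
    centroids = np.zeros((counts.shape[0], 3))
    np.add.at(centroids, inverse, points)                           # sum the points in each voxel
    return centroids / counts[:, None]                              # average -> one point per voxel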

# Training 
$ cd ScanNet/
$ python train_scannet_grid.py --data [SCANNET PATH] --log_dir PointASNL_grid --num_point 10240 --model pointasnl_sem_seg_res --in_radius 2

# Evaluation 
$ cd ScanNet/
$ python test_scannet_grid.py --data [SCANNET PATH]  --model_path log/PointASNL_grid/latest_model.ckpt 

Pre-trained Model

| Model                 | mIoU  | Download      |
|-----------------------|-------|---------------|
| pointasnl_sem_seg_res | 66.93 | ckpt (163.9M) |

(4) SemanticKITTI Segmentation

# Training 
$ cd SemanticKITTI/
$ python train_semantic_kitti.py --data [SemanticKITTI PATH] --log_dir PointASNL --with_remission
# or
$ python train_semantic_kitti_grid.py --data [SemanticKITTI PATH] --log_dir PointASNL_grid --prepare_data 

# Evaluation 
$ cd SemanticKITTI/
$ python test_semantic_kitti.py --data [SemanticKITTI PATH]  --model_path log/PointASNL/latest_model.ckpt  --with_remission
# or
$ python test_semantic_kitti_grid.py --data [SemanticKITTI PATH] --model_path log/PointASNL_grid/best_model.ckpt --test_area [e.g., 08]
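
For reference when working with SemanticKITTI, each scan is a binary file of float32 values with four channels per point (x, y, z, remission), and each label file stores one uint32 per point whose lower 16 bits are the semantic class. A minimal loading sketch, independent of this repository's data loaders:

import numpy as np

def load_semantic_kitti_scan(scan_path, label_path=None):
    # Scan: flat float32 buffer reshaped to (N, 4) -> x, y, z, remission.
    scan = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)
    points, remission = scan[:, :3], scan[:, 3]
    labels = None
    if label_path is not None:
        # Label: one uint32 per point; lower 16 bits = semantic class, upper 16 bits = instance id.
        raw = np.fromfile(label_path, dtype=np.uint32)
        labels = raw & 0xFFFF
    return points, remission, labels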

Acknowledgement

License

This repository is released under the MIT License (see the LICENSE file for details).