JS3C-Net
This is a forked version of JS3C-Net for training on the CarlaSC dataset. You can find the results in our paper.
Getting started with JS3C-Net on the CarlaSC dataset
You can find information about the data and download instructions on our CarlaSC dataset website. You can also check our models for scene completion in the 3DMapping repo.
Dependencies
The dependencies are the same as those listed in the original JS3C-Net repo. We tried our best to adapt it to newer versions of the CUDA toolkit, PyTorch, and so on, but failed, so we released the Docker image we used to run JS3C-Net. The image can be downloaded from the drive. Everything needed to run JS3C-Net is already installed, and the repo can be found in `/home`. The Docker command to obtain a container from this image is provided in `docker_command.bash`.
Training
- To get started with the CarlaSC dataset, make sure you download it and unzip it.
- We provide a script `train_js3c_carla.py` for using JS3C-Net.
- We provide two types of settings files. The default is the reduced-label setting; you can enable the all-label setting by changing the config file path to `carla_all.yaml` in `/opt`. You can find more information about the two settings in our paper.
- Change the `data_dir` variable in the notebook. We use a `TODO` comment to make it stand out.
- Change the `TEST` flag to `False` in the notebook. We use a `TODO` comment to make it stand out.
- A folder containing the training log, weights, etc. will be created in the `/Runs` folder.
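As a minimal illustration only, the TODO-marked settings described above might look like this inside the training script. The names `data_dir` and `TEST` come from this README; the path, the `config_file` variable, and the exact layout are placeholders, not the script's actual contents:

```python
# Hypothetical sketch of the TODO-marked settings in train_js3c_carla.py.
data_dir = "/path/to/CarlaSC"        # TODO: point at your unzipped CarlaSC dataset
TEST = False                         # TODO: keep False for training
config_file = "opt/carla_all.yaml"   # optional: switch from the reduced-label default

print(data_dir, TEST, config_file)
```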
Testing
- We include the testing function in the script `test_js3c_carla.py`.
- Change the `MODEL_DIR` variable in the notebook to load the specific weights. We use a `TODO` comment to make it stand out.
- Change the `TEST` flag to `True` in the notebook. We use a `TODO` comment to make it stand out.
Our SC models
You can check our MotionSC model and other implementations of SOTA SC models on the 3DMapping repo.
<br /> <br />
Sparse Single Sweep LiDAR Point Cloud Segmentation via Learning Contextual Shape Priors from Scene Completion (AAAI2021)
This repository is for JS3C-Net introduced in the following AAAI-2021 paper [arxiv paper]
Xu Yan, Jiantao Gao, Jie Li, Ruimao Zhang, Zhen Li*, Rui Huang and Shuguang Cui, "Sparse Single Sweep LiDAR Point Cloud Segmentation via Learning Contextual Shape Priors from Scene Completion".
- Semantic Segmentation and Semantic Scene Completion:
If you find our work useful in your research, please consider citing:
@inproceedings{yan2021sparse,
title={Sparse Single Sweep LiDAR Point Cloud Segmentation via Learning Contextual Shape Priors from Scene Completion},
author={Yan, Xu and Gao, Jiantao and Li, Jie and Zhang, Ruimao and Li, Zhen and Huang, Rui and Cui, Shuguang},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={35},
number={4},
pages={3101--3109},
year={2021}
}
Getting Started
Set up
Clone the repository:
git clone https://github.com/yanx27/JS3C-Net.git
Installation instructions for Ubuntu 16.04:
- Make sure <a href="https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html">CUDA</a> and <a href="https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html">cuDNN</a> are installed. Only this configuration has been tested:
- Python 3.6.9, PyTorch 1.3.1, CUDA 10.1;
- Compile the customized operators by running `sh complile.sh` in `/lib`.
- Install spconv 1.0 in `/lib/spconv`. We use the same version as PointGroup; you can install it according to their instructions. Higher versions of spconv may cause issues.
Data Preparation
- SemanticKITTI and SemanticPOSS datasets can be found in semantickitti-page and semanticposs-page.
- Download the files related to semantic segmentation and extract everything into the same folder.
- Use the voxelizer to generate ground truth for semantic scene completion, with the following parameters. We provide pre-processed SemanticPOSS SSC labels here.
min range: 2.5
max range: 70
future scans: 70
min extent: [0, -25.6, -2]
max extent: [51.2, 25.6, 4.4]
voxel size: 0.2
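As a quick sanity check (not part of the repo), the extents and voxel size above imply the standard 256 × 256 × 32 SemanticKITTI SSC grid:

```python
# Derive the completion grid shape from the voxelizer parameters above.
min_extent = (0.0, -25.6, -2.0)
max_extent = (51.2, 25.6, 4.4)
voxel_size = 0.2

dims = tuple(round((hi - lo) / voxel_size) for lo, hi in zip(min_extent, max_extent))
print(dims)  # (256, 256, 32)
```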
- Finally, the dataset folder should be organized as follows.
SemanticKITTI(POSS)
├── dataset
│ ├── sequences
│ │ ├── 00
│ │ │ ├── labels
│ │ │ ├── velodyne
│ │ │ ├── voxels
│ │ │ ├── [OTHER FILES OR FOLDERS]
│ │ ├── 01
│ │ ├── ... ...
- Note that the data for the official SemanticKITTI SSC benchmark contains only 1/5 of the whole sequence, and they provide all extracted SSC data for the training set here.
- (New) In this repo we use an old version of SemanticKITTI, and a bug in the SSC data generation introduces a wrong shift in the upward direction (see issue). Therefore, we add an additional shift to align with their old version of the dataset here; if you use the newest version of the data, you can delete it. You can also check the alignment ratio by using `--debug`.
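The alignment described above amounts to rolling the voxel grid along the upward (z) axis. The following is only a hedged sketch of that idea; the actual offset and implementation in this repo are determined by its alignment code, not by this helper:

```python
import numpy as np

def shift_up(voxel_grid, offset=1):
    """Shift an (X, Y, Z) voxel label grid upward by `offset` cells, zeroing the bottom slab."""
    shifted = np.roll(voxel_grid, offset, axis=2)
    shifted[:, :, :offset] = 0
    return shifted
```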
SemanticKITTI
Training
Run the following command to start training. Output (logs) will be redirected to `./logs/JS3C-Net-kitti/`. You can skip this step if you want to use our pretrained model in `./logs/JS3C-Net-kitti/`.
$ python train.py --gpu 0 --log_dir JS3C-Net-kitti --config opt/JS3C_default_kitti.yaml
Evaluation: Semantic Segmentation
Run the following command to evaluate the model on the validation or test set:
$ python test_kitti_segment.py --log_dir JS3C-Net-kitti --gpu 0 --dataset [val/test]
Evaluation: Semantic Scene Completion
Run the following command to evaluate the model on the validation or test set:
$ python test_kitti_ssc.py --log_dir JS3C-Net-kitti --gpu 0 --dataset [val/test]
SemanticPOSS
Results on SemanticPOSS can be obtained by running:
$ python train.py --gpu 0 --log_dir JS3C-Net-POSS --config opt/JS3C_default_POSS.yaml
$ python test_poss_segment.py --gpu 0 --log_dir JS3C-Net-POSS
Pretrained Model
We trained our model on a single Nvidia Tesla V100 GPU with batch size 6. If you want to train on a TITAN GPU, you can use batch size 2. Please modify `dataset_dir` in `args.txt` to your path.
Model | #Param | Segmentation | Completion | Checkpoint |
---|---|---|---|---|
JS3C-Net | 2.69M | 66.0 | 56.6 | 18.5MB |
Results on SemanticKITTI Benchmark
Quantitative results on the SemanticKITTI benchmark at submission time.
Acknowledgement
This project would not be possible without multiple great open-source codebases.
License
This repository is released under MIT License (see LICENSE file for details).