Hierarchical Point-based Active Learning for Semi-supervised Point Cloud Semantic Segmentation (ICCV 2023)
This is the official repository for Hierarchical Point-based Active Learning for Semi-supervised Point Cloud Semantic Segmentation [arXiv][CVF]
Environment Setup
- OS: Ubuntu 22.04
- CUDA: 11.7
- Conda environment (you can also refer to ReDAL for details; our code is built on their codebase, and we thank the authors for sharing it):
conda create -n HPAL python=3.6 -y
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=11.1 -c pytorch -c conda-forge -y
conda install pytorch-scatter -c pyg -y
conda install scikit-learn=0.24.2 -y
conda install pyyaml=5.3.1 -y
conda install tqdm=4.61.1 -y
conda install pandas=1.3.2 -y
conda install pyntcloud -c conda-forge -y
conda install plyfile -c conda-forge -y
conda install cython -y
conda install h5py==2.10.0 -y
pip3 install open3d-python==0.3.0
pip3 install --upgrade git+https://github.com/mit-han-lab/torchsparse.git@v1.2.0 (depends on libsparsehash-dev; see below)
Install libsparsehash-dev (without sudo permissions):
git clone https://github.com/sparsehash/sparsehash.git
cd sparsehash
./configure --prefix=/path/you/want/local
make
make install
# open ~/.bashrc with your editor (gedit, vim, ...) and add the following line:
export CPLUS_INCLUDE_PATH=/path/you/want/local/include
source ~/.bashrc
Install libsparsehash-dev (with sudo permissions):
sudo apt-get install libsparsehash-dev
- Compile the C++-related utils:
sh compile_op.sh
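After compiling, a quick sanity check saves debugging time later. This is a minimal, repo-agnostic sketch; it only verifies that the key packages import and that PyTorch sees the GPU:

```python
# Minimal environment sanity check (not part of this repo):
# confirms torch and torchsparse import and that CUDA is visible.
import torch
import torchsparse  # import only succeeds if torchsparse compiled correctly

print('torch:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
```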
Data Preparation
- Fill out the Google form to get the dataset download link (download Stanford3dDataset_v1.2_Aligned_Version.zip).
- S3DIS data preprocessing:
- Extract Stanford3dDataset_v1.2_Aligned_Version.zip.
- Modify STANFORD_3D_IN_PATH and STANFORD_3D_OUT_PATH in data_preparation/data_prepare_s3dis.py and run it:
python3 data_prepare_s3dis.py
After that, the files will be organized as follows:
S3DIS/
├── Area_1
│ ├── coords
│ │ ├── conferenceRoom_1.npy
│ │ ...
│ ├── labels
│ │ ├── conferenceRoom_1.npy
│ │ ...
│ ├── rgb
│ │ ├── conferenceRoom_1.npy
│ │ ...
│ └── proj
│ ├── conferenceRoom_1.pkl
│ ...
├── Area_2
...
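For reference, each room is stored as per-attribute NumPy arrays plus a projection pickle. Below is a hedged loading sketch; the exact array shapes and pickle contents are assumptions based on the layout above, not guarantees of the preprocessing script:

```python
# Hedged sketch: load one preprocessed room from the layout shown above.
import pickle
import numpy as np

room = 'S3DIS/Area_1'
coords = np.load(f'{room}/coords/conferenceRoom_1.npy')  # per-point XYZ (assumed)
rgb = np.load(f'{room}/rgb/conferenceRoom_1.npy')        # per-point colors (assumed)
labels = np.load(f'{room}/labels/conferenceRoom_1.npy')  # per-point class ids (assumed)
with open(f'{room}/proj/conferenceRoom_1.pkl', 'rb') as f:
    proj = pickle.load(f)                                # projection indices (assumed)

print(coords.shape, rgb.shape, labels.shape)
```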
Training and Inference
A. Active semi-supervised training (Our method):
- Build the initial labelled data (randomly label a small portion of the data):
  - Modify root in init_labeled_points_s3dis.py to the processed data path.
  - Modify save_path in init_labeled_points_s3dis.py to your desired result save path.
  - Modify labeled_num_cur_pc in init_labeled_points_s3dis.py to the number of initial labelled points for each point cloud.
  - Run init_labeled_points_s3dis.py:
    python init_labeled_points_s3dis.py
Then, in your save_path, you will get a dictionary in which each item stores a boolean array indicating whether each point is labelled.
You can also download the initial labelled data that we used for the reported experiments in our paper from Google Drive. The amount of initial labelled data is 1/5 of the total labelling budget in each setting.
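To illustrate the expected mask format, here is a hedged sketch of building such a dictionary yourself. The variable names mirror the script parameters above, but the dictionary keys, the output file name (init_labeled.pkl), and the exact selection logic of init_labeled_points_s3dis.py are assumptions:

```python
# Hedged sketch: randomly mark labeled_num_cur_pc points per room as labelled
# and save one boolean mask per room in a dictionary (format is an assumption).
import os
import pickle
import numpy as np

root = '/your_data_path/S3DIS'     # processed data path (as prepared above)
save_path = '/initial_data_index'  # where the mask dictionary is written
labeled_num_cur_pc = 100           # initial labelled points per point cloud (example value)

masks = {}
for area in sorted(os.listdir(root)):
    coords_dir = os.path.join(root, area, 'coords')
    if not os.path.isdir(coords_dir):
        continue
    for fname in sorted(os.listdir(coords_dir)):
        num_points = len(np.load(os.path.join(coords_dir, fname)))
        mask = np.zeros(num_points, dtype=bool)
        mask[np.random.choice(num_points, labeled_num_cur_pc, replace=False)] = True
        masks['{}/{}'.format(area, fname)] = mask

os.makedirs(save_path, exist_ok=True)
with open(os.path.join(save_path, 'init_labeled.pkl'), 'wb') as f:
    pickle.dump(masks, f)
```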
- Modify the configuration parameters under class ConfigS3DIS in config.py; the following parameters must be changed:
  data_path = '/your_data_path'  # Data root path after preparation
  init_labeled_data = '/initial_data_index'  # Path of the initial labelled data dictionary
  base_path = '/results'  # Path to save the training results
  active_strategy = 'HMMU'  # Scoring strategy for active learning
  chosen_rate_AL = 0.02  # Percentage of points selected in each iteration, i.e. 1/5 of the total labelling budget in our setting
- Training:
python s3dis_main.py --test_area 5 --mode AL_train
B. Fully supervised training (upper bound):
python s3dis_main.py --test_area 5 --mode baseline_train
C. Inference (Test any trained model):
You can test any trained model with the following steps:
- Get a trained model and save it to disk.
- Run s3dis_main.py:
python s3dis_main.py --mode test --model_path the/path/of/the/trained/model
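If a checkpoint fails to load, inspecting it first can help. This is a generic PyTorch sketch, not part of the repo; the stored key layout is an assumption:

```python
# Hedged sketch: peek inside a checkpoint before passing it to --model_path.
import torch

ckpt = torch.load('the/path/of/the/trained/model', map_location='cpu')
if isinstance(ckpt, dict):
    print('keys:', list(ckpt.keys())[:10])  # e.g. a state_dict or wrapper dict (assumed)
else:
    print(type(ckpt))
```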
Pretrained Models
We provide pretrained models for the results reported in our paper; you can download them through the following links:
| label setting | 0.02% | 0.07% | 0.43% |
|---|---|---|---|
| mIoU (%) | 55.9 | 62.3 | 65.7 |
| link | download | download | download |
Visualization
Paper and Citation
If you find our paper useful, please cite:
@InProceedings{Xu_2023_ICCV,
author = {Xu, Zongyi and Yuan, Bo and Zhao, Shanshan and Zhang, Qianni and Gao, Xinbo},
title = {Hierarchical Point-based Active Learning for Semi-supervised Point Cloud Semantic Segmentation},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2023},
pages = {18098-18108}
}