
Toronto-3D: A Large-scale Mobile LiDAR Dataset for Semantic Segmentation of Urban Roadways

[Paper] [Download] [Results] [Codes]


Toronto-3D is a large-scale urban outdoor point cloud dataset for semantic segmentation, acquired with a mobile laser scanning (MLS) system in Toronto, Canada. The dataset covers approximately 1 km of road and contains about 78.3 million points. An overview of the dataset and its tiles is shown below. The approximate location of the dataset is (43.726, -79.417).

[Image: overview of the dataset and its tiles]

Each point has 10 attributes and is classified into one of 8 labelled object classes. See the data preparation tip on handling the UTM coordinates to avoid precision problems, and the list of known issues.

Details on the dataset can be found in the CVPRW2020 paper. Revisions to the labels will lead to results that differ from those in the published paper; updated results will be posted here.

If you have questions or suggestions to help us improve the dataset, please contact Weikai Tan.


<a name="results"></a> Semantic segmentation results (%)

More results to be added

Default: point coordinates only

| Method | OA | mIoU | Road | Road mrk. | Natural | Bldg | Util. line | Pole | Car | Fence |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PointNet++ | 84.88 | 41.81 | 89.27 | 0.00 | 69.0 | 54.1 | 43.7 | 23.3 | 52.0 | 3.0 |
| PointNet++ MSG | 92.56 | 59.47 | 92.90 | 0.00 | 86.13 | 82.15 | 60.96 | 62.81 | 76.41 | 14.43 |
| PointNet++ \* | 91.66 | 58.01 | 92.71 | 7.68 | 84.30 | 81.83 | 67.44 | 63.30 | 60.92 | 5.92 |
| DGCNN | 94.24 | 61.79 | 93.88 | 0.00 | 91.25 | 80.39 | 62.40 | 62.32 | 88.26 | 15.81 |
| KPFCNN | 95.39 | 69.11 | 94.62 | 0.06 | 96.07 | 91.51 | 87.68 | 81.56 | 85.66 | 15.72 |
| MS-PCNN | 90.03 | 65.89 | 93.84 | 3.83 | 93.46 | 82.59 | 67.80 | 71.95 | 91.12 | 22.50 |
| TGNet | 94.08 | 61.34 | 93.54 | 0.00 | 90.83 | 81.57 | 65.26 | 62.98 | 88.73 | 7.85 |
| MS-TGNet | 95.71 | 70.50 | 94.41 | 17.19 | 95.72 | 88.83 | 76.01 | 73.97 | 94.24 | 23.64 |
| RandLA-Net (Hu et al., 2021) | 92.95 | 77.71 | 94.61 | 42.62 | 96.89 | 93.01 | 86.51 | 78.07 | 92.85 | 37.12 |
| Rim et al., 2021 | 72.55 | 66.87 | 92.74 | 14.75 | 88.66 | 93.52 | 81.03 | 67.71 | 39.65 | 56.90 |
| MappingConvSeg (Yan et al., 2021) | 93.17 | 77.57 | 95.02 | 39.27 | 96.77 | 93.32 | 86.37 | 79.11 | 89.81 | 40.89 |
| DiffConv (Lin & Feragen, 2022) | - | 76.73 | 83.31 | 51.06 | 69.04 | 79.55 | 80.48 | 84.41 | 76.19 | 89.83 |
| EyeNet (Yoo et al., 2023) | 94.63 | 81.13 | 96.98 | 65.02 | 97.83 | 93.51 | 86.77 | 84.86 | 94.02 | 30.01 |
| LACV-Net (Zeng et al., 2024) | 95.8 | 78.5 | 94.8 | 42.7 | 96.7 | 91.4 | 88.2 | 79.6 | 93.9 | 40.6 |
| DCTNet (Lu et al., 2024) | - | 81.84 | 82.77 | 59.53 | 85.51 | 86.47 | 81.79 | 84.03 | 79.55 | 96.21 |

Use RGB

| Method | OA | mIoU | Road | Road mrk. | Natural | Bldg | Util. line | Pole | Car | Fence |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| RandLA-Net (Hu et al., 2021) (RGB) | 94.37 | 81.77 | 96.69 | 64.21 | 96.92 | 94.24 | 88.06 | 77.84 | 93.37 | 42.86 |
| Rim et al., 2021 (RGB) | 83.60 | 71.03 | 92.84 | 27.43 | 89.90 | 95.27 | 85.59 | 74.50 | 44.41 | 58.30 |
| MappingConvSeg (Yan et al., 2021) | 94.72 | 82.89 | 97.15 | 67.87 | 97.55 | 93.75 | 86.88 | 82.12 | 93.72 | 44.11 |
| ResDLPS-Net (Du et al., 2021) | 96.49 | 80.27 | 95.82 | 59.80 | 96.10 | 90.96 | 86.82 | 79.95 | 89.41 | 43.31 |
| LACV-Net (Zeng et al., 2024) | 97.4 | 82.7 | 97.1 | 66.9 | 97.3 | 93.0 | 87.3 | 83.4 | 93.4 | 43.1 |

Others

| Method | OA | mIoU | Road | Road mrk. | Natural | Bldg | Util. line | Pole | Car | Fence |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Han et al., 2021 (Intensity + Normal) | 93.60 | 70.80 | 92.20 | 53.80 | 92.80 | 86.00 | 72.20 | 72.50 | 75.70 | 21.20 |

\* uses the same radii and k as TGNet
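
For reference, OA is overall accuracy and mIoU is the mean intersection over union across the 8 classes. Below is a minimal NumPy sketch of both metrics, not taken from any particular evaluation script, assuming the label convention from the Classes section below (0 = unclassified, 1-8 = object classes):

```python
import numpy as np

def oa_miou(gt, pred, num_classes=8):
    """Overall accuracy and mean IoU over the labelled classes.

    gt, pred: integer label arrays; 0 = unclassified (ignored in scoring),
    1..num_classes = object classes.
    """
    mask = gt > 0                               # score labelled points only
    g, p = gt[mask] - 1, pred[mask] - 1         # shift labels to 0..num_classes-1
    cm = np.bincount(num_classes * g + p,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    oa = np.trace(cm) / cm.sum()
    # per-class IoU = TP / (TP + FP + FN); absent classes yield NaN and are skipped
    iou = np.diag(cm) / (cm.sum(axis=0) + cm.sum(axis=1) - np.diag(cm))
    return 100.0 * oa, 100.0 * np.nanmean(iou)
```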


<a name="code"></a> Codes for training your own network


<a name="attributes"></a> Point cloud attributes

<a name="classes"></a> Classes


<a name="tip"></a> Data preparation tip

The XY coordinates are stored in UTM format. The Y coordinate can exceed the number of significant digits representable by the single-precision float type commonly used in point cloud processing algorithms. Reading and processing the coordinates directly can therefore lose detail and produce wrong geometric features.

I set a UTM_OFFSET = [627285, 4841948, 0] to subtract from the raw coordinates; any other offset that reduces the number of digits works too.
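
A minimal NumPy illustration (the values and helper name are hypothetical) of why the subtraction must happen in float64 before any cast to float32:

```python
import numpy as np

UTM_OFFSET = np.array([627285.0, 4841948.0, 0.0])

def apply_utm_offset(xyz_raw):
    """Shift raw UTM coordinates toward the origin while still in float64,
    then downcast; casting the raw values first would already lose detail."""
    return (np.asarray(xyz_raw, dtype=np.float64) - UTM_OFFSET).astype(np.float32)

# float32 resolves only ~7 significant digits, so a raw Y near 4.8e6 is
# rounded to the nearest 0.5 m; after the offset the error is negligible.
y = np.float64(4841955.123456)
print(float(np.float32(y)) - y)                             # error on the order of 0.1 m
print(float(np.float32(y - 4841948.0)) - (y - 4841948.0))   # error on the order of 1e-7 m
```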

Example of potential issues during the grid_subsampling operation used in KPConv and RandLA-Net (both subsampled to a 6 cm grid):

[Images: without offset (left) vs. with offset (right)]
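
For context, grid subsampling keeps one point (here, the centroid) per occupied voxel. A simplified NumPy stand-in for grid_subsampling, not the compiled routine from the KPConv/RandLA-Net repos, shows why raw float32 UTM coordinates break a 6 cm grid: neighbouring points collapse onto the same float32 value, whose spacing near y ≈ 4.8e6 is about 0.5 m.

```python
import numpy as np

def grid_subsample(points, grid_size=0.06):
    """Average the points in each occupied voxel into one centroid.
    A simplified stand-in for grid_subsampling, for illustration only."""
    voxels = np.floor(points / grid_size).astype(np.int64)
    _, inverse, counts = np.unique(voxels, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)   # some NumPy versions return a 2-D inverse
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)
    return centroids / counts[:, None]

# With the offset applied, 6 cm voxels behave as expected; on raw float32
# coordinates the points themselves are quantized far coarser than 6 cm.
```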

<a name="issues"></a> Known issues

1. Point RGB assignments on taller vehicles.

2. Point RGB artifacts on moving vehicles.

3. Point acquisition on moving vehicles.


<a name="download"></a> Download

The dataset can be downloaded from OneDrive or Baidu Netdisk (access code: aewp). Check the Changelog for changes.

Toronto-3D belongs to the Mobile Sensing and Geodata Science Lab, University of Waterloo. Toronto-3D is distributed under the CC BY-NC 4.0 License.

Citation

Please consider citing our work:

@inproceedings{tan2020toronto3d,
    title={{Toronto-3D}: A large-scale mobile lidar dataset for semantic segmentation of urban roadways},
    author={Tan, Weikai and Qin, Nannan and Ma, Lingfei and Li, Ying and Du, Jing and Cai, Guorong and Yang, Ke and Li, Jonathan},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
    pages={202--203},
    year={2020}
}

Acknowledgements

Teledyne Optech is acknowledged for providing the mobile LiDAR point cloud data, collected with the Maverick system. Thanks to Jing Du and Dr. Guorong Cai of Jimei University for point cloud labelling.

Thanks to Intel ISL for including our dataset in the Open3D-ML 3D machine learning module.


<a name="changelog"></a> Changelog