
cuda_voxelizer v0.6

A command-line tool to convert polygon meshes to (annotated) voxel grids.

Important: In v0.6 I replaced all GLM math types with builtin CUDA types, removing an external dependency. This is a big change. I've tested the release as well as I can, but if you encounter any weirdness, it's advised to check if you can reproduce the problem with an older version. Thanks!

Usage

Program options:

Examples

`cuda_voxelizer -f bunny.ply -s 256` generates a 256 x 256 x 256 voxel model in the .vox format, which will be stored in bunny_256.vox.

`cuda_voxelizer -f torus.ply -s 64 -o obj -solid` generates a solid (filled) 64 x 64 x 64 .obj voxel model, which will be stored in torus_64.obj.


Building

The build process is aimed at 64-bit executables. It's possible to build for 32-bit as well, but I'm not actively testing/supporting this. You can build using CMake or the provided Visual Studio project. Since 2022, cuda_voxelizer builds via GitHub Actions as well; check the .yml config file for more info.

Dependencies

The project has the following build dependencies:

Build using CMake (Windows, Linux)

After installing the dependencies, run `mkdir build` and `cd build`, followed by:

For Windows with Visual Studio:

```
$env:CUDAARCHS="your_cuda_compute_capability"
cmake -A x64 -DTrimesh2_INCLUDE_DIR:PATH="path_to_trimesh2_include" -DTrimesh2_LINK_DIR:PATH="path_to_trimesh2_library_dir" ..
```

For Linux:

```
CUDAARCHS="your_cuda_compute_capability" cmake -DTrimesh2_INCLUDE_DIR:PATH="path_to_trimesh2_include" -DTrimesh2_LINK_DIR:PATH="path_to_trimesh2_library_dir" ..
```

Where `your_cuda_compute_capability` is a string specifying your CUDA architecture (see the CUDA documentation and the CMake `CUDAARCHS` documentation for more info). For example: `CUDAARCHS="50;61"` or `CUDAARCHS="60"`. On recent drivers, `nvidia-smi --query-gpu=compute_cap --format=csv` reports your GPU's compute capability.

Finally, run

```
cmake --build . --parallel number_of_cores
```
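Putting the Linux steps together, the whole flow might look like the sketch below. The trimesh2 path, the `lib.Linux64` library directory, and the compute capability `61` are placeholder assumptions for a typical setup, not fixed values:

```shell
# Placeholder values -- substitute your own paths and GPU architecture.
export CUDAARCHS="61"            # e.g. 61 for a GTX 1050 Ti (compute capability 6.1)
TRIMESH2="$HOME/libs/trimesh2"   # hypothetical trimesh2 checkout, built beforehand

mkdir build && cd build
cmake -DTrimesh2_INCLUDE_DIR:PATH="$TRIMESH2/include" \
      -DTrimesh2_LINK_DIR:PATH="$TRIMESH2/lib.Linux64" \
      ..
cmake --build . --parallel "$(nproc)"
```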

Build using Visual Studio project (Windows)

A project solution for Visual Studio 2022 is provided in the msvc folder. It is configured for CUDA 12.1, but you can edit the project file to make it work with other CUDA versions. You can edit the custom_includes.props file to configure the library locations and to specify where the resulting binaries should be placed:

    <TRIMESH_DIR>C:\libs\trimesh2\</TRIMESH_DIR>
    <GLM_DIR>C:\libs\glm\</GLM_DIR>
    <BINARY_OUTPUT_DIR>D:\dev\Binaries\</BINARY_OUTPUT_DIR>

Details

cuda_voxelizer implements an optimized version of the method described in M. Schwarz and H.-P. Seidel's 2010 paper Fast Parallel Surface and Solid Voxelization on GPUs. The Morton-encoded table is based on my 2013 HPG paper Out-of-Core Construction of Sparse Voxel Octrees and the work in libmorton.

cuda_voxelizer is built with a focus on performance: using the routine as a per-frame voxelization step in real-time applications is viable. These are the voxelization timings for the Stanford Bunny model (1.55 MB, 70k triangles).

| Grid size | GPU (GTX 1050 Ti) | CPU (Intel i7-8750H, 12 threads) |
|-----------|-------------------|----------------------------------|
| 64³       | 0.2 ms            | 39.8 ms                          |
| 128³      | 0.3 ms            | 63.6 ms                          |
| 256³      | 0.6 ms            | 118.2 ms                         |
| 512³      | 1.8 ms            | 308.8 ms                         |
| 1024³     | 8.6 ms            | 1047.5 ms                        |
| 2048³     | 44.6 ms           | 4147.4 ms                        |

Thanks

See also

Todo / Possible future work

This is on my list of "nice things to add".

Citation

If you use cuda_voxelizer in your published paper or other software, please reference it, for example as follows:

<pre>
@Misc{cudavoxelizer17,
  author = "Jeroen Baert",
  title = "Cuda Voxelizer: A GPU-accelerated Mesh Voxelizer",
  howpublished = "\url{https://github.com/Forceflow/cuda_voxelizer}",
  year = "2017"
}
</pre>

If you end up using cuda_voxelizer in something cool, drop me an e-mail: mail (at) jeroen-baert.be

Donate

cuda_voxelizer is developed in my free time. If you want to support the project, you can do so through: