
Ply File I/O Benchmark

This repository contains code for comparing various libraries for input and output of PLY files. The PLY file format was developed at Stanford University in the mid-90s by Greg Turk, as part of a real-world object digitization project undertaken there.

Task

The task is to read and write a basic triangle mesh stored in a PLY file. The triangle mesh is represented as a list of vertices and a list of triangles, where each triangle indicates which triplet of vertices forms it.

The data structure we wish to populate is:

```c
typedef struct vec3f
{
  float x, y, z;
} Vec3f;

typedef struct tri
{
  int index1, index2, index3;
} Tri;

typedef struct triangle_mesh
{
  int n_verts;
  int n_faces;
  Vec3f* vertices;
  Tri* faces;
} TriMesh;
```

The meshes we are testing were processed to contain only the position attribute and only triangle faces. As such, this task measures the speed at which each library can exchange simple triangular mesh data. Each mesh is tested using both the binary and the text (ASCII) variants supported by the PLY format; for binary we use little-endian encoding.
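For reference, a minimal ASCII PLY file with this layout (positions only, triangle faces) looks like the following; the binary variant uses the same text header, followed by packed little-endian data instead of ASCII rows:

```
ply
format ascii 1.0
element vertex 3
property float x
property float y
property float z
element face 1
property list uchar int vertex_indices
end_header
0.0 0.0 0.0
1.0 0.0 0.0
0.0 1.0 0.0
3 0 1 2
```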

Mesh size vs. number of meshes

This benchmark focuses on rather large meshes (15 thousand to 28 million triangles). The use case it analyzes is minimizing the time taken to load such large meshes. If your task is to read many smaller .ply files, this benchmark might not reflect your situation.

For an alternative task, where a large number of smaller meshes is parsed and where meshes might have more varied per-vertex attribute lists, please see the excellent ply-parsing-perf benchmark by Vilya Harvey.

Known list size

Given that our task processes only triangular meshes, it is useful to let the library know this in advance. Some libraries (see below) allow passing the expected size of list properties, leading to a non-negligible parsing speed-up. Where applicable, this feature has been enabled.

Test Models

The table below lists the models used in this benchmark, along with their sources.

| Model Name | N. Vertices | N. Tris | Source |
|---|---|---|---|
| suzanne | 7958 | 15744 | Blender |
| scannet_scene0402_00 | 93834 | 177518 | Scannet |
| angel | 237018 | 474048 | Large Geometric Models Archive |
| blade | 882954 | 1765388 | Large Geometric Models Archive |
| hand | 327323 | 654666 | Large Geometric Models Archive |
| horse | 48485 | 96966 | Large Geometric Models Archive |
| armadillo | 172974 | 345944 | Stanford 3D Scanning Repository |
| bunny | 35947 | 69451 | Stanford 3D Scanning Repository |
| dragon | 437645 | 871414 | Stanford 3D Scanning Repository |
| happy_buddha | 543652 | 1087716 | Stanford 3D Scanning Repository |
| lucy | 14027872 | 28055742 | Stanford 3D Scanning Repository |
| xyzrgb_dragon | 3609600 | 7219045 | Stanford 3D Scanning Repository |
| xyzrgb_statuette | 4999996 | 10000000 | Stanford 3D Scanning Repository |
| bust_of_sappho | 140864 | 281724 | Thingiverse |
| statue | 999517 | 1999038 | Sketchfab |
| speeder_bike | 1473341 | 2947046 | Sketchfab |
| armchair | 11558 | 23102 | Sketchfab |
| bust_of_angelique_dhannetaire | 250000 | 500000 | Sketchfab |

Libraries

Below is a list of libraries used in this benchmark:

| Library | Author | Language | Known list size | Notes |
|---|---|---|---|---|
| turkply | Greg Turk | c | :x: | Original PLY library |
| rply | Diego Nehab | c | :x: | |
| msh_ply | Maciej Halber | c | :heavy_check_mark: | |
| happly | Nicolas Sharp | c++ | :x: | |
| miniply | Vilya Harvey | c++ | :heavy_check_mark: | Only supports reading PLY files |
| micro_ply | Nick Klingensmith | c++ | :x: | Only supports reading ASCII PLY files |
| nanoply | vcglib | c++ | :x: | |
| plylib | vcglib | c++ | :x: | PLY reading/writing used by Meshlab(?) |
| tinyply | Dimitri Diakopoulos | c++ | :heavy_check_mark: | This benchmark includes versions 2.1, 2.2, and 2.3 of this library. |

For usage examples, as well as some additional comments about each of the libraries, please check the tests/*_test.c(pp) files.

Results

Below we present results for parsing PLY files storing data in both ASCII and binary (little-endian) formats. Times are given in milliseconds. Highlighted numbers indicate the best method in each category. As noted before, where applicable, a known list size is passed to the library.

The benchmark was compiled using MSVC 19.28.29334 with the /O2 optimization flag, on an AMD Ryzen 3900XT CPU with a Samsung 970 EVO Plus SSD.

To run the test, we run a separate program for each file that attempts to read and write the input file, and reports the time taken to do so. The program for each library is run 10 times and the results are averaged. The averaged time for each model is then used to compute the overall average time it took to process all the models.

Average Read Times

| Method | ASCII | Binary |
|---|---|---|
| happly | 19104.671 (75.3x) | 589.435 (16.4x) |
| micro_ply | 1131.752 (4.5x) | N/A |
| miniply | 253.671 (1.0x) | 35.935 (1.0x) |
| msh_ply | 2009.957 (7.9x) | 40.885 (1.1x) |
| nanoply | 8003.712 (31.6x) | 106.312 (3.0x) |
| plylib | 3157.350 (12.4x) | 338.514 (9.4x) |
| rply | 1731.580 (6.8x) | 327.164 (9.1x) |
| tinyply 2.1 | 11583.986 (45.7x) | 1844.445 (51.3x) |
| tinyply 2.2 | 7561.799 (29.8x) | 318.069 (8.9x) |
| tinyply 2.3 | 7500.844 (29.6x) | 294.265 (8.2x) |
| turkply | 2086.552 (8.2x) | 549.367 (15.3x) |

Average Write Times

| Method | ASCII | Binary |
|---|---|---|
| happly | 11534.080 (3.8x) | 1454.963 (19.8x) |
| msh_ply | 4178.405 (1.4x) | 73.406 (1.0x) |
| nanoply | 8772.179 (2.9x) | 107.735 (1.5x) |
| plylib | 3045.147 (1.0x) | 315.647 (4.3x) |
| rply | 3966.512 (1.3x) | 261.940 (3.6x) |
| tinyply 2.1 | 9667.221 (3.2x) | 1449.753 (19.7x) |
| tinyply 2.2 | 9870.520 (3.2x) | 526.407 (7.2x) |
| tinyply 2.3 | 9653.622 (3.2x) | 560.677 (7.6x) |
| turkply | 4017.640 (1.3x) | 624.668 (8.5x) |

Notes:

Per-model I/O times:

- ASCII: Read Times Table, Read Times Image, Write Times Table, Write Times Image
- Binary: Read Times Table, Read Times Image, Write Times Table, Write Times Image

Note that the images show the read time on a log scale, since the performance of different libraries is significantly different.

LOC

Another metric we can use when deciding on a library is ease of use. While LOC is by no means a perfect measure of ease of use, it does reflect how much code one needs to type to get basic PLY I/O done. Also, note that these numbers report only simple versions of the reading functions, without any error reporting, etc.

| Library | Read LOC | Write LOC |
|---|---|---|
| miniply | 35 | N/A |
| micro_ply | 25 | N/A |
| msh_ply | 29 | 23 |
| nanoply | 23 | 29 |
| plylib | 78 | 65 |
| rply | 69 | 23 |
| happly | 17 | 26 |
| tinyply | 17 | 10 |
| turkply | 52 | 39 |