ShapenetRender_more_variation

A new ShapeNet rendering 2D image dataset that also contains depth maps, normal maps, and albedo maps.

Please cite our paper, DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction (NeurIPS 2019), if you download the rendered images or use our code to render them yourself.

@inProceedings{xu2019disn,
  title={DISN: Deep Implicit Surface Network for High-quality Single-view 3D Reconstruction},
  author={Xu, Qiangeng and Wang, Weiyue and Ceylan, Duygu and Mech, Radomir and Neumann, Ulrich},
  booktitle={NeurIPS},
  year={2019}
}

Code contact: Qiangeng Xu* and Weiyue Wang*

Please also cite the original ShapeNet paper.

Dataset Intro:

The categories included are:

```python
cat_ids = {
    "watercraft": "04530566",
    "rifle": "04090263",
    "display": "03211117",
    "lamp": "03636649",
    "speaker": "03691459",
    "cabinet": "02933112",
    "chair": "03001627",
    "bench": "02828884",
    "car": "02958343",
    "airplane": "02691156",
    "sofa": "04256520",
    "table": "04379243",
    "phone": "04401088",
}
```

Our rendering follows the 2D image rendering conventions of 3D-R2N2.

| Albedo | RGB | Depth | Normal |
| --- | --- | --- | --- |
| <img src="samples/albedo_1176dff7f0ec879719d740e0f6a9a113/hard/32.png" width="200px" /> | <img src="samples/image_1176dff7f0ec879719d740e0f6a9a113/hard/32.png" width="200px"/> | <img src="samples/depth_1176dff7f0ec879719d740e0f6a9a113/hard/32.png" width="200px"/> | <img src="samples/normal_1176dff7f0ec879719d740e0f6a9a113/hard/32.png" width="200px" /> |

In each folder there is a metadata file, rendering_metadata.txt, where each line lists the parameters of one rendered view:

| camera Yaw | camera Roll | camera Pitch | distance ratio (0 to 1) | Focal length in mm | Sensor size in mm | max real distance | x_rand | y_rand | z_rand |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 74.77100786318874 | 37.07793266268725 | 0 | 0.6451202137421064 | 35 | 32 | 1.75 | -0.1529439091682434 | -0.13056571781635284 | 0.0746786817908287 |
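Each metadata line can be parsed into a dictionary keyed by the field names above. A minimal sketch, assuming ten whitespace- or comma-separated numbers per line (verify the delimiter against your copy of rendering_metadata.txt):

```python
def parse_metadata_line(line):
    # Ten numeric fields per view, in the order given in the table above.
    fields = ["yaw", "roll", "pitch", "distance_ratio", "focal_mm",
              "sensor_mm", "max_dist", "x_rand", "y_rand", "z_rand"]
    # Accept either comma- or whitespace-separated values.
    values = [float(tok) for tok in line.replace(",", " ").split()]
    assert len(values) == len(fields), "unexpected number of fields"
    return dict(zip(fields, values))

meta = parse_metadata_line(
    "74.77100786318874 37.07793266268725 0 0.6451202137421064 "
    "35 32 1.75 -0.1529439091682434 -0.13056571781635284 0.0746786817908287"
)
print(meta["yaw"], meta["focal_mm"])
```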

Dataset download:

image.tar

albedo.tar

depth.tar

normal.tar

Or you can run the generation script yourself:

  Install Blender 2.79, then use its bundled Python (python3.5m) to bootstrap pip3 and install numpy and opencv.

  python -u render_batch.py --model_root_dir {model root dir} --render_root_dir {where you store images} --filelist_dir {which models you want to render} --blender_location {your Blender executable path} --num_thread {e.g. 10} --shapenetversion {v1 or v2} --debug {False}
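The install step above can be sketched as shell commands. The paths are assumptions (Blender 2.79 ships its own Python under its install directory; adjust for your platform and unpack location):

```shell
# Assumed install location; change to wherever you unpacked Blender 2.79.
BLENDER_DIR=$HOME/blender-2.79
PY=$BLENDER_DIR/2.79/python/bin/python3.5m

# Bootstrap pip inside Blender's bundled Python, then install the dependencies.
$PY -m ensurepip
$PY -m pip install numpy opencv-python
```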

Transformation matrix calculation:

Please refer to cam_read.py
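cam_read.py is the authoritative reference for the exact convention. As a rough sketch of the kind of computation involved, the yaw and pitch angles from the metadata can be turned into a world-to-camera rotation; the specific axis order and signs here are a common azimuth/elevation construction and may differ from cam_read.py:

```python
import numpy as np

def camera_rotation(az_deg, el_deg):
    # Rotation for a camera at azimuth az and elevation el looking at the
    # origin: rotate about the world z-axis, then about the camera x-axis.
    # Illustrative convention only; see cam_read.py for the real one.
    az, el = np.deg2rad(az_deg), np.deg2rad(el_deg)
    Rz = np.array([[np.cos(az), -np.sin(az), 0.0],
                   [np.sin(az),  np.cos(az), 0.0],
                   [0.0, 0.0, 1.0]])
    Rx = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(el), -np.sin(el)],
                   [0.0, np.sin(el),  np.cos(el)]])
    return Rx @ Rz

# Yaw/pitch from the example metadata line above.
R = camera_rotation(74.771, 37.078)
# Sanity check: a rotation matrix is orthonormal with determinant 1.
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```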