Awesome
3D Machine Learning
In recent years, a tremendous amount of progress has been made in 3D Machine Learning, an interdisciplinary field that fuses computer vision, computer graphics, and machine learning. This repo is derived from my study notes and serves as a place for triaging new research papers.
I'll use the following icons to differentiate 3D representations:
- :camera: Multi-view Images
- :space_invader: Volumetric
- :game_die: Point Cloud
- :gem: Polygonal Mesh
- :pill: Primitive-based
To find related papers and their relationships, check out Connected Papers, which provides a neat way to visualize the academic field in a graph representation.
Get Involved
To contribute to this repo, you may add content through pull requests or open an issue to let me know.
:star: :star: :star: :star: :star: :star: :star: :star: :star: :star: :star: :star:<br> We have also created a Slack workspace for people around the globe to ask questions, share knowledge and facilitate collaborations. Together, I'm sure we can advance this field as a collaborative effort. Join the community with this link. <br>:star: :star: :star: :star: :star: :star: :star: :star: :star: :star: :star: :star:
Table of Contents
- Courses
- Datasets
- 3D Pose Estimation
- Single Object Classification
- Multiple Objects Detection
- Scene/Object Semantic Segmentation
- 3D Geometry Synthesis/Reconstruction
- Texture/Material Analysis and Synthesis
- Style Learning and Transfer
- Scene Synthesis/Reconstruction
- Scene Understanding
Available Courses
Stanford CS231A: Computer Vision-From 3D Reconstruction to Recognition (Winter 2018)
UCSD CSE291-I00: Machine Learning for 3D Data (Winter 2018)
Stanford CS468: Machine Learning for 3D Data (Spring 2017)
MIT 6.838: Shape Analysis (Spring 2017)
Princeton COS 526: Advanced Computer Graphics (Fall 2010)
Princeton CS597: Geometric Modeling and Analysis (Fall 2003)
Paper Collection for 3D Understanding
CreativeAI: Deep Learning for Graphics
<a name="datasets" />Datasets
To see a survey of RGBD datasets, check out Michael Firman's collection as well as the associated paper, RGBD Datasets: Past, Present and Future. Point Cloud Library also has a good dataset catalogue.
<a name="3d_models" />3D Models
<b>Princeton Shape Benchmark (2003)</b> [Link] <br>1,814 models collected from the web in .OFF format. Used for evaluating shape-based retrieval and analysis algorithms.
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Princeton%20Shape%20Benchmark%20(2003).jpeg" /></p><b>Dataset for IKEA 3D models and aligned images (2013)</b> [Link] <br>759 images and 219 models including Sketchup (skp) and Wavefront (obj) files, good for pose estimation.
<p align="center"><img width="50%" src="http://ikea.csail.mit.edu/web_img/ikea_object.png" /></p><b>Open Surfaces: A Richly Annotated Catalog of Surface Appearance (SIGGRAPH 2013)</b> [Link] <br>OpenSurfaces is a large database of annotated surfaces created from real-world consumer photographs. Our annotation framework draws on crowdsourcing to segment surfaces from photos, and then annotate them with rich surface properties, including material, texture and contextual information.
<p align="center"><img width="50%" src="http://opensurfaces.cs.cornell.edu/static/img/teaser4-web.jpg" /></p><b>PASCAL3D+ (2014)</b> [Link] <br>12 categories, on average 3k+ objects per category, for 3D object detection and pose estimation.
<p align="center"><img width="50%" src="http://cvgl.stanford.edu/projects/pascal3d+/pascal3d.png" /></p><b>ModelNet (2015)</b> [Link] <br>127915 3D CAD models from 662 categories <br>ModelNet10: 4899 models from 10 categories <br>ModelNet40: 12311 models from 40 categories, all are uniformly orientated
<p align="center"><img width="50%" src="http://3dvision.princeton.edu/projects/2014/ModelNet/thumbnail.jpg" /></p><b>ShapeNet (2015)</b> [Link] <br>3Million+ models and 4K+ categories. A dataset that is large in scale, well organized and richly annotated. <br>ShapeNetCore [Link]: 51300 models for 55 categories.
<p align="center"><img width="50%" src="http://msavva.github.io/files/shapenet.png" /></p><b>A Large Dataset of Object Scans (2016)</b> [Link] <br>10K scans in RGBD + reconstructed 3D models in .PLY format.
<p align="center"><img width="50%" src="http://redwood-data.org/3dscan/img/teaser.jpg" /></p><b>ObjectNet3D: A Large Scale Database for 3D Object Recognition (2016)</b> [Link] <br>100 categories, 90,127 images, 201,888 objects in these images and 44,147 3D shapes. <br>Tasks: region proposal generation, 2D object detection, joint 2D detection and 3D object pose estimation, and image-based 3D shape retrieval
<p align="center"><img width="50%" src="http://cvgl.stanford.edu/projects/objectnet3d/ObjectNet3D.png" /></p><b>Thingi10K: A Dataset of 10,000 3D-Printing Models (2016)</b> [Link] <br>10,000 models from featured “things” on thingiverse.com, suitable for testing 3D printing techniques such as structural analysis , shape optimization, or solid geometry operations.
<p align="center"><img width="50%" src="https://pbs.twimg.com/media/DRbxWnqXkAEEH0g.jpg:large" /></p><b>ABC: A Big CAD Model Dataset For Geometric Deep Learning</b> [Link][Paper] <br>This work introduce a dataset for geometric deep learning consisting of over 1 million individual (and high quality) geometric models, each associated with accurate ground truth information on the decomposition into patches, explicit sharp feature annotations, and analytic differential properties.<br>
<p align="center"><img width="50%" src="https://cs.nyu.edu/~zhongshi/img/abc-dataset.png" /></p>:game_die: <b>ScanObjectNN: A New Benchmark Dataset and Classification Model on Real-World Data (ICCV 2019)</b> [Link] <br> This work introduce ScanObjectNN, a new real-world point cloud object dataset based on scanned indoor scene data. The comprehensive benchmark in this work shows that this dataset poses great challenges to existing point cloud classification techniques as objects from real-world scans are often cluttered with background and/or are partial due to occlusions. Three key open problems for point cloud object classification are identified, and a new point cloud classification neural network that achieves state-of-the-art performance on classifying objects with cluttered background is proposed. <br>
<p align="center"><img width="50%" src="https://hkust-vgd.github.io/scanobjectnn/images/objects_teaser.png" /></p><b>VOCASET: Speech-4D Head Scan Dataset (2019(</b> [Link][Paper] <br>VOCASET, is a 4D face dataset with about 29 minutes of 4D scans captured at 60 fps and synchronized audio. The dataset has 12 subjects and 480 sequences of about 3-4 seconds each with sentences chosen from an array of standard protocols that maximize phonetic diversity.
<p align="center"><img width="50%" src="https://github.com/TimoBolkart/voca/blob/master/gif/vocaset.gif" /></p><b>3D-FUTURE: 3D FUrniture shape with TextURE (2020)</b> [Link] <br>3D-FUTURE contains 20,000+ clean and realistic synthetic scenes in 5,000+ diverse rooms, which include 10,000+ unique high quality 3D instances of furniture with high resolution informative textures developed by professional designers.
<p align="center"><img width="50%" src="https://img.alicdn.com/tfs/TB1HTSfz4v1gK0jSZFFXXb0sXXa-1999-1037.png" /></p><b>Fusion 360 Gallery Dataset (2020)</b> [Link][Paper] <br>The Fusion 360 Gallery Dataset contains rich 2D and 3D geometry data derived from parametric CAD models. The Reconstruction Dataset provides sequential construction sequence information from a subset of simple 'sketch and extrude' designs. The Segmentation Dataset provides a segmentation of 3D models based on the CAD modeling operation, including B-Rep format, mesh, and point cloud.
<p align="center"><img width="50%" src="https://raw.githubusercontent.com/AutodeskAILab/Fusion360GalleryDataset/master/docs/images/reconstruction_teaser.jpg" /> <img width="50%" src="https://raw.githubusercontent.com/AutodeskAILab/Fusion360GalleryDataset/master/docs/images/segmentation_example.jpg" /></p><b>Mechanical Components Benchmark (2020)</b>[Link][Paper] <br>MCB is a large-scale dataset of 3D objects of mechanical components. It has a total number of 58,696 mechanical components with 68 classes.
<p align="center"><img width="50%" src="https://mechanical-components.herokuapp.com/static/img/main_figure.png" /> </p><b>Combinatorial 3D Shape Dataset (2020)</b> [Link][Paper] <br>Combinatorial 3D Shape Dataset is composed of 406 instances of 14 classes. Each object in our dataset is considered equivalent to a sequence of primitive placement. Compared to other 3D object datasets, our proposed dataset contains an assembling sequence of unit primitives. It implies that we can quickly obtain a sequential generation process that is a human assembling mechanism. Furthermore, we can sample valid random sequences from a given combinatorial shape after validating the sampled sequences. To sum up, the characteristics of our combinatorial 3D shape dataset are (i) combinatorial, (ii) sequential, (iii) decomposable, and (iv) manipulable.
<p align="center"> <img width="65%" src="imgs/combinatorial_3d_shape_dataset.png" /> </p> <a name="3d_scenes" />3D Scenes
<b>NYU Depth Dataset V2 (2012)</b> [Link] <br>1449 densely labeled pairs of aligned RGB and depth images from Kinect video sequences for a variety of indoor scenes.
<p align="center"><img width="50%" src="https://cs.nyu.edu/~silberman/images/nyu_depth_v2_labeled.jpg" /></p><b>SUNRGB-D 3D Object Detection Challenge</b> [Link] <br>19 object categories for predicting a 3D bounding box in real world dimension <br>Training set: 10,355 RGB-D scene images, Testing set: 2860 RGB-D images
<p align="center"><img width="50%" src="http://rgbd.cs.princeton.edu/3dbox.png" /></p><b>SceneNN (2016)</b> [Link] <br>100+ indoor scene meshes with per-vertex and per-pixel annotation.
<p align="center"><img width="50%" src="https://cdn-ak.f.st-hatena.com/images/fotolife/r/robonchu/20170611/20170611155625.png" /></p><b>ScanNet (2017)</b> [Link] <br>An RGB-D video dataset containing 2.5 million views in more than 1500 scans, annotated with 3D camera poses, surface reconstructions, and instance-level semantic segmentations.
<p align="center"><img width="50%" src="http://www.scan-net.org/img/annotations.png" /></p><b>Matterport3D: Learning from RGB-D Data in Indoor Environments (2017)</b> [Link] <br>10,800 panoramic views (in both RGB and depth) from 194,400 RGB-D images of 90 building-scale scenes of private rooms. Instance-level semantic segmentations are provided for region (living room, kitchen) and object (sofa, TV) categories.
<p align="center"><img width="50%" src="https://niessner.github.io/Matterport/teaser.png" /></p><b>SUNCG: A Large 3D Model Repository for Indoor Scenes (2017)</b> [Link] <br>The dataset contains over 45K different scenes with manually created realistic room and furniture layouts. All of the scenes are semantically annotated at the object level.
<p align="center"><img width="50%" src="http://suncg.cs.princeton.edu/figures/data_full.png" /></p><b>MINOS: Multimodal Indoor Simulator (2017)</b> [Link] <br>MINOS is a simulator designed to support the development of multisensory models for goal-directed navigation in complex indoor environments. MINOS leverages large datasets of complex 3D environments and supports flexible configuration of multimodal sensor suites. MINOS supports SUNCG and Matterport3D scenes.
<p align="center"><img width="50%" src="http://vladlen.info/wp-content/uploads/2017/12/MINOS.jpg" /></p><b>Facebook House3D: A Rich and Realistic 3D Environment (2017)</b> [Link] <br>House3D is a virtual 3D environment which consists of 45K indoor scenes equipped with a diverse set of scene types, layouts and objects sourced from the SUNCG dataset. All 3D objects are fully annotated with category labels. Agents in the environment have access to observations of multiple modalities, including RGB images, depth, segmentation masks and top-down 2D map views.
<p align="center"><img width="50%" src="https://user-images.githubusercontent.com/1381301/33509559-87c4e470-d6b7-11e7-8266-27c940d5729a.jpg" /></p><b>HoME: a Household Multimodal Environment (2017)</b> [Link] <br>HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning.
<p align="center"><img width="50%" src="https://home-platform.github.io/assets/overview.png" /></p><b>AI2-THOR: Photorealistic Interactive Environments for AI Agents</b> [Link] <br>AI2-THOR is a photo-realistic interactable framework for AI agents. There are a total 120 scenes in version 1.0 of the THOR environment covering four different room categories: kitchens, living rooms, bedrooms, and bathrooms. Each room has a number of actionable objects.
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/AI2-Thor.jpeg" /></p><b>UnrealCV: Virtual Worlds for Computer Vision (2017)</b> [Link][Paper] <br>An open source project to help computer vision researchers build virtual worlds using Unreal Engine 4.
<p align="center"><img width="50%" src="http://unrealcv.org/images/homepage_teaser.png" /></p><b>Gibson Environment: Real-World Perception for Embodied Agents (2018 CVPR) </b> [Link] <br>This platform provides RGB from 1000 point clouds, as well as multimodal sensor data: surface normal, depth, and for a fraction of the spaces, semantics object annotations. The environment is also RL ready with physics integrated. Using such datasets can further narrow down the discrepency between virtual environment and real world.
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Gibson%20Environment-%20Real-World%20Perception%20for%20Embodied%20Agents%20(2018%20CVPR)%20.jpeg" /></p><b>InteriorNet: Mega-scale Multi-sensor Photo-realistic Indoor Scenes Dataset</b> [Link] <br>System Overview: an end-to-end pipeline to render an RGB-D-inertial benchmark for large scale interior scene understanding and mapping. Our dataset contains 20M images created by pipeline: (A) We collect around 1 million CAD models provided by world-leading furniture manufacturers. These models have been used in the real-world production. (B) Based on those models, around 1,100 professional designers create around 22 million interior layouts. Most of such layouts have been used in real-world decorations. (C) For each layout, we generate a number of configurations to represent different random lightings and simulation of scene change over time in daily life. (D) We provide an interactive simulator (ViSim) to help for creating ground truth IMU, events, as well as monocular or stereo camera trajectories including hand-drawn, random walking and neural network based realistic trajectory. (E) All supported image sequences and ground truth.
<p align="center"><img width="50%" src="https://interiornet.org/items/InteriorNet.jpg" /></p><b>Semantic3D</b>[Link] <br>Large-Scale Point Cloud Classification Benchmark, which provides a large labelled 3D point cloud data set of natural scenes with over 4 billion points in total, and also covers a range of diverse urban scenes.
<p align="center"><img width="50%" src="http://www.semantic3d.net/img/full_resolution/sg27_8.jpg" /></p><b>Structured3D: A Large Photo-realistic Dataset for Structured 3D Modeling</b> [Link]
<p align="center"><img width="50%" src="https://structured3d-dataset.org/static/img/teaser.png" /></p><b>3D-FRONT: 3D Furnished Rooms with layOuts and semaNTics</b> [Link] <br>Contains 10,000 houses (or apartments) and ~70,000 rooms with layout information.
<p align="center"><img width="50%" src="https://img.alicdn.com/tfs/TB131XOJeL2gK0jSZPhXXahvXXa-2992-2751.jpg" /></p><b>3ThreeDWorld(TDW): A High-Fidelity, Multi-Modal Platform for Interactive Physical Simulation</b> [Link]
<p align="center"><img width="50%" src="http://www.threedworld.org/img/gallery/gallery-1.jpg" /></p><b>MINERVAS: Massive INterior EnviRonments VirtuAl Synthesis</b> [Link]
<p align="center"><img width="50%" src="https://coohom.github.io/MINERVAS/static/img/teaser.png" /></p> <a name="pose_estimation" />3D Pose Estimation
<b>Category-Specific Object Reconstruction from a Single Image (2014)</b> [Paper]
<p align="center"><img width="50%" src="http://people.eecs.berkeley.edu/~akar/basisshapes_highres.png" /></p><b>Viewpoints and Keypoints (2015)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Viewpoints%20and%20Keypoints.jpeg" /></p><b>Render for CNN: Viewpoint Estimation in Images Using CNNs Trained with Rendered 3D Model Views (2015 ICCV)</b> [Paper]
<p align="center"><img width="50%" src="https://shapenet.cs.stanford.edu/projects/RenderForCNN/images/teaser.jpg" /></p><b>PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization (2015)</b> [Paper]
<p align="center"><img width="50%" src="http://mi.eng.cam.ac.uk/projects/relocalisation/images/map.png" /></p><b>Modeling Uncertainty in Deep Learning for Camera Relocalization (2016)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Modeling%20Uncertainty%20in%20Deep%20Learning%20for%20Camera%20Relocalization.jpeg" /></p><b>Robust camera pose estimation by viewpoint classification using deep learning (2016)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Robust%20camera%20pose%20estimation%20by%20viewpoint%20classification%20using%20deep%20learning.jpeg" /></p><b>Image-based localization using lstms for structured feature correlation (2017 ICCV)</b> [Paper]
<p align="center"><img width="50%" src="./imgs/Image-based localization using LSTMs for structured feature correlation.png" /></p><b>Image-Based Localization Using Hourglass Networks (2017 ICCV Workshops)</b> [Paper]
<p align="center"><img width="50%" src="./imgs/Image-Based Localization Using Hourglass Networks.png" /></p><b>Geometric loss functions for camera pose regression with deep learning (2017 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="http://mi.eng.cam.ac.uk/~cipolla/images/pose-net.png" /></p><b>Generic 3D Representation via Pose Estimation and Matching (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Generic%203D%20Representation%20via%20Pose%20Estimation%20and%20Matching.jpeg" /></p><b>3D Bounding Box Estimation Using Deep Learning and Geometry (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/3D%20Bounding%20Box%20Estimation%20Using%20Deep%20Learning%20and%20Geometry.png" /></p><b>6-DoF Object Pose from Semantic Keypoints (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://www.seas.upenn.edu/~pavlakos/projects/object3d/files/object3d-teaser.png" /></p><b>Relative Camera Pose Estimation Using Convolutional Neural Networks (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Relative%20Camera%20Pose%20Estimation%20Using%20Convolutional%20Neural%20Networks.png" /></p><b>3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions (2017)</b> [Paper]
<p align="center"><img width="50%" src="http://3dmatch.cs.princeton.edu/img/overview.jpg" /></p><b>Single Image 3D Interpreter Network (2016)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://3dinterpreter.csail.mit.edu/images/spotlight_3dinn_large.jpg" /></p><b>Multi-view Consistency as Supervisory Signal for Learning Shape and Pose Prediction (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="https://shubhtuls.github.io/mvcSnP/resources/images/teaser.png" /></p><b>PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://yuxng.github.io/PoseCNN.png" /></p><b>Feature Mapping for Learning Fast and Accurate 3D Pose Inference from Synthetic Images (2018 CVPR)</b> [Paper]
<p align="center"><img width="40%" src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTnpyajEhbhrPMc0YpEQzqE8N9E7CW_EVWYA3Bxg46oUEYFf9XvkA" /></p><b>Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="http://pix3d.csail.mit.edu/images/spotlight_pix3d.jpg" /></p><b>3D Pose Estimation and 3D Model Retrieval for Objects in the Wild (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="https://www.tugraz.at/fileadmin/user_upload/Institute/ICG/Documents/team_lepetit/images/grabner/pose_retrieval_overview.png" /></p><b>Deep Object Pose Estimation for Semantic Robotic Grasping of Household Objects (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://research.nvidia.com/sites/default/files/publications/forwebsite1_0.png" /></p><b>MocapNET2: a real-time method that estimates the 3D human pose directly in the popular Bio Vision Hierarchy (BVH) format (2021)</b> [Paper], [Code]
<p align="center"><img width="50%" src="https://raw.githubusercontent.com/FORTH-ModelBasedTracker/MocapNET/master/doc/mnet2.png" /></p> <a name="single_classification" />Single Object Classification
:space_invader: <b>3D ShapeNets: A Deep Representation for Volumetric Shapes (2015)</b> [Paper]
<p align="center"><img width="50%" src="https://ai2-s2-public.s3.amazonaws.com/figures/2016-11-08/3ed23386284a5639cb3e8baaecf496caa766e335/1-Figure1-1.png" /></p>:space_invader: <b>VoxNet: A 3D Convolutional Neural Network for Real-Time Object Recognition (2015)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://www.dimatura.net/research/voxnet/car_voxnet_side.png" /></p>:camera: <b>Multi-view Convolutional Neural Networks for 3D Shape Recognition (2015)</b> [Paper]
<p align="center"><img width="50%" src="http://vis-www.cs.umass.edu/mvcnn/images/mvcnn.png" /></p>:camera: <b>DeepPano: Deep Panoramic Representation for 3-D Shape Recognition (2015)</b> [Paper]
<p align="center"><img width="30%" src="https://ai2-s2-public.s3.amazonaws.com/figures/2016-11-08/5a1b5d31905d8cece7b78510f51f3d8bbb063063/1-Figure3-1.png" /></p>:space_invader::camera: <b>FusionNet: 3D Object Classification Using Multiple Data Representations (2016)</b> [Paper]
<p align="center"><img width="30%" src="https://ai2-s2-public.s3.amazonaws.com/figures/2016-11-08/0aab8fbcef1f0a14f5653d170ca36f4e5aae8010/6-Figure5-1.png" /></p>:space_invader::camera: <b>Volumetric and Multi-View CNNs for Object Classification on 3D Data (2016)</b> [Paper] [Code]
<p align="center"><img width="40%" src="http://graphics.stanford.edu/projects/3dcnn/teaser.jpg" /></p>:space_invader: <b>Generative and Discriminative Voxel Modeling with Convolutional Neural Networks (2016)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://davidstutz.de/wordpress/wp-content/uploads/2017/02/brock_vae.png" /></p>:gem: <b>Geometric deep learning on graphs and manifolds using mixture model CNNs (2016)</b> [Link]
<p align="center"><img width="50%" src="https://i2.wp.com/preferredresearch.jp/wp-content/uploads/2017/08/monet.png?resize=581%2C155&ssl=1" /></p>:space_invader: <b>3D GAN: Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling (2016)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://3dgan.csail.mit.edu/images/model.jpg" /></p>:space_invader: <b>Generative and Discriminative Voxel Modeling with Convolutional Neural Networks (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/ajbrock/Generative-and-Discriminative-Voxel-Modeling/blob/master/doc/GUI3.png" /></p>:space_invader: <b>FPNN: Field Probing Neural Networks for 3D Data (2016)</b> [Paper] [Code]
<p align="center"><img width="30%" src="https://ai2-s2-public.s3.amazonaws.com/figures/2016-11-08/15ca7adccf5cd4dc309cdcaa6328f4c429ead337/1-Figure2-1.png" /></p>:space_invader: <b>OctNet: Learning Deep 3D Representations at High Resolutions (2017)</b> [Paper] [Code]
<p align="center"><img width="30%" src="https://is.tuebingen.mpg.de/uploads/publication/image/18921/img03.png" /></p>:space_invader: <b>O-CNN: Octree-based Convolutional Neural Networks for 3D Shape Analysis (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://wang-ps.github.io/O-CNN_files/teaser.png" /></p>:space_invader: <b>Orientation-boosted voxel nets for 3D object recognition (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://lmb.informatik.uni-freiburg.de/Publications/2017/SZB17a/teaser_w.png" /></p>:game_die: <b>PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation (2017)</b> [Paper] [Code]
<p align="center"><img width="40%" src="https://web.stanford.edu/~rqi/papers/pointnet.png" /></p>:game_die: <b>PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space (2017)</b> [Paper] [Code]
<p align="center"><img width="40%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/PointNet%2B%2B-%20Deep%20Hierarchical%20Feature%20Learning%20on%20Point%20Sets%20in%20a%20Metric%20Space.png" /></p>:camera: <b>Feedback Networks (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Feedback%20Networks.png" /></p>:game_die: <b>Escape from Cells: Deep Kd-Networks for The Recognition of 3D Point Cloud Models (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Escape From Cells.png" /></p>:game_die: <b>Dynamic Graph CNN for Learning on Point Clouds (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://liuziwei7.github.io/homepage_files/dynamicgcnn_logo.png" /></p>:game_die: <b>PointCNN (2018)</b> [Paper]
<p align="center"><img width="50%" src="http://yangyan.li/images/paper/pointcnn.png" /></p>:game_die::camera: <b>A Network Architecture for Point Cloud Classification via Automatic Depth Images Generation (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="https://s3-us-west-1.amazonaws.com/disneyresearch/wp-content/uploads/20180619114732/A-Network-Architecture-for-Point-Cloud-Classification-via-Automatic-Depth-Images-Generation-Image-600x317.jpg" /></p>:game_die::space_invader: <b>PointGrid: A Deep Network for 3D Shape Understanding (CVPR 2018) </b> [Paper] [Code]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/PointGrid-%20A%20Deep%20Network%20for%203D%20Shape%20Understanding%20(2018).jpeg" /></p>:gem: <b> MeshNet: Mesh Neural Network for 3D Shape Representation (AAAI 2019) </b> [Paper] [Code]
<p align="center"><img width="50%" src="http://www.gaoyue.org/en_tsinghua/resrc/meshnet.jpg" /></p>:game_die: <b>SpiderCNN (2018)</b> [Paper][Code]
<p align="center"><img width="50%" src="http://5b0988e595225.cdn.sohucs.com/images/20181109/45c3b670e67f43b288791c650fb7fb0b.jpeg" /></p>:game_die: <b>PointConv (2018)</b> [Paper][Code]
<p align="center"><img width="50%" src="https://pics4.baidu.com/feed/8b82b9014a90f603272fe29f88ef061fb251ed49.jpeg?token=b23e1dbbaeaf12ffe3d168bd997a8d66&s=01307D328FE07C010C69C1CE0000D0B3" /></p>:gem: <b>MeshCNN (SIGGRAPH 2019)</b> [Paper][Code]
<p align="center"><img width="50%" src="https://github.com/ranahanocka/MeshCNN/blob/master/docs/imgs/alien.gif?raw=true" /></p>:game_die: <b>SampleNet: Differentiable Point Cloud Sampling (CVPR 2020)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://github.com/itailang/SampleNet/blob/master/doc/teaser.png" /></p> <a name="multiple_detection" />Multiple Objects Detection
<b>Sliding Shapes for 3D Object Detection in Depth Images (2014)</b> [Paper]
<p align="center"><img width="50%" src="http://slidingshapes.cs.princeton.edu/teaser.jpg" /></p><b>Object Detection in 3D Scenes Using CNNs in Multi-view Images (2016)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Object%20Detection%20in%203D%20Scenes%20Using%20CNNs%20in%20Multi-view%20Images.png" /></p><b>Deep Sliding Shapes for Amodal 3D Object Detection in RGB-D Images (2016)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://3dvision.princeton.edu/slide/DSS.jpg" /></p><b>Three-Dimensional Object Detection and Layout Prediction using Clouds of Oriented Gradients (2016)</b> [CVPR '16 Paper] [CVPR '18 Paper] [T-PAMI '19 Paper]
<p align="center"><img width="50%" src="https://github.com/luvegood/3D-Machine-Learning/blob/master/imgs/Three-Dimensional%20Object%20Detection%20and%20Layout%20Prediction%20using%20Clouds%20of%20Oriented%20Gradients.png" /></p><b>DeepContext: Context-Encoding Neural Pathways for 3D Holistic Scene Understanding (2016)</b> [Paper]
<p align="center"><img width="50%" src="http://deepcontext.cs.princeton.edu/teaser.png" /></p><b>SUN RGB-D: A RGB-D Scene Understanding Benchmark Suite (2017)</b> [Paper]
<p align="center"><img width="50%" src="http://rgbd.cs.princeton.edu/teaser.jpg" /></p><b>VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://pbs.twimg.com/media/DPMtLhHXUAcQUj2.jpg" /></p><b>Frustum PointNets for 3D Object Detection from RGB-D Data (CVPR2018)</b> [Paper]
<p align="center"><img width="50%" src="http://stanford.edu/~rqi/frustum-pointnets/images/teaser.jpg" /></p><b>A^2-Net: Molecular Structure Estimation from Cryo-EM Density Volumes (AAAI2019)</b> [Paper]
<p align="center"><img width="50%" src="imgs/a-square-net-min.jpg" /></p><b>Stereo R-CNN based 3D Object Detection for Autonomous Driving (CVPR2019)</b> [Paper]
<p align="center"><img width="50%" src="https://www.groundai.com/media/arxiv_projects/515338/system_newnew.png" /></p><b>Deep Hough Voting for 3D Object Detection in Point Clouds (ICCV2019)</b> [Paper] [code]
<p align="center"><img width="50%" src="https://github.com/facebookresearch/votenet/blob/master/doc/teaser.jpg" /></p> <a name="segmentation" />Scene/Object Semantic Segmentation
<b>Learning 3D Mesh Segmentation and Labeling (2010)</b> [Paper]
<p align="center"><img width="50%" src="https://ai2-s2-public.s3.amazonaws.com/figures/2016-11-08/0bf390e2a14f74bcc8838d5fb1c0c4cc60e92eb7/7-Figure7-1.png" /></p><b>Unsupervised Co-Segmentation of a Set of Shapes via Descriptor-Space Spectral Clustering (2011)</b> [Paper]
<p align="center"><img width="30%" src="http://people.scs.carleton.ca/~olivervankaick/cosegmentation/results6.png" /></p><b>Single-View Reconstruction via Joint Analysis of Image and Shape Collections (2015)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://vladlen.info/wp-content/uploads/2015/05/single-view.png" /></p><b>3D Shape Segmentation with Projective Convolutional Networks (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://people.cs.umass.edu/~kalo/papers/shapepfcn/teaser.jpg" /></p><b>Learning Hierarchical Shape Segmentation and Labeling from Online Repositories (2017)</b> [Paper]
<p align="center"><img width="50%" src="http://cs.stanford.edu/~ericyi/project_page/hier_seg/figures/teaser.jpg" /></p>:space_invader: <b>ScanNet (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://www.scan-net.org/img/voxel-predictions.jpg" /></p>:game_die: <b>PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation (2017)</b> [Paper] [Code]
<p align="center"><img width="40%" src="https://web.stanford.edu/~rqi/papers/pointnet.png" /></p>:game_die: <b>PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space (2017)</b> [Paper] [Code]
<p align="center"><img width="40%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/PointNet%2B%2B-%20Deep%20Hierarchical%20Feature%20Learning%20on%20Point%20Sets%20in%20a%20Metric%20Space.png" /></p>:game_die: <b>3D Graph Neural Networks for RGBD Semantic Segmentation (2017)</b> [Paper]
<p align="center"><img width="40%" src="http://www.fonow.com/Images/2017-10-18/66372-20171018115809740-2125227250.jpg" /></p>:game_die: <b>3DCNN-DQN-RNN: A Deep Reinforcement Learning Framework for Semantic Parsing of Large-scale 3D Point Clouds (2017)</b> [Paper]
<p align="center"><img width="40%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/3DCNN-DQN-RNN.png" /></p>:game_die::space_invader: <b>Semantic Segmentation of Indoor Point Clouds using Convolutional Neural Networks (2017)</b> [Paper]
<p align="center"><img width="55%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Semantic Segmentation of Indoor Point Clouds using Convolutional Neural Networks.png" /></p>:game_die::space_invader: <b>SEGCloud: Semantic Segmentation of 3D Point Clouds (2017)</b> [Paper]
<p align="center"><img width="55%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/SEGCloud.png" /></p>:game_die::space_invader: <b>Large-Scale 3D Shape Reconstruction and Segmentation from ShapeNet Core55 (2017)</b> [Paper]
<p align="center"><img width="40%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Core55.png" /></p>:game_die: <b>Pointwise Convolutional Neural Networks (CVPR 2018)</b> [Link] <br> We propose pointwise convolution that performs on-the-fly voxelization for learning local features of a point cloud.
<p align="center"><img width="50%" src="http://pointwise.scenenn.net/images/teaser.png" /></p>:game_die: <b>Dynamic Graph CNN for Learning on Point Clouds (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://liuziwei7.github.io/homepage_files/dynamicgcnn_logo.png" /></p>:game_die: <b>PointCNN (2018)</b> [Paper]
<p align="center"><img width="50%" src="http://yangyan.li/images/paper/pointcnn.png" /></p>:camera::space_invader: <b>3DMV: Joint 3D-Multi-View Prediction for 3D Semantic Scene Segmentation (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/angeladai/3DMV/blob/master/images/teaser.jpg" /></p>:space_invader: <b>ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/angeladai/ScanComplete/blob/master/images/teaser_mesh.jpg" /></p>:game_die::camera: <b>SPLATNet: Sparse Lattice Networks for Point Cloud Processing (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/SPLATNet-%20Sparse%20Lattice%20Networks%20for%20Point%20Cloud%20Processing.jpeg" /></p>:game_die::space_invader: <b>PointGrid: A Deep Network for 3D Shape Understanding (CVPR 2018) </b> [Paper] [Code]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/PointGrid-%20A%20Deep%20Network%20for%203D%20Shape%20Understanding%20(2018).jpeg" /></p>:game_die: <b>PointConv (2018)</b> [Paper][Code]
<p align="center"><img width="50%" src="https://pics4.baidu.com/feed/8b82b9014a90f603272fe29f88ef061fb251ed49.jpeg?token=b23e1dbbaeaf12ffe3d168bd997a8d66&s=01307D328FE07C010C69C1CE0000D0B3" /></p>:game_die: <b>SpiderCNN (2018)</b> [Paper][Code]
<p align="center"><img width="50%" src="http://5b0988e595225.cdn.sohucs.com/images/20181109/45c3b670e67f43b288791c650fb7fb0b.jpeg" /></p>:space_invader: <b>3D-SIS: 3D Semantic Instance Segmentation of RGB-D Scans (CVPR 2019)</b> [Paper][Code]
<p align="center"><img width="50%" src="http://www.niessnerlab.org/papers/2019/6sis/teaser.jpg" /></p>:game_die: <b>Real-time Progressive 3D Semantic Segmentation for Indoor Scenes (WACV 2019)</b> [Link] <br> We propose an efficient yet robust technique for on-the-fly dense reconstruction and semantic segmentation of 3D indoor scenes. Our method is built atop an efficient super-voxel clustering method and a conditional random field with higher-order constraints from structural and object cues, enabling progressive dense semantic segmentation without any precomputation.
<p align="center"><img width="50%" src="https://pqhieu.github.io/media/images/wacv19/thumbnail.gif" /></p>:game_die: <b>JSIS3D: Joint Semantic-Instance Segmentation of 3D Point Clouds (CVPR 2019)</b> [Link] <br> We jointly address the problems of semantic and instance segmentation of 3D point clouds with a multi-task pointwise network that simultaneously performs two tasks: predicting the semantic classes of 3D points and embedding the points into high-dimensional vectors so that points of the same object instance are represented by similar embeddings. We then propose a multi-value conditional random field model to incorporate the semantic and instance labels and formulate the problem of semantic and instance segmentation as jointly optimising labels in the field model.
<p align="center"><img width="50%" src="./imgs/jsis3d.png" /></p>:game_die: <b>ShellNet: Efficient Point Cloud Convolutional Neural Networks using Concentric Shells Statistics (ICCV 2019)</b> [Link] <br> We propose an efficient end-to-end permutation invariant convolution for point cloud deep learning. We use statistics from concentric spherical shells to define representative features and resolve the point order ambiguity, allowing traditional convolution to perform efficiently on such features.
<p align="center"><img width="50%" src="https://hkust-vgd.github.io/shellnet/images/shellconv_new.png" /></p>:game_die: <b>Rotation Invariant Convolutions for 3D Point Clouds Deep Learning (3DV 2019)</b> [Link] <br> We introduce a novel convolution operator for point clouds that achieves rotation invariance. Our core idea is to use low-level rotation invariant geometric features such as distances and angles to design a convolution operator for point cloud learning.
<p align="center"><img width="50%" src="https://hkust-vgd.github.io/riconv/images/RIO_cam.png" /></p> <a name="3d_synthesis" />3D Model Synthesis/Reconstruction
<a name="3d_synthesis_model_based" />Parametric Morphable Model-based methods
<b>A Morphable Model For The Synthesis Of 3D Faces (1999)</b> [Paper][Code]
<p align="center"><img width="40%" src="http://mblogthumb3.phinf.naver.net/MjAxNzAzMTdfMjcz/MDAxNDg5NzE3MzU0ODI3.9lQioLxwoGmtoIVXX9sbVOzhezoqgKMKiTovBnbUFN0g.sXN5tG4Kohgk7OJEtPnux-mv7OAoXVxxCyo3SGZMc6Yg.PNG.atelierjpro/031717_0222_DataDrivenS4.png?type=w420" /></p><b>FLAME: Faces Learned with an Articulated Model and Expressions (2017)</b> [Paper][Code (Chumpy)][Code (TF)] [Code (PyTorch)] <br>FLAME is a lightweight and expressive generic head model learned from over 33,000 of accurately aligned 3D scans. The model combines a linear identity shape space (trained from 3800 scans of human heads) with an articulated neck, jaw, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes. The code demonstrates how to 1) reconstruct textured 3D faces from images, 2) fit the model to 3D landmarks or registered 3D meshes, or 3) generate 3D face templates for speech-driven facial animation.
<p align="center"> <img width="50%" src="https://github.com/TimoBolkart/TF_FLAME/blob/master/gifs/model_variations.gif"></p><b>The Space of Human Body Shapes: Reconstruction and Parameterization from Range Scans (2003)</b> [Paper]
<p align="center"><img width="50%" src="https://ai2-s2-public.s3.amazonaws.com/figures/2016-11-08/46d39b0e21ae956e4bcb7a789f92be480d45ee12/7-Figure10-1.png" /></p><b>SMPL-X: Expressive Body Capture: 3D Hands, Face, and Body from a Single Image (2019)</b> [Paper][Video][Code]
<p align="center"> <img width="50%" src="https://github.com/vchoutas/smplify-x/blob/master/images/teaser_fig.png"></p><b>PIFuHD: Multi-Level Pixel Aligned Implicit Function for High-Resolution 3D Human Digitization (CVPR 2020)</b> [Paper][Video][Code]
<p align="center"> <img width="50%" src=""></p><b>ExPose: Monocular Expressive Body Regression through Body-Driven Attention (2020)</b> [Paper][Video][Code]
<p align="center"> <img width="50%" src="https://github.com/vchoutas/expose/blob/master/images/expose.png"></p><b>Category-Specific Object Reconstruction from a Single Image (2014)</b> [Paper]
<p align="center"><img width="50%" src="http://people.eecs.berkeley.edu/~akar/categoryShapes/images/teaser.png" /></p>:game_die: <b>DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://chrischoy.github.io/images/publication/deformnet/model.png" /></p>:gem: <b>Mesh-based Autoencoders for Localized Deformation Component Analysis (2017)</b> [Paper]
<p align="center"><img width="50%" src="http://qytan.com/img/point_conv.jpg" /></p>:gem: <b>Exploring Generative 3D Shapes Using Autoencoder Networks (Autodesk 2017)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Exploring%20Generative%203D%20Shapes%20Using%20Autoencoder%20Networks.jpeg" /></p>:gem: <b>Using Locally Corresponding CAD Models for Dense 3D Reconstructions from a Single Image (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://chenhsuanlin.bitbucket.io/images/rp/r02.png" /></p>:gem: <b>Compact Model Representation for 3D Reconstruction (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://jhonykaesemodel.com/img/headers/overview.png" /></p>:gem: <b>Image2Mesh: A Learning Framework for Single Image 3D Reconstruction (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://pbs.twimg.com/media/DW5VhjpW4AAESHO.jpg" /></p>:gem: <b>Learning free-form deformations for 3D object reconstruction (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://jhonykaesemodel.com/learning_ffd_overview.png" /></p>:gem: <b>Variational Autoencoders for Deforming 3D Mesh Models(2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="http://humanmotion.ict.ac.cn/papers/2018P5_VariationalAutoencoders/TeaserImage.jpg" /></p>:gem: <b>Lions and Tigers and Bears: Capturing Non-Rigid, 3D, Articulated Shape from Images (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="https://3c1703fe8d.site.internapcdn.net/newman/gfx/news/hires/2018/realisticava.jpg" /></p> <a name="3d_synthesis_template_based" />Part-based Template Learning methods
<b>Modeling by Example (2004)</b> [Paper]
<p align="center"><img width="20%" src="http://gfx.cs.princeton.edu/pubs/Funkhouser_2004_MBE/chair.jpg" /></p><b>Model Composition from Interchangeable Components (2007)</b> [Paper]
<p align="center"><img width="40%" src="http://www.cs.ubc.ca/labs/imager/tr/2007/Vlad_Shuffler/teaser.jpg" /></p><b>Data-Driven Suggestions for Creativity Support in 3D Modeling (2010)</b> [Paper]
<p align="center"><img width="50%" src="http://vladlen.info/wp-content/uploads/2011/12/creativity.png" /></p><b>Photo-Inspired Model-Driven 3D Object Modeling (2011)</b> [Paper]
<p align="center"><img width="50%" src="http://kevinkaixu.net/projects/photo-inspired/overview.PNG" /></p><b>Probabilistic Reasoning for Assembly-Based 3D Modeling (2011)</b> [Paper]
<p align="center"><img width="50%" src="http://vladlen.info/wp-content/uploads/2011/12/highlight9.png" /></p><b>A Probabilistic Model for Component-Based Shape Synthesis (2012)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/test1/blob/master/imgs/A%20Probabilistic%20Model%20for%20Component-Based%20Shape%20Synthesis.png" /></p><b>Structure Recovery by Part Assembly (2012)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/test1/blob/master/imgs/Structure%20Recovery%20by%20Part%20Assembly.png" /></p><b>Fit and Diverse: Set Evolution for Inspiring 3D Shape Galleries (2012)</b> [Paper]
<p align="center"><img width="50%" src="http://kevinkaixu.net/projects/civil/teaser.png" /></p><b>AttribIt: Content Creation with Semantic Attributes (2013)</b> [Paper]
<p align="center"><img width="30%" src="http://gfx.cs.princeton.edu/gfx/pubs/Chaudhuri_2013_ACC/teaser.jpg" /></p><b>Learning Part-based Templates from Large Collections of 3D Shapes (2013)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/test1/blob/master/imgs/Learning%20Part-based%20Templates%20from%20Large%20Collections%20of%203D%20Shapes.png" /></p><b>Topology-Varying 3D Shape Creation via Structural Blending (2014)</b> [Paper]
<p align="center"><img width="50%" src="https://i.ytimg.com/vi/Xc4qf7v6a-w/maxresdefault.jpg" /></p><b>Estimating Image Depth using Shape Collections (2014)</b> [Paper]
<p align="center"><img width="50%" src="http://vecg.cs.ucl.ac.uk/Projects/SmartGeometry/image_shape_net/paper_docs/pipeline.jpg" /></p><b>Single-View Reconstruction via Joint Analysis of Image and Shape Collections (2015)</b> [Paper]
<p align="center"><img width="50%" src="http://vladlen.info/wp-content/uploads/2015/05/single-view.png" /></p><b>Interchangeable Components for Hands-On Assembly Based Modeling (2016)</b> [Paper]
<p align="center"><img width="30%" src="https://github.com/timzhang642/test1/blob/master/imgs/Interchangeable%20Components%20for%20Hands-On%20Assembly%20Based%20Modeling.png" /></p><b>Shape Completion from a Single RGBD Image (2016)</b> [Paper]
<p align="center"><img width="40%" src="http://tianjiashao.com/Images/2015/completion.jpg" /></p> <a name="3d_synthesis_dl_based" />Deep Learning Methods
:camera: <b>Learning to Generate Chairs, Tables and Cars with Convolutional Networks (2014)</b> [Paper]
<p align="center"><img width="50%" src="https://zo7.github.io/img/2016-09-25-generating-faces/chairs-model.png" /></p>:camera: <b>Weakly-supervised Disentangling with Recurrent Transformations for 3D View Synthesis (2015, NIPS)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/jimeiyang/deepRotator/blob/master/demo_img.png" /></p>:game_die: <b>Analysis and synthesis of 3D shape families via deep-learned generative models of surfaces (2015)</b> [Paper]
<p align="center"><img width="50%" src="https://people.cs.umass.edu/~hbhuang/publications/bsm/bsm_teaser.jpg" /></p>:camera: <b>Weakly-supervised Disentangling with Recurrent Transformations for 3D View Synthesis (2015)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://ai2-s2-public.s3.amazonaws.com/figures/2016-11-08/042993c46294a542946c9c1706b7b22deb1d7c43/2-Figure1-1.png" /></p>:camera: <b>Multi-view 3D Models from Single Images with a Convolutional Network (2016)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://ai2-s2-public.s3.amazonaws.com/figures/2016-11-08/3d7ca5ad34f23a5fab16e73e287d1a059dc7ef9a/4-Figure2-1.png" /></p>:camera: <b>View Synthesis by Appearance Flow (2016)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://ai2-s2-public.s3.amazonaws.com/figures/2016-11-08/12280506dc8b5c3ca2db29fc3be694d9a8bef48c/6-Figure2-1.png" /></p>:space_invader: <b>Voxlets: Structured Prediction of Unobserved Voxels From a Single Depth Image (2016)</b> [Paper] [Code]
<p align="center"><img width="30%" src="https://i.ytimg.com/vi/1wy4y2GWD5o/maxresdefault.jpg" /></p>:space_invader: <b>3D-R2N2: 3D Recurrent Reconstruction Neural Network (2016)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://3d-r2n2.stanford.edu/imgs/overview.png" /></p>:space_invader: <b>Perspective Transformer Nets: Learning Single-View 3D Object Reconstruction without 3D Supervision (2016)</b> [Paper]
<p align="center"><img width="70%" src="https://sites.google.com/site/skywalkeryxc/_/rsrc/1481104596238/perspective_transformer_nets/network_arch.png" /></p>:space_invader: <b>TL-Embedding Network: Learning a Predictable and Generative Vector Representation for Objects (2016)</b> [Paper]
<p align="center"><img width="50%" src="https://rohitgirdhar.github.io/GenerativePredictableVoxels/assets/webteaser.jpg" /></p>:space_invader: <b>3D GAN: Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling (2016)</b> [Paper]
<p align="center"><img width="50%" src="http://3dgan.csail.mit.edu/images/model.jpg" /></p>:space_invader: <b>3D Shape Induction from 2D Views of Multiple Objects (2016)</b> [Paper]
<p align="center"><img width="50%" src="https://ai2-s2-public.s3.amazonaws.com/figures/2016-11-08/e78572eeef8b967dec420013c65a6684487c13b2/2-Figure2-1.png" /></p>:camera: <b>Unsupervised Learning of 3D Structure from Images (2016)</b> [Paper]
<p align="center"><img width="50%" src="https://adriancolyer.files.wordpress.com/2016/12/unsupervised-3d-fig-10.jpeg?w=600" /></p>:space_invader: <b>Generative and Discriminative Voxel Modeling with Convolutional Neural Networks (2016)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://davidstutz.de/wordpress/wp-content/uploads/2017/02/brock_vae.png" /></p>:camera: <b>Multi-view Supervision for Single-view Reconstruction via Differentiable Ray Consistency (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://shubhtuls.github.io/drc/resources/images/teaserChair.png" /></p>:camera: <b>Synthesizing 3D Shapes via Modeling Multi-View Depth Maps and Silhouettes with Deep Generative Networks (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://jiajunwu.com/images/spotlight_3dvae.jpg" /></p>:space_invader: <b>Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://graphics.stanford.edu/projects/cnncomplete/teaser.jpg" /></p>:space_invader: <b>Octree Generating Networks: Efficient Convolutional Architectures for High-resolution 3D Outputs (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://ai2-s2-public.s3.amazonaws.com/figures/2016-11-08/6c2a292bb018a8742cbb0bbc5e23dd0a454ffe3a/2-Figure2-1.png" /></p>:space_invader: <b>Hierarchical Surface Prediction for 3D Object Reconstruction (2017)</b> [Paper]
<p align="center"><img width="50%" src="http://bair.berkeley.edu/blog/assets/hsp/image_2.png" /></p>:space_invader: <b>OctNetFusion: Learning Depth Fusion from Data (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/OctNetFusion-%20Learning%20Depth%20Fusion%20from%20Data.jpeg" /></p>:game_die: <b>A Point Set Generation Network for 3D Object Reconstruction from a Single Image (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/A%20Point%20Set%20Generation%20Network%20for%203D%20Object%20Reconstruction%20from%20a%20Single%20Image%20(2017).jpeg" /></p>:game_die: <b>Learning Representations and Generative Models for 3D Point Clouds (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://github.com/optas/latent_3d_points/blob/master/doc/images/teaser.jpg" /></p>:game_die: <b>Shape Generation using Spatially Partitioned Point Clouds (2017)</b> [Paper]
<p align="center"><img width="50%" src="http://mgadelha.me/sppc/fig/abstract.png" /></p>:game_die: <b>PCPNET Learning Local Shape Properties from Raw Point Clouds (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/PCPNET%20Learning%20Local%20Shape%20Properties%20from%20Raw%20Point%20Clouds%20(2017).jpeg" /></p>:camera: <b>Transformation-Grounded Image Generation Network for Novel 3D View Synthesis (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://eng.ucmerced.edu/people/jyang44/pics/view_synthesis.gif" /></p>:camera: <b>Tag Disentangled Generative Adversarial Networks for Object Image Re-rendering (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Tag%20Disentangled%20Generative%20Adversarial%20Networks%20for%20Object%20Image%20Re-rendering.jpeg" /></p>:camera: <b>3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://people.cs.umass.edu/~zlun/papers/SketchModeling/SketchModeling_teaser.png" /></p>:space_invader: <b>Interactive 3D Modeling with a Generative Adversarial Network (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://pbs.twimg.com/media/DCsPKLqXoAEBd-V.jpg" /></p>:camera::space_invader: <b>Weakly supervised 3D Reconstruction with Adversarial Constraint (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Weakly%20supervised%203D%20Reconstruction%20with%20Adversarial%20Constraint%20(2017).jpeg" /></p>:camera: <b>SurfNet: Generating 3D shape surfaces using deep residual networks (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://3dadept.com/wp-content/uploads/2017/07/Screenshot-from-2017-07-26-145521-e1501077539723.png" /></p>:camera: <b>Learning to Reconstruct Symmetric Shapes using Planar Parameterization of 3D Surface (2019)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://github.com/hrdkjain/LearningSymmetricShapes/blob/master/Images/teaser.png" /></p>:pill: <b>GRASS: Generative Recursive Autoencoders for Shape Structures (SIGGRAPH 2017)</b> [Paper] [Code] [code]
<p align="center"><img width="50%" src="http://kevinkaixu.net/projects/grass/teaser.jpg" /></p>:pill: <b> 3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks (2017)</b> [Paper][code]
<p align="center"><img width="50%" src="https://github.com/zouchuhang/3D-PRNN/blob/master/figs/teasor.jpg" /></p>:gem: <b>Neural 3D Mesh Renderer (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://pbs.twimg.com/media/DPSm-4HWkAApEZd.jpg" /></p>:game_die::space_invader: <b>Large-Scale 3D Shape Reconstruction and Segmentation from ShapeNet Core55 (2017)</b> [Paper]
<p align="center"><img width="40%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Core55.png" /></p>:space_invader: <b>Pix2vox: Sketch-Based 3D Exploration with Stacked Generative Adversarial Networks (2017)</b> [Code]
<p align="center"><img width="50%" src="https://github.com/maxorange/pix2vox/blob/master/img/sample.gif" /></p>:camera::space_invader: <b>What You Sketch Is What You Get: 3D Sketching using Multi-View Deep Volumetric Prediction (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://arxiv-sanity-sanity-production.s3.amazonaws.com/render-output/31631/x1.png" /></p>:camera::space_invader: <b>MarrNet: 3D Shape Reconstruction via 2.5D Sketches (2017)</b> [Paper]
<p align="center"><img width="50%" src="http://marrnet.csail.mit.edu/images/model.jpg" /></p>:camera::space_invader::game_die: <b>Learning a Multi-View Stereo Machine (2017 NIPS)</b> [Paper]
<p align="center"><img width="50%" src="http://bair.berkeley.edu/static/blog/unified-3d/Network.png" /></p>:space_invader: <b>3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions (2017)</b> [Paper]
<p align="center"><img width="50%" src="http://3dmatch.cs.princeton.edu/img/overview.jpg" /></p>:space_invader: <b>Scaling CNNs for High Resolution Volumetric Reconstruction from a Single Image (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/frankhjwx/3D-Machine-Learning/blob/master/imgs/Scaling%20CNN%20Reconstruction.png" /></p>:pill: <b>ComplementMe: Weakly-Supervised Component Suggestions for 3D Modeling (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://mhsung.github.io/assets/images/complement-me/figure_2.png" /></p>:space_invader: <b>Learning Descriptor Networks for 3D Shape Synthesis and Analysis (2018 CVPR)</b> [Project] [Paper] [Code]
An energy-based 3D shape descriptor network is a deep energy-based model for volumetric shape patterns. The maximum likelihood training of the model follows an “analysis by synthesis” scheme and can be interpreted as a mode seeking and mode shifting process. The model can synthesize 3D shape patterns by sampling from the probability distribution via MCMC such as Langevin dynamics. Experiments demonstrate that the proposed model can generate realistic 3D shape patterns and can be useful for 3D shape analysis.
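The synthesis step above can be pictured with a generic Langevin-dynamics sampler. The sketch below uses a toy energy gradient and placeholder step sizes (assumptions, not the authors' code); in practice the gradient comes from backpropagation through the learned descriptor network.

```python
# Generic Langevin-dynamics sampling from an energy-based model (illustrative).
import numpy as np

def langevin_sample(energy_grad, shape, n_steps=64, step_size=0.01, seed=0):
    """energy_grad(x) -> dE/dx; returns a synthesized volume of `shape`."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=shape)  # start from noise
    for _ in range(n_steps):
        noise = rng.normal(size=shape)
        # Langevin update: gradient step on the energy plus injected noise.
        x = x - 0.5 * step_size**2 * energy_grad(x) + step_size * noise
    return x

# Toy example with E(x) = ||x||^2 / 2, so dE/dx = x.
volume = langevin_sample(lambda x: x, shape=(32, 32, 32))
```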
<p align="center"><img width="60%" src="http://www.stat.ucla.edu/~jxie/3DEBM/files/3D_syn.png" /></p>:game_die: <b>PU-Net: Point Cloud Upsampling Network (2018)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://appsrv.cse.cuhk.edu.hk/~lqyu/indexpics/Pu-Net.png" /></p>:camera::space_invader: <b>Multi-view Consistency as Supervisory Signal for Learning Shape and Pose Prediction (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="https://shubhtuls.github.io/mvcSnP/resources/images/teaser.png" /></p>:camera::game_die: <b>Object-Centric Photometric Bundle Adjustment with Deep Shape Prior (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://chenhsuanlin.bitbucket.io/images/rp/r06.png" /></p>:camera::game_die: <b>Learning Efficient Point Cloud Generation for Dense 3D Object Reconstruction (2018 AAAI)</b> [Paper]
<p align="center"><img width="50%" src="https://chenhsuanlin.bitbucket.io/images/rp/r05.png" /></p>:gem: <b>Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://www.groundai.com/media/arxiv_projects/188911/x2.png.750x0_q75_crop.png" /></p>:gem: <b>AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation (2018 CVPR)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://imagine.enpc.fr/~groueixt/atlasnet/imgs/teaser.small.png" /></p>:space_invader::gem: <b>Deep Marching Cubes: Learning Explicit Surface Representations (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/frankhjwx/3D-Machine-Learning/blob/master/imgs/Deep%20Marching%20Cubes.png" /></p>:space_invader: <b>Im2Avatar: Colorful 3D Reconstruction from a Single Image (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/syb7573330/im2avatar/blob/master/misc/demo_teaser.png" /></p>:gem: <b>Learning Category-Specific Mesh Reconstruction from Image Collections (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://akanazawa.github.io/cmr/resources/images/teaser.png" /></p>:pill: <b>CSGNet: Neural Shape Parser for Constructive Solid Geometry (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://pbs.twimg.com/media/DR-RgbaU8AEyjeW.jpg" /></p>:space_invader: <b>Text2Shape: Generating Shapes from Natural Language by Learning Joint Embeddings (2018)</b> [Paper]
<p align="center"><img width="50%" src="http://text2shape.stanford.edu/figures/pull.png" /></p>:space_invader::gem::camera: <b>Multi-View Silhouette and Depth Decomposition for High Resolution 3D Object Representation (2018)</b> [Paper] [Code]
<p align="center"><img width="60%" src="imgs/decomposition_new.png" /> <img width="60%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Multi-View%20Silhouette%20and%20Depth%20Decomposition%20for%20High%20Resolution%203D%20Object%20Representation.png" /></p>:space_invader::gem::camera: <b>Pixels, voxels, and views: A study of shape representations for single view 3D object shape prediction (2018 CVPR)</b> [Paper]
<p align="center"><img width="60%" src="imgs/pixels-voxels-views-rgb2mesh.png" /> </p>:camera::game_die: <b>Neural scene representation and rendering (2018)</b> [Paper]
<p align="center"><img width="50%" src="http://www.arimorcos.com/static/images/publication_images/gqn_image.png" /></p>:pill: <b>Im2Struct: Recovering 3D Shape Structure from a Single RGB Image (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="https://kevinkaixu.net/images/publications/niu_cvpr18.jpg" /></p>:game_die: <b>FoldingNet: Point Cloud Auto-encoder via Deep Grid Deformation (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="http://simbaforrest.github.io/fig/FoldingNet.jpg" /></p>:camera::space_invader: <b>Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Pix3D%20-%20Dataset%20and%20Methods%20for%20Single-Image%203D%20Shape%20Modeling%20(2018%20CVPR).png" /></p>:gem: <b>3D-RCNN: Instance-level 3D Object Reconstruction via Render-and-Compare (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/3D-RCNN-%20Instance-level%203D%20Object%20Reconstruction%20via%20Render-and-Compare%20(2018%20CVPR).jpeg" /></p>:space_invader: <b>Matryoshka Networks: Predicting 3D Geometry via Nested Shape Layers (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Matryoshka%20Networks-%20Predicting%203D%20Geometry%20via%20Nested%20Shape%20Layers%20(2018%20CVPR).jpeg" /></p>:gem: <b> Deformable Shape Completion with Graph Convolutional Autoencoders (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="https://orlitany.github.io/OL_files/shapeComp.png" /></p>:space_invader: <b>Global-to-Local Generative Model for 3D Shapes (SIGGRAPH Asia 2018)</b> [Paper][Code]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Global-to-Local%20Generative%20Model%20for%203D%20Shapes.jpg" /></p>:gem::game_die::space_invader: <b>ALIGNet: Partial-Shape Agnostic Alignment via Unsupervised Learning (TOG 2018)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://github.com/ranahanocka/ALIGNet/blob/master/docs/rep.png" /></p>:game_die::space_invader: <b>PointGrid: A Deep Network for 3D Shape Understanding (CVPR 2018) </b> [Paper] [Code]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/PointGrid-%20A%20Deep%20Network%20for%203D%20Shape%20Understanding%20(2018).jpeg" /></p>:game_die: <b>GAL: Geometric Adversarial Loss for Single-View 3D-Object Reconstruction (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://media.springernature.com/original/springer-static/image/chp%3A10.1007%2F978-3-030-01237-3_49/MediaObjects/474213_1_En_49_Fig2_HTML.gif" /></p>:game_die: <b>Visual Object Networks: Image Generation with Disentangled 3D Representation (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Visual%20Object%20Networks-%20Image%20Generation%20with%20Disentangled%203D%20Representation%20(2018).jpeg" /></p>:space_invader: <b>Learning to Infer and Execute 3D Shape Programs (2019))</b> [Paper]
<p align="center"><img width="50%" src="http://shape2prog.csail.mit.edu/shape_files/teaser.jpg" /></p>:space_invader: <b>Learning to Infer and Execute 3D Shape Programs (2019))</b> [Paper]
<p align="center"><img width="50%" src="https://pbs.twimg.com/media/DxFaW-mU8AEo9wc.jpg" /></p>:gem: <b>Learning View Priors for Single-view 3D Reconstruction (CVPR 2019)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Learning%20View%20Priors%20for%20Single-view%203D%20Reconstruction.png" /></p>:gem::game_die: <b>Learning Embedding of 3D models with Quadric Loss (BMVC 2019)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://www.ics.uci.edu/~agarwal/bmvc_2019.png" /></p>:game_die: <b>CompoNet: Learning to Generate the Unseen by Part Synthesis and Composition (ICCV 2019)</b> [Paper][Code]
<p align="center"><img width="50%" src="https://raw.githubusercontent.com/nschor/CompoNet/master/images/network_architecture.png" /></p><b>CoMA: Convolutional Mesh Autoencoders (2018)</b> [Paper][Code (TF)][Code (PyTorch)][Code (PyTorch)] <br>CoMA is a versatile model that learns a non-linear representation of a face using spectral convolutions on a mesh surface. CoMA introduces mesh sampling operations that enable a hierarchical mesh representation that captures non-linear variations in shape and expression at multiple scales within the model.
<p align="center"> <img width="50%" src="https://coma.is.tue.mpg.de/uploads/ckeditor/pictures/91/content_coma_faces.jpg"></p><b>RingNet: 3D Face Reconstruction from Single Images (2019)</b> [Paper][Code]
<p align="center"> <img width="50%" src="https://github.com/soubhiksanyal/RingNet/blob/master/gif/celeba_reconstruction.gif"></p><b>VOCA: Voice Operated Character Animation (2019)</b> [Paper][Video][Code] <br>VOCA is a simple and generic speech-driven facial animation framework that works across a range of identities. The codebase demonstrates how to synthesize realistic character animations given an arbitrary speech signal and a static character mesh.
<p align="center"> <img width="50%" src="https://github.com/TimoBolkart/voca/blob/master/gif/speech_driven_animation.gif"></p>:gem: <b>Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer</b> [Paper][Site][Code]
<p align="center"> <img width="50%" src="https://nv-tlabs.github.io/DIB-R/figures/model2a-2.png"> </p>:gem: <b>Soft Rasterizer: A Differentiable Renderer for Image-based 3D Reasoning</b> [Paper][Code]
<p align="center"> <img width="50%" src="https://raw.githubusercontent.com/ShichenLiu/SoftRas/master/data/media/teaser/teaser.png"> </p><b>NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis</b> [Project][Paper][Code]
<p align="center"> <img width="50%" src="https://uploads-ssl.webflow.com/51e0d73d83d06baa7a00000f/5e700ef6067b43821ed52768_pipeline_website-01-p-800.png"> </p>:gem::game_die: <b>GAMesh: Guided and Augmented Meshing for Deep Point Networks (3DV 2020)</b> [Project] [Paper] [Code]
<p align="center"><img width="50%" src="https://www.ics.uci.edu/~agarwal/3DV_2020.png" /></p>:space_invader: <b>Generative VoxelNet: Learning Energy-Based Models for 3D Shape Synthesis and Analysis (2020 TPAMI)</b> [Paper]
This paper proposes a deep 3D energy-based model to represent volumetric shapes. The maximum likelihood training of the model follows an “analysis by synthesis” scheme. Experiments demonstrate that the proposed model can generate high-quality 3D shape patterns and can be useful for a wide variety of 3D shape analysis.
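A compact sketch of what one such "analysis by synthesis" maximum-likelihood update could look like, assuming an MCMC sampler like the Langevin sketch above; the function and its arguments are illustrative, not the paper's code.

```python
import torch

def ebm_train_step(energy_net, optimizer, real_voxels, sample_fn):
    """One maximum-likelihood update for an energy-based voxel model (sketch).

    `sample_fn(energy_net, shape)` synthesizes voxels by MCMC; the update then
    lowers the energy of observed shapes and raises the energy of synthesized ones.
    """
    synth_voxels = sample_fn(energy_net, real_voxels.shape)
    loss = energy_net(real_voxels).mean() - energy_net(synth_voxels).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```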
<p align="center"><img width="60%" src="imgs/voxelnet.png" /></p>:game_die: <b>Generative PointNet: Deep Energy-Based Learning on Unordered Point Sets for 3D Generation, Reconstruction and Classification (2021 CVPR) </b> [Project] [Paper] [Code]
Generative PointNet is an energy-based model of unordered point clouds, where the energy function is parameterized by an input-permutation-invariant bottom-up neural network. The model can be trained by MCMC-based maximum likelihood learning, and a short-run MCMC toward the energy-based model can serve as a flow-like generator for point cloud reconstruction and interpolation. The learned point cloud representation can be useful for point cloud classification.
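The key ingredient, an input-permutation-invariant energy function, can be sketched as a per-point MLP followed by symmetric pooling and a scalar head; the layer sizes below are placeholders and the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

class PointEnergyNet(nn.Module):
    """Permutation-invariant energy function over a point set (illustrative sketch)."""

    def __init__(self, hidden=128):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.energy_head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points):
        # points: (batch, num_points, 3); max-pooling over the point axis makes the
        # energy invariant to the ordering of the points
        per_point = self.point_mlp(points)
        pooled = per_point.max(dim=1).values
        return self.energy_head(pooled).squeeze(-1)   # (batch,) scalar energies
```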
<p align="center"><img width="60%" src="imgs/gpointnet.png" /></p>:game_die: :gem: <b>Shape My Face: Registering 3D Face Scans by Surface-to-Surface Translation</b> [Paper] [Code]
Shape My Face (SMF) is a point cloud to mesh auto-encoder for the registration of raw human face scans and the generation of synthetic human faces. SMF leverages a modified PointNet encoder with a visual attention module and differentiable surface sampling to be independent of the original surface representation and to reduce the need for pre-processing. Mesh convolution decoders are combined with a specialized PCA model of the mouth, and smoothly blended based on geodesic distances, to create a compact model that is highly robust to noise. SMF is applied to register and perform expression transfer on scans captured in the wild with an iPhone depth camera, represented either as meshes or point clouds.
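To illustrate the geodesic blending idea, the NumPy sketch below mixes the output of a global decoder with a mouth-specific prediction using a smoothstep falloff of geodesic distance; the radii and the smoothstep profile are assumptions, not the paper's blending function.

```python
import numpy as np

def blend_by_geodesic(verts_global, verts_mouth, geodesic_dist,
                      inner=0.02, outer=0.06):
    """Blend two per-vertex predictions by geodesic distance to the mouth (sketch).

    verts_global, verts_mouth: (V, 3) vertex positions from the two decoders.
    geodesic_dist: (V,) geodesic distance of each vertex to the mouth region.
    """
    t = np.clip((geodesic_dist - inner) / (outer - inner), 0.0, 1.0)
    w_mouth = 1.0 - (3 * t**2 - 2 * t**3)   # smoothstep: 1 near the mouth, 0 far away
    return w_mouth[:, None] * verts_mouth + (1.0 - w_mouth[:, None]) * verts_global
```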
<p align="center"><img width="60%" src="imgs/ShapeMyFace.png" /></p>:game_die: <b>Learning Implicit Fields for Generative Shape Modeling (2019)</b> [Paper] [Code]
This work advocates the use of implicit fields for learning generative models of shapes and introduces an implicit field decoder, called IM-NET, aimed at improving the visual quality of generated shapes. An implicit field assigns a value to each point in 3D space, so that a shape can be extracted as an iso-surface. IM-NET is trained to perform this assignment as a binary classifier: it takes a point coordinate, along with a feature vector encoding a shape, and outputs a value indicating whether the point is inside or outside the shape. Replacing conventional decoders with this implicit decoder for representation learning (IM-AE) and shape generation (IM-GAN) yields superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.
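Conceptually, such an implicit decoder is just an MLP conditioned on a shape code; a minimal PyTorch sketch is shown below (layer sizes are illustrative, not the paper's), with the surface then extracted by evaluating a dense grid of points and running marching cubes at an iso-level of 0.5.

```python
import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    """Implicit field decoder in the spirit of IM-NET: (xyz, shape code) -> inside/outside."""

    def __init__(self, code_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + code_dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),   # probability that the query point lies inside
        )

    def forward(self, xyz, code):
        # xyz: (N, 3) query points; code: (code_dim,) feature vector of one shape
        code = code.unsqueeze(0).expand(xyz.shape[0], -1)
        return self.net(torch.cat([xyz, code], dim=-1)).squeeze(-1)

# To extract the surface, evaluate the decoder on a dense voxel grid of query points
# and run marching cubes (e.g. skimage.measure.marching_cubes) at an iso-level of 0.5.
```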
<p align="center"><img width="60%" src="imgs/IM_NET.png" /></p> <a name="material_synthesis" />Texture/Material Analysis and Synthesis
<b>Texture Synthesis Using Convolutional Neural Networks (2015)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Texture%20Synthesis%20Using%20Convolutional%20Neural%20Networks.jpeg" /></p><b>Two-Shot SVBRDF Capture for Stationary Materials (SIGGRAPH 2015)</b> [Paper]
<p align="center"><img width="50%" src="https://mediatech.aalto.fi/publications/graphics/TwoShotSVBRDF/teaser.png" /></p><b>Reflectance Modeling by Neural Texture Synthesis (2016)</b> [Paper]
<p align="center"><img width="50%" src="https://mediatech.aalto.fi/publications/graphics/NeuralSVBRDF/teaser.png" /></p><b>Modeling Surface Appearance from a Single Photograph using Self-augmented Convolutional Neural Networks (2017)</b> [Paper]
<p align="center"><img width="50%" src="http://msraig.info/~sanet/teaser.jpg" /></p><b>High-Resolution Multi-Scale Neural Texture Synthesis (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://wxs.ca/research/multiscale-neural-synthesis/multiscale-gram-marble.jpg" /></p><b>Reflectance and Natural Illumination from Single Material Specular Objects Using Deep Learning (2017)</b> [Paper]
<p align="center"><img width="50%" src="http://www.vision.ee.ethz.ch/~georgous/images/tpami17_teaser2.png" /></p><b>Joint Material and Illumination Estimation from Photo Sets in the Wild (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Joint%20Material%20and%20Illumination%20Estimation%20from%20Photo%20Sets%20in%20the%20Wild.jpeg" /></p><b>JWhat Is Around The Camera? (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://homes.cs.washington.edu/~krematas/my_images/arxiv16b_teaser.jpg" /></p><b>TextureGAN: Controlling Deep Image Synthesis with Texture Patches (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="http://texturegan.eye.gatech.edu/img/paper_figure.png" /></p><b>Gaussian Material Synthesis (2018 SIGGRAPH)</b> [Paper]
<p align="center"><img width="50%" src="https://i.ytimg.com/vi/VM2ysCnD9GA/maxresdefault.jpg" /></p><b>Non-stationary Texture Synthesis by Adversarial Expansion (2018 SIGGRAPH)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/jessemelpolio/non-stationary_texture_syn/blob/master/imgs/teaser.png" /></p><b>Synthesized Texture Quality Assessment via Multi-scale Spatial and Statistical Texture Attributes of Image and Gradient Magnitude Coefficients (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="https://user-images.githubusercontent.com/12434910/39275366-e18c7c1c-4899-11e8-8e61-05072618bbce.PNG" /></p><b>LIME: Live Intrinsic Material Estimation (2018 CVPR)</b> [Paper]
<p align="center"><img width="50%" src="https://web.stanford.edu/~zollhoef/papers/CVPR18_Material/teaser.png" /></p><b>Single-Image SVBRDF Capture with a Rendering-Aware Deep Network (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://team.inria.fr/graphdeco/files/2018/08/teaser_v0.png" /></p><b>PhotoShape: Photorealistic Materials for Large-Scale Shape Collections (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://keunhong.com/publications/photoshape/teaser.jpg" /></p><b>Learning Material-Aware Local Descriptors for 3D Shapes (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Learning%20Material-Aware%20Local%20Descriptors%20for%203D%20Shapes%20(2018).jpeg" /></p><b>FrankenGAN: Guided Detail Synthesis for Building Mass Models using Style-Synchonized GANs (2018 SIGGRAPH Asia)</b> [Paper]
<p align="center"><img width="50%" src="http://geometry.cs.ucl.ac.uk/projects/2018/frankengan/paper_docs/teaser.jpg" /></p> <a name="style_transfer" />Style Learning and Transfer
<b>Style-Content Separation by Anisotropic Part Scales (2010)</b> [Paper]
<p align="center"><img width="50%" src="https://sites.google.com/site/kevinkaixu/_/rsrc/1472852123106/publications/style_b.jpg?height=145&width=400" /></p><b>Design Preserving Garment Transfer (2012)</b> [Paper]
<p align="center"><img width="30%" src="https://hal.inria.fr/hal-00695903v2/file/02_WomanToAll.jpg" /></p><b>Analogy-Driven 3D Style Transfer (2014)</b> [Paper]
<p align="center"><img width="50%" src="http://www.chongyangma.com/publications/st/2014_st_teaser.png" /></p><b>Elements of Style: Learning Perceptual Shape Style Similarity (2015)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://people.cs.umass.edu/~zlun/papers/StyleSimilarity/StyleSimilarity_teaser.jpg" /></p><b>Functionality Preserving Shape Style Transfer (2016)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://people.cs.umass.edu/~zlun/papers/StyleTransfer/StyleTransfer_teaser.jpg" /></p><b>Unsupervised Texture Transfer from Images to Model Collections (2016)</b> [Paper]
<p align="center"><img width="50%" src="http://geometry.cs.ucl.ac.uk/projects/2016/texture_transfer/paper_docs/teaser.png" /></p><b>Learning Detail Transfer based on Geometric Features (2017)</b> [Paper]
<p align="center"><img width="50%" src="http://surfacedetails.cs.princeton.edu/images/teaser.png" /></p><b>Co-Locating Style-Defining Elements on 3D Shapes (2017)</b> [Paper]
<p align="center"><img width="50%" src="http://s2017.siggraph.org/sites/default/files/styles/large/public/images/events/c118-e100-publicimage_0-itok=yO8OegQO.png" /></p><b>Neural 3D Mesh Renderer (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://pbs.twimg.com/media/DPSm-4HWkAApEZd.jpg" /></p><b>Appearance Modeling via Proxy-to-Image Alignment (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Appearance%20Modeling%20via%20Proxy-to-Image%20Alignment.png" /></p>:gem: <b>Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://pbs.twimg.com/media/DaIuEnfU0AAqesA.jpg" /></p><b>Automatic Unpaired Shape Deformation Transfer (SIGGRAPH Asia 2018)</b> [Paper]
<p align="center"><img width="50%" src="http://geometrylearning.com/ausdt/imgs/teaser.png" /></p><b>3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer (2020)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://github.com/ethz-asl/3dsnet/blob/main/docs/chairs.jpg" /></p> <a name="scene_synthesis" />Scene Synthesis/Reconstruction
<b>Make It Home: Automatic Optimization of Furniture Arrangement (2011, SIGGRAPH)</b> [Paper]
<p align="center"><img width="40%" src="https://www.cs.umb.edu/~craigyu/img/papers/furniture.gif" /></p><b>Interactive Furniture Layout Using Interior Design Guidelines (2011)</b> [Paper]
<p align="center"><img width="50%" src="http://vis.berkeley.edu/papers/furnitureLayout/furnitureBig.jpg" /></p><b>Synthesizing Open Worlds with Constraints using Locally Annealed Reversible Jump MCMC (2012)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Synthesizing%20Open%20Worlds%20with%20Constraints%20using%20Locally%20Annealed%20Reversible%20Jump%20MCMC%20(2012).jpeg" /></p><b>Example-based Synthesis of 3D Object Arrangements (2012 SIGGRAPH Asia)</b> [Paper]
<p align="center"><img width="60%" src="http://graphics.stanford.edu/projects/scenesynth/img/teaser.jpg" /></p><b>Sketch2Scene: Sketch-based Co-retrieval and Co-placement of 3D Models (2013)</b> [Paper]
<p align="center"><img width="40%" src="http://sunweilun.github.io/images/paper/sketch2scene_thumb.jpg" /></p><b>Action-Driven 3D Indoor Scene Evolution (2016)</b> [Paper]
<p align="center"><img width="50%" src="https://maruitx.github.io/project/adise/teaser.jpg" /></p><b>The Clutterpalette: An Interactive Tool for Detailing Indoor Scenes (2015)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/The%20Clutterpalette-%20An%20Interactive%20Tool%20for%20Detailing%20Indoor%20Scenes.png" /></p><b>Image2Scene: Transforming Style of 3D Room (2015)</b> [Paper]
<p align="center"><img width="60%" src="imgs/Image2Scene.jpg" /></p><b>Relationship Templates for Creating Scene Variations (2016)</b> [Paper]
<p align="center"><img width="50%" src="http://geometry.cs.ucl.ac.uk/projects/2016/relationship-templates/paper_docs/teaser.png" /></p><b>IM2CAD (2017)</b> [Paper]
<p align="center"><img width="50%" src="http://i.imgur.com/KhtOeuB.jpg" /></p><b>Predicting Complete 3D Models of Indoor Scenes (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Predicting%20Complete%203D%20Models%20of%20Indoor%20Scenes.png" /></p><b>Complete 3D Scene Parsing from Single RGBD Image (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Complete%203D%20Scene%20Parsing%20from%20Single%20RGBD%20Image.jpeg" /></p><b>Raster-to-Vector: Revisiting Floorplan Transformation (2017, ICCV)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://www.cse.wustl.edu/~chenliu/floorplan-transformation/teaser.png" /></p><b>Fully Convolutional Refined Auto-Encoding Generative Adversarial Networks for 3D Multi Object Scenes (2017)</b> [Blog]
<p align="center"><img width="50%" src="https://cdn-images-1.medium.com/max/1600/1*NckW2hfgbHhEP3P8Z5ZLjQ.png" /></p><b>Adaptive Synthesis of Indoor Scenes via Activity-Associated Object Relation Graphs (2017 SIGGRAPH Asia)</b> [Paper]
<p align="center"><img width="50%" src="https://sa2017.siggraph.org/images/events/c121-e45-publicimage.jpg" /></p><b>Automated Interior Design Using a Genetic Algorithm (2017)</b> [Paper]
<p align="center"><img width="50%" src="http://www.peterkan.com/pictures/teaserq.jpg" /></p><b>SceneSuggest: Context-driven 3D Scene Design (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/SceneSuggest%20-Context-driven%203D%20Scene%20Design%20(2017).png" /></p><b>A fully end-to-end deep learning approach for real-time simultaneous 3D reconstruction and material recognition (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/A%20fully%20end-to-end%20deep%20learning%20approach%20for%20real-time%20simultaneous%203D%20reconstruction%20and%20material%20recognition%20(2017).png" /></p><b>Human-centric Indoor Scene Synthesis Using Stochastic Grammar (2018, CVPR)</b>[Paper] [Supplementary] [Code]
<p align="center"><img width="50%" src="http://web.cs.ucla.edu/~syqi/publications/thumbnails/cvpr2018synthesis.gif" /></p>:camera::game_die: <b>FloorNet: A Unified Framework for Floorplan Reconstruction from 3D Scans (2018)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://art-programmer.github.io/floornet/teaser.png" /></p>:space_invader: <b>ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://niessnerlab.org/papers/2018/3scancomplete/teaser.jpg" /></p><b>Deep Convolutional Priors for Indoor Scene Synthesis (2018)</b> [Paper]
<p align="center"><img width="50%" src="http://msavva.github.io/files/deepsynth.png" /></p>:camera: <b>Fast and Flexible Indoor scene synthesis via Deep Convolutional Generative Models (2018)</b> [Paper] [Code]
<p align="center"><img width="80%" src="imgs/Fast%20and%20Flexible%20Indoor%20scene%20synthesis%20via%20Deep%20Convolutional%20Generative%20Models.jpg" ></p><b>Configurable 3D Scene Synthesis and 2D Image Rendering with Per-Pixel Ground Truth using Stochastic Grammars (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://media.springernature.com/original/springer-static/image/art%3A10.1007%2Fs11263-018-1103-5/MediaObjects/11263_2018_1103_Fig5_HTML.jpg" /></p><b>Holistic 3D Scene Parsing and Reconstruction from a Single RGB Image (ECCV 2018)</b> [Paper]
<p align="center"><img width="50%" src="http://web.cs.ucla.edu/~syqi/publications/thumbnails/eccv2018scene.png" /></p><b>Language-Driven Synthesis of 3D Scenes from Scene Databases (SIGGRAPH Asia 2018)</b> [Paper]
<p align="center"><img width="50%" src="http://www.sfu.ca/~agadipat/publications/2018/T2S/teaser.png" /></p><b>Deep Generative Modeling for Scene Synthesis via Hybrid Representations (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Deep%20Generative%20Modeling%20for%20Scene%20Synthesis%20via%20Hybrid%20Representations%20(2018).jpeg" /></p><b>GRAINS: Generative Recursive Autoencoders for INdoor Scenes (2018)</b> [Paper]
<p align="center"><img width="50%" src="https://www.groundai.com/media/arxiv_projects/373503/new_pics/teaserfig.jpg.750x0_q75_crop.jpg" /></p><b>SEETHROUGH: Finding Objects in Heavily Occluded Indoor Scene Images (2018)</b> [Paper]
<p align="center"><img width="50%" src="http://geometry.cs.ucl.ac.uk/projects/2018/seethrough/paper_docs/result_plate.png" /></p><b>:space_invader: Scan2CAD: Learning CAD Model Alignment in RGB-D Scans (CVPR 2019)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://www.niessnerlab.org/papers/2019/5scan2cad/teaser.jpg" /></p><b>:gem: Scan2Mesh: From Unstructured Range Scans to 3D Meshes (CVPR 2019)</b> [Paper]
<p align="center"><img width="50%" src="http://www.niessnerlab.org/papers/2019/4scan2mesh/teaser.jpg" /></p><b>:space_invader: 3D-SIC: 3D Semantic Instance Completion for RGB-D Scans (arXiv 2019)</b> [Paper]
<p align="center"><img width="50%" src="http://www.niessnerlab.org/papers/2019/z1sic/teaser.jpg" /></p><b>:space_invader: End-to-End CAD Model Retrieval and 9DoF Alignment in 3D Scans (arXiv 2019)</b> [Paper]
<p align="center"><img width="50%" src="http://www.niessnerlab.org/papers/2019/z2end2end/teaser.jpg" /></p><b>A Survey of 3D Indoor Scene Synthesis (2020) </b> [Paper]
<p align="center"><img width="60%" src="https://github.com/julyrashchenko/3D-Machine-Learning/blob/master/imgs/A%20Survey%20of%203D%20Indoor%20Scene%20Synthesis.jpg" /></p><b>:pill: :camera: PlanIT: Planning and Instantiating Indoor Scenes with Relation Graph and Spatial Prior Networks (2019) </b> [Paper] [Code]
<p align="center"><img src="imgs/PlanIT.jpg"></p><b>:space_invader: Feature-metric Registration: A Fast Semi-Supervised Approach for Robust Point Cloud Registration without Correspondences (CVPR 2020)</b> [Paper][Code]
<p align="center"><img width="50%" src="https://github.com/XiaoshuiHuang/xiaoshuihuang.github.io/blob/master/research/2020-feature-metric.png?raw=true" /></p><b>:pill: Human-centric metrics for indoor scene assessment and synthesis (2020) </b> [Paper]
<p align="center"><img width="60%" src="imgs/Human-centric%20metrics%20for%20indoor%20scene%20assessment%20and%20synthesis.jpg" /></p><b> SceneCAD: Predicting Object Alignments and Layouts in RGB-D Scans (2020) </b> [Paper]
<p align="center"><img width="60%" src="imgs/SceneCAD.jpg" /></p> <a name="scene_understanding" />Scene Understanding (Another more detailed repository)
<b>Recovering the Spatial Layout of Cluttered Rooms (2009)</b> [Paper]
<p align="center"><img width="60%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Recovering%20the%20Spatial%20Layout%20of%20Cluttered%20Rooms.png" /></p><b>Characterizing Structural Relationships in Scenes Using Graph Kernels (2011 SIGGRAPH)</b> [Paper]
<p align="center"><img width="60%" src="https://graphics.stanford.edu/~mdfisher/papers/graphKernelTeaser.png" /></p><b>Understanding Indoor Scenes Using 3D Geometric Phrases (2013)</b> [Paper]
<p align="center"><img width="30%" src="http://cvgl.stanford.edu/projects/3dgp/images/title.png" /></p><b>Organizing Heterogeneous Scene Collections through Contextual Focal Points (2014 SIGGRAPH)</b> [Paper]
<p align="center"><img width="60%" src="http://kevinkaixu.net/projects/focal/overlapping_clusters.jpg" /></p><b>SceneGrok: Inferring Action Maps in 3D Environments (2014, SIGGRAPH)</b> [Paper]
<p align="center"><img width="50%" src="http://graphics.stanford.edu/projects/scenegrok/scenegrok.png" /></p><b>PanoContext: A Whole-room 3D Context Model for Panoramic Scene Understanding (2014)</b> [Paper]
<p align="center"><img width="50%" src="http://panocontext.cs.princeton.edu/teaser.jpg" /></p><b>Learning Informative Edge Maps for Indoor Scene Layout Prediction (2015)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Learning%20Informative%20Edge%20Maps%20for%20Indoor%20Scene%20Layout%20Prediction.png" /></p><b>Rent3D: Floor-Plan Priors for Monocular Layout Estimation (2015)</b> [Paper]
<p align="center"><img width="50%" src="http://www.cs.toronto.edu/~fidler/projects/layout-res.jpg" /></p><b>A Coarse-to-Fine Indoor Layout Estimation (CFILE) Method (2016)</b> [Paper]
<p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/A%20Coarse-to-Fine%20Indoor%20Layout%20Estimation%20(CFILE)%20Method%20(2016).png" /></p><b>DeLay: Robust Spatial Layout Estimation for Cluttered Indoor Scenes (2016)</b> [Paper]
<p align="center"><img width="30%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/DeLay-Robust%20Spatial%20Layout%20Estimation%20for%20Cluttered%20Indoor%20Scenes.png" /></p><b>3D Semantic Parsing of Large-Scale Indoor Spaces (2016)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://buildingparser.stanford.edu/images/teaser.png" /></p><b>Single Image 3D Interpreter Network (2016)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://3dinterpreter.csail.mit.edu/images/spotlight_3dinn_large.jpg" /></p><b>Deep Multi-Modal Image Correspondence Learning (2016)</b> [Paper]
<p align="center"><img width="50%" src="http://art-programmer.github.io/floorplan-matching/teaser.png" /></p><b>Physically-Based Rendering for Indoor Scene Understanding Using Convolutional Neural Networks (2017)</b> [Paper] [Code] [Code] [Code] [Code]
<p align="center"><img width="50%" src="https://pbs.twimg.com/media/C0YERJOXEAA69xN.jpg" /></p><b>RoomNet: End-to-End Room Layout Estimation (2017)</b> [Paper]
<p align="center"><img width="50%" src="https://pbs.twimg.com/media/C7Z29GsV0AASEvR.jpg" /></p><b>SUN RGB-D: A RGB-D Scene Understanding Benchmark Suite (2017)</b> [Paper]
<p align="center"><img width="50%" src="http://rgbd.cs.princeton.edu/teaser.jpg" /></p><b>Semantic Scene Completion from a Single Depth Image (2017)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://sscnet.cs.princeton.edu/teaser.jpg" /></p><b>Factoring Shape, Pose, and Layout from the 2D Image of a 3D Scene (2018 CVPR)</b> [Paper] [Code]
<p align="center"><img width="50%" src="https://shubhtuls.github.io/factored3d/resources/images/teaser.png" /></p><b>LayoutNet: Reconstructing the 3D Room Layout from a Single RGB Image (2018 CVPR)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://p0.ifengimg.com/pmop/2018/0404/A1D0CAE48130C918FE624FA60495F237C67172F6_size63_w797_h755.jpeg" /></p><b>PlaneNet: Piece-wise Planar Reconstruction from a Single RGB Image (2018 CVPR)</b> [Paper] [Code]
<p align="center"><img width="50%" src="http://art-programmer.github.io/images/planenet.png" /></p><b>Cross-Domain Self-supervised Multi-task Feature Learning using Synthetic Imagery (2018 CVPR)</b> [Paper] <p align="center"><img width="50%" src="https://jason718.github.io/project/cvpr18/files/concept_pic.png" /></p>
<b>Pano2CAD: Room Layout From A Single Panorama Image (2018 CVPR)</b> [Paper] <p align="center"><img width="50%" src="https://www.groundai.com/media/arxiv_projects/58924/figures/Compare_2b.png" /></p>
<b>Automatic 3D Indoor Scene Modeling from Single Panorama (2018 CVPR)</b> [Paper] <p align="center"><img width="50%" src="https://github.com/timzhang642/3D-Machine-Learning/blob/master/imgs/Automatic%203D%20Indoor%20Scene%20Modeling%20from%20Single%20Panorama%20(2018%20CVPR).jpeg" /></p>
<b>Single-Image Piece-wise Planar 3D Reconstruction via Associative Embedding (2019 CVPR)</b> [Paper] [Code] <p align="center"><img width="50%" src="https://github.com/svip-lab/PlanarReconstruction/blob/master/misc/pipeline.jpg" /></p>
<b>3D-Aware Scene Manipulation via Inverse Graphics (NeurIPS 2018)</b> [Paper] [Code] <p align="center"><img width="50%" src="http://3dsdn.csail.mit.edu/images/teaser.png" /></p>
:gem: <b>3D Scene Reconstruction with Multi-layer Depth and Epipolar Transformers (ICCV 2019)</b> [Paper] <p align="center"><img width="50%" src="https://research.dshin.org/iccv19/multi-layer-depth/figures/overview_1.png" /><br><img width="50%" src="https://research.dshin.org/iccv19/multi-layer-depth/figures/voxelization00.jpg" /></p>
<b>PerspectiveNet: 3D Object Detection from a Single RGB Image via Perspective Points (NeurIPS 2019)</b> [Paper] <p align="center"><img width="50%" src="https://storage.googleapis.com/groundai-web-prod/media/users/user_288036/project_402358/images/x1.png" /></p>
<b>Holistic++ Scene Understanding: Single-view 3D Holistic Scene Parsing and Human Pose Estimation with Human-Object Interaction and Physical Commonsense (ICCV 2019)</b> [Paper & Code] <p align="center"><img width="50%" src="https://yixchen.github.io/holisticpp/file/pg.png" /></p>