# GenDR - The Generalized Differentiable Renderer

Official implementation for our CVPR 2022 paper "GenDR: A Generalized Differentiable Renderer".

Paper @ ArXiv, Video @ YouTube.
## 💻 Installation

`gendr` can be installed via pip from PyPI with

```shell
pip install gendr
```
⚠️ Note that `gendr` requires CUDA, the CUDA Toolkit (for compilation), and `torch>=1.9.0` (matching the CUDA version).
Alternatively, GenDR may be installed from source, e.g., in a virtual environment like

```shell
virtualenv -p python3 .env1
. .env1/bin/activate
pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html
pip install .
```
Make sure that the CUDA version of PyTorch (e.g., `cu111` for CUDA 11.1) matches the locally installed CUDA version.
However, on some machines, compilation works only with specific sub-versions that may differ from the local sub-version, so a potential quick fix is to try different combinations of PyTorch version and CUDA sub-version.
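
Before compiling from source, it can help to verify that PyTorch sees a GPU and which CUDA version it was built against. A minimal check using only standard PyTorch calls:

```python
import torch

# PyTorch build and the CUDA version it was compiled against
# (e.g., '1.9.0+cu111' and '11.1').
print(torch.__version__)
print(torch.version.cuda)

# True only if a CUDA device is visible; building gendr's CUDA extension
# additionally requires the CUDA Toolkit (nvcc) to be installed locally.
print(torch.cuda.is_available())
```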
## 👩‍💻 Documentation
A differentiable renderer may be defined as follows
import gendr
diff_renderer = gendr.GenDR(
image_size=256,
dist_func='uniform',
dist_scale=0.01,
dist_squared=False,
aggr_alpha_func='probabilistic',
aggr_rgb_func='hard',
)
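
As a further, purely illustrative example, a renderer that combines a Gaussian occlusion test with the Yager T-conorm, which takes its shape parameter via `aggr_alpha_t_conorm_p`, could be configured as follows; the parameter values here are assumptions, not recommendations:

```python
import gendr

# Hypothetical configuration using only the constructor arguments listed below.
yager_renderer = gendr.GenDR(
    image_size=256,
    dist_func='gaussian',        # Gaussian occlusion test
    dist_scale=1e-2,             # tau in the paper
    aggr_alpha_func='yager',     # Yager T-conorm for aggregation
    aggr_alpha_t_conorm_p=2.0,   # assumed shape parameter value
)
```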
In the following, we provide the entire set of arguments of `GenDR`.
The most important parameters are marked in bold.
For the essential parameters `dist_func` and `aggr_alpha_func`, we give a set of options.
For a reference, see the paper; a small conceptual sketch of these two ingredients follows after the argument list.
- **`image_size`** the size of the rendered image (default: 256)
- `background_color` (default: [0, 0, 0])
- `anti_aliasing` render the image at 2x the resolution and average down to reduce aliasing (default: False)
- **`dist_func`** the distribution used for the differentiable occlusion test (default: uniform)
  - `hard` hard, non-differentiable rendering, Dirac delta distribution, Heaviside function (alias `heaviside`)
  - `uniform` uniform distribution
  - `cubic_hermite` Cubic-Hermite sigmoid function
  - `wigner_semicircle` Wigner semicircle distribution
  - `gaussian` Gaussian distribution
  - `laplace` Laplace distribution
  - `logistic` logistic distribution
  - `gudermannian` Gudermannian function, hyperbolic secant distribution (alias `hyperbolic_secant`)
  - `cauchy` Cauchy distribution
  - `reciprocal` reciprocal sigmoid function
  - `gumbel_max` Gumbel-max distribution
  - `gumbel_min` Gumbel-min distribution
  - `exponential` exponential distribution
  - `exponential_rev` exponential distribution (reversed / mirrored)
  - `gamma` gamma distribution
  - `gamma_rev` gamma distribution (reversed / mirrored)
  - `levy` Levy distribution
  - `levy_rev` Levy distribution (reversed / mirrored)
- **`dist_scale`** the scale parameter of the distribution, tau in the paper (default: 1e-2)
- `dist_squared` optionally, use the square-root distribution of `dist_func` (default: False)
- `dist_shape` for some distributions, we need a shape parameter (default: None)
- `dist_shift` for some distributions, we need an optional shift parameter (default: None or 0)
- `dist_eps` pixels further away than `dist_scale * dist_eps` are ignored for performance reasons (default: 1e4)
- **`aggr_alpha_func`** the T-conorm used to aggregate occlusion values (default: probabilistic)
  - `hard` to be used with `dist_func='hard'`
  - `max` maximum T-conorm
  - `probabilistic` probabilistic T-conorm
  - `einstein` Einstein sum T-conorm
  - `hamacher` Hamacher T-conorm
  - `frank` Frank T-conorm
  - `yager` Yager T-conorm
  - `aczel_alsina` Aczel-Alsina T-conorm
  - `dombi` Dombi T-conorm
  - `schweizer_sklar` Schweizer-Sklar T-conorm
- `aggr_alpha_t_conorm_p` for some T-conorms, we need a shape parameter (default: None)
- `aggr_rgb_func` (default: softmax)
- `aggr_rgb_eps` (default: 1e-3)
- `aggr_rgb_gamma` (default: 1e-3)
- `near` value for the viewing frustum (default: 1)
- `far` value for the viewing frustum (default: 100)
- `double_side` render all faces from both sides (default: False)
- `texture_type` type of texture sampling (default: surface; options: surface, vertex)
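
To make the roles of `dist_func` and `aggr_alpha_func` more concrete, the following is a small, self-contained sketch of the two underlying ideas (it is not the library's internal implementation): the occlusion test replaces the Heaviside step by the CDF of the chosen distribution (here uniform, with scale `tau`), and the per-face occlusion values of a pixel are aggregated with a T-conorm (here the probabilistic T-conorm):

```python
import torch

def uniform_occlusion(d, tau=1e-2):
    # CDF of a uniform distribution on [-tau, tau]: a piecewise-linear
    # relaxation of the Heaviside step used in hard rasterization.
    # d is the signed distance of the pixel to a triangle boundary
    # (positive inside, negative outside in this sketch).
    return torch.clamp(d / (2 * tau) + 0.5, 0.0, 1.0)

def probabilistic_t_conorm(alphas):
    # Aggregates per-face occlusion values alpha_i in [0, 1] into a single
    # pixel alpha via the probabilistic T-conorm: 1 - prod_i (1 - alpha_i).
    return 1.0 - torch.prod(1.0 - alphas, dim=0)

# Toy example: three faces covering one pixel at different signed distances.
signed_distances = torch.tensor([0.004, -0.001, 0.02])
alphas = uniform_occlusion(signed_distances, tau=1e-2)
pixel_alpha = probabilistic_t_conorm(alphas)
print(alphas, pixel_alpha)
```

Because both building blocks are differentiable (almost everywhere), gradients can flow from the rendered image back to the scene parameters, which is what the experiments below rely on.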
## 🧪 Experiments
### 🖼 Shape Optimization (`opt_shape.py`)

```shell
python experiments/opt_shape.py -sq --gif
```
### 📽 Camera Pose Optimization (`opt_camera.py`)

```shell
python experiments/opt_camera.py -sq --gif
```
### ⚙️ Single-View 3D Reconstruction (`train_reconstruction.py`)

Optimal default parameters for `--dist_scale` are automatically used in the script for the set of distributions and T-conorms that are benchmarked on this task in the paper.

```shell
python experiments/train_reconstruction.py --distribution uniform --t_conorm probabilistic
```
## 📖 Citing
```bibtex
@inproceedings{petersen2022gendr,
  title={{GenDR: A Generalized Differentiable Renderer}},
  author={Petersen, Felix and Goldluecke, Bastian and Borgelt, Christian and Deussen, Oliver},
  booktitle={IEEE/CVF International Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}
```
## License

`gendr` is released under the MIT license. See LICENSE for additional details.