LatentSwap3D: Semantic Edits on 3D Image GANs

<a href="https://enis.dev/latentswap3d/"><img src="https://img.shields.io/static/v1?label=Project&message=Website&color=red" height=20.5></a> <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg"></a> Edit In Colab Edit Real Face In Colab

3D GANs have the ability to generate latent codes for entire 3D volumes rather than only 2D images. These models offer desirable features like high-quality geometry and multi-view consistency, but, unlike their 2D counterparts, complex semantic image editing tasks for 3D GANs have only been partially explored. To address this problem, we propose LatentSwap3D, a semantic edit approach based on latent space discovery that can be used with any off-the-shelf 3D or 2D GAN model and on any dataset. LatentSwap3D relies on identifying the latent code dimensions corresponding to specific attributes by feature ranking using a random forest classifier. It then performs the edit by swapping the selected dimensions of the image being edited with the ones from an automatically selected reference image. Compared to other latent space control-based edit methods, which were mainly designed for 2D GANs, our method on 3D GANs provides remarkably consistent semantic edits in a disentangled manner and outperforms others both qualitatively and quantitatively. We show results on seven 3D GANs (pi-GAN, GIRAFFE, StyleSDF, MVCGAN, EG3D, StyleNeRF, and VolumeGAN) and on five datasets (FFHQ, AFHQ, Cats, MetFaces, and CompCars).
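In a nutshell, the method ranks latent dimensions by how much they carry a target attribute and then swaps the top-ranked dimensions with those of a reference code. The snippet below is only a minimal, self-contained sketch of that idea, not the repository's implementation; the array shapes, the binary attribute labels, the choice of k, and the scikit-learn hyperparameters are all illustrative assumptions.

# Minimal sketch of the LatentSwap3D idea (illustrative only).
# Assumes `latents` is an (N, D) array of sampled latent codes and `labels`
# holds a binary attribute prediction per sample (e.g. smiling vs. not smiling).

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rank_dimensions(latents, labels):
    """Rank latent dimensions by their importance for the target attribute."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(latents, labels)
    # Higher feature importance -> dimension carries more attribute information.
    return np.argsort(clf.feature_importances_)[::-1]

def swap_edit(source, reference, ranking, k):
    """Copy the top-k attribute-relevant dimensions from a reference latent code."""
    edited = source.copy()
    top_k = ranking[:k]
    edited[top_k] = reference[top_k]
    return edited

# Hypothetical usage, assuming latents and labels come from the generation
# and attribute-prediction steps described below:
# ranking = rank_dimensions(latents, labels)
# edited_code = swap_edit(latents[0], latents[42], ranking, k=64)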

Getting Started

Installation

$ git clone --recurse-submodules -j8 git@github.com:enisimsar/latentswap3d.git

Install the dependencies from env.yml and activate the environment:

$ conda env create -f env.yml
$ conda activate latent-3d

Quickstart

For a quick demo, see DEMO.

Hydra Usage

The repository uses the Hydra framework to manage experiments. We provide seven main experiments; the MVCGAN example below walks through a typical pipeline.

Hydra writes experiment results under the outputs folder.

Example for MVCGAN

# 1. Sample latent codes and generate images with the MVCGAN FFHQ generator
python gen.py hparams.batch_size=1 num_samples=10000 generator=mvcgan generator.class_name=FFHQ
# Output directory created by gen.py (the date suffix will match your own run)
OUTPUT_PATH=outputs/run/src.generators.MVCGANGenerator/FFHQ/2022-11-23
# 2. Predict semantic attributes for the generated samples
python predict.py hparams.batch_size=50 load_path=$OUTPUT_PATH generator=mvcgan generator.class_name=FFHQ
# 3. Compute DCI (disentanglement) scores over the latent dimensions
python dci.py load_path=$OUTPUT_PATH generator=mvcgan generator.class_name=FFHQ
# 4. Find the attribute-relevant latent dimensions (feature ranking)
python find.py load_path=$OUTPUT_PATH generator=mvcgan generator.class_name=FFHQ
# 5. Tune the manipulation (e.g. how many dimensions to swap)
python tune.py load_path=$OUTPUT_PATH generator=mvcgan generator.class_name=FFHQ
# 6. Apply the semantic edits
python manipulate.py load_path=$OUTPUT_PATH generator=mvcgan generator.class_name=FFHQ
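The same pipeline applies to the other supported generators and datasets by changing the generator and generator.class_name overrides. For example, a hypothetical first step with EG3D on FFHQ might look like the line below; the exact config group and class names are assumptions, so check the configuration files in the repository.

python gen.py hparams.batch_size=1 num_samples=10000 generator=eg3d generator.class_name=FFHQ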

Citation

If you use this code for your research, please cite our paper:

@InProceedings{Simsar_2023_ICCV,
    author    = {Simsar, Enis and Tonioni, Alessio and Ornek, Evin Pinar and Tombari, Federico},
    title     = {LatentSwap3D: Semantic Edits on 3D Image GANs},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2023},
    pages     = {2899-2909}
}