# RoomTex

This is the implementation of RoomTex: Texturing Compositional Indoor Scenes via Iterative Inpainting.
Project Page | Paper
<div align=center> <img src="teaser.jpg" width="100%"/> </div>

## Installation
Tested on A100 and V100 GPUs. If you run out of GPU memory, you can reduce `batch_size` and `batch_count` in the config files.
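For illustration, a reduced setting might look like this (the key names `batch_size` and `batch_count` come from the configs; the exact values and file layout here are assumptions):

```yaml
# Hypothetical excerpt of a config under demo/configs/ --
# smaller values use less GPU memory at the cost of speed.
batch_size: 2
batch_count: 2
```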
```
conda create -n RoomTex python=3.8
conda activate RoomTex
pip install -r requirements.txt
```
Other versions of Python and PyTorch should also work fine.
## Quickstart

### Stable Diffusion and ControlNet
Deploy stable-diffusion-webui (i.e. clone it into a folder named `stable-diffusion-webui`).
Download the SDXL models:
Base model | Refiner model | VAE model | SDXL ControlNet depth model
Modify the SDXL webui code as described in `sdxl/modify_code/readme.txt`.
Then run stable-diffusion-webui in nowebui mode:

```
CUDA_VISIBLE_DEVICES=0 bash webui.sh --nowebui --port 7860
```
You can choose the GPU device with `CUDA_VISIBLE_DEVICES` and the port with `--port`. The pipeline then accesses Stable Diffusion through this port.
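The RoomTex scripts talk to this port for you, but as a quick sanity check you can hit the service's txt2img API directly. The endpoint and payload fields below are the standard stable-diffusion-webui API, not something RoomTex-specific; the URL assumes the `--port 7860` launch above:

```python
import json
import urllib.request

WEBUI_URL = "http://127.0.0.1:7860"  # matches the --port used for webui.sh


def build_txt2img_payload(prompt, steps=20, width=1024, height=1024):
    """Minimal JSON payload for the webui's /sdapi/v1/txt2img endpoint."""
    return {"prompt": prompt, "steps": steps, "width": width, "height": height}


def txt2img(prompt):
    """POST a prompt to the running webui; returns base64-encoded images."""
    data = json.dumps(build_txt2img_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{WEBUI_URL}/sdapi/v1/txt2img",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["images"]
```

If the request returns a JSON body with an `images` list, the service is up and reachable.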
### Room mesh
Example room meshes are provided in the `demo/objects/livingroom/` folder.
Object meshes can also be generated with the script from Shap-E.
Generate an empty room mesh from a sketch:

```
python utils/mesh/gene_room.py --cfg demo/configs/livingroom.yaml
```

The mesh is saved in `demo/objects/`. To use a generated room, modify the config file to add the room mesh path under `room_mesh_path` and `boundary_mesh_path`.
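The config update might look like this (the key names `room_mesh_path` and `boundary_mesh_path` are from the step above; the file names are hypothetical):

```yaml
# Hypothetical config entries pointing at a generated room.
room_mesh_path: demo/objects/livingroom_room.obj
boundary_mesh_path: demo/objects/livingroom_boundary.obj
```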
Room meshes from the 3D-FRONT dataset can also be used.
Prepare the panorama and the object inpainting-view depths of the scene:

```
python scripts/prepare_depth.py --cfg demo/configs/livingroom.yaml
```

Results are saved in `config['save_path']`.
Generate the panorama of the scene (`--port` is the SD webui port):

```
python gene_img/pano/pano_text2img.py --cfg demo/configs/livingroom.yaml --port 7860
```

Panoramas are saved in `config['save_path'] + '/pano/image'`. Choose one and modify the config file to add its path under `pano_all_2K`.
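For example (the key name `pano_all_2K` is from the step above; the file name and `<save_path>` placeholder are hypothetical):

```yaml
# Hypothetical: point the config at the panorama you picked.
pano_all_2K: <save_path>/pano/image/pano_0.png
```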
Refine the room panorama:

```
python gene_img/pano/refine_pano.py --cfg demo/configs/livingroom.yaml --port 7860
```
Reproject the panorama to the initial perspective images:

```
python scripts/prepare_pers.py --cfg demo/configs/livingroom.yaml
```
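Conceptually, reprojection maps each perspective pixel's view ray to equirectangular panorama coordinates. A minimal sketch of that mapping (the axis conventions here are an assumption for illustration, not necessarily the ones RoomTex uses):

```python
import numpy as np


def pano_uv_from_dirs(dirs):
    """Map unit view directions (N, 3) to equirectangular UVs in [0, 1).

    Assumed convention: +z is the panorama's center azimuth, +y is up.
    """
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    u = (np.arctan2(x, z) / (2.0 * np.pi)) % 1.0   # azimuth -> horizontal coord
    v = np.arccos(np.clip(y, -1.0, 1.0)) / np.pi   # polar angle -> vertical coord
    return np.stack([u, v], axis=1)
```

Under this convention, the forward direction `(0, 0, 1)` lands at the horizontal center of the panorama.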
Generate objects by iterative inpainting:

```
python scripts/iterative_gene.py --cfg demo/configs/livingroom.yaml --port 7860 --id 0
python scripts/adornment_refine.py --cfg demo/configs/livingroom.yaml --port 7860 --id 0
```

`--port` is the SD webui port; `--id` is the object id.
If you have multiple GPUs, you can run this step in parallel. For example, first start N webui services:

```
CUDA_VISIBLE_DEVICES=0 bash webui.sh --nowebui --port 7860
...
CUDA_VISIBLE_DEVICES=N bash webui.sh --nowebui --port N
```

Then run iterative inpainting in parallel, e.g.:

```
python scripts/iterative_gene.py --cfg demo/configs/livingroom.yaml --port N --id n
...
```
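The per-GPU launches can also be scripted. This sketch only prints the commands it would run (the port numbering `7860 + i` and the one-object-per-service pairing are assumptions; adjust both to your setup and append `&` to launch for real):

```shell
# Dry run: print one iterative_gene.py command per running webui service.
NUM_GPUS=4   # hypothetical number of webui services already started
i=0
while [ "$i" -lt "$NUM_GPUS" ]; do
  PORT=$((7860 + i))
  echo "python scripts/iterative_gene.py --cfg demo/configs/livingroom.yaml --port $PORT --id $i &"
  i=$((i + 1))
done
```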
Render figures of the scene (camera poses need to be set):

```
python scripts/render/render.py
```

Figures are saved in `config['save_path'] + '/Figure'`.
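A common way to specify a camera pose is a look-at camera-to-world matrix. The sketch below builds one (the pose format `render.py` actually expects is not documented here, so treat this purely as an illustration of constructing poses):

```python
import numpy as np


def look_at_pose(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a 4x4 camera-to-world matrix looking from `eye` toward `target`.

    Uses an OpenGL-style camera (camera looks down its local -z axis).
    """
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2] = right, true_up, -forward
    pose[:3, 3] = eye
    return pose
```

The rotation block is orthonormal by construction, and the last column holds the camera position.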
## Citation
```
@article{wang2024roomtex,
  title={RoomTex: Texturing Compositional Indoor Scenes via Iterative Inpainting},
  author={Qi Wang and Ruijie Lu and Xudong Xu and Jingbo Wang and Michael Yu Wang and Bo Dai and Gang Zeng and Dan Xu},
  year={2024},
  eprint={2406.02461},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```