# TeethDreamer: 3D Teeth Reconstruction from Five Intra-oral Photographs
Chenfan Xu*, Zhentao Liu*, Yuan Liu, Yulong Dou, Jiamin Wu, Jiepeng Wang, Minjiao Wang, Dinggang Shen, and Zhiming Cui<sup>+</sup>.
[Paper] | [Project Page]
- Inference code and pretrained models.
- Training code.
## Demo
- Five intra-oral photos in `example/oral`.
- Segmented teeth images in `example/teeth`.
- Generated images and reconstruction results.
## Getting started
- Install the packages in `requirements.txt`. We test our model on a 40 GB A100 GPU with CUDA 11.6 and PyTorch 1.12.0.
```
conda create -n TeethDreamer
conda activate TeethDreamer
pip install torch==1.12.0+cu116 torchvision==0.13.0+cu116 torchaudio==0.12.0 --extra-index-url https://download.pytorch.org/whl/cu116
pip install -r requirements.txt
```
- Download the pretrained model checkpoints.
## Inference
- Make sure you have the following models.
```
SyncDreamer
|-- ckpt
    |-- ViT-L-14.ckpt
    |-- TeethDreamer.ckpt
    |-- zero123-xl.ckpt
    |-- sam_vit_b_01ec64.pth
```
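If you want a quick way to verify the download, the optional sketch below (not part of the repository's scripts) simply checks that the files listed above exist under `ckpt/`.
```python
# check_ckpts.py -- optional helper, not part of the official repo scripts.
# Verifies that the checkpoint files listed above are present under ckpt/.
from pathlib import Path

REQUIRED = [
    "ViT-L-14.ckpt",
    "TeethDreamer.ckpt",
    "zero123-xl.ckpt",
    "sam_vit_b_01ec64.pth",
]

missing = [name for name in REQUIRED if not (Path("ckpt") / name).exists()]
if missing:
    print("Missing checkpoints:", ", ".join(missing))
else:
    print("All checkpoints found in ckpt/")
```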
- Segment the upper and lower teeth from your five intra-oral photos. You can use our script `seg_teeth.py` from the command line. Before that, you need to number your five intra-oral images 0~4, corresponding to the anterior view, left buccal view, right buccal view, maxillary occlusal view, and mandibular occlusal view. Then you can manually segment the upper and lower teeth by clicking the left mouse button on the target area within the interactive interface created by our script.
Tip: You can refer to the image files in the `example` folder.
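If your photos come with arbitrary filenames, a small helper along the lines of the sketch below (hypothetical, not part of the repository) can copy them to numbered files in the required view order; the source filenames are placeholders, and the exact naming expected by `seg_teeth.py` should be checked against the files in `example/oral`.
```python
# rename_views.py -- hypothetical helper, not part of the official repo scripts.
# Copies five intra-oral photos to files numbered 0-4 in the view order
# described above: anterior, left buccal, right buccal, maxillary occlusal,
# and mandibular occlusal. Check example/oral for the exact naming convention.
import shutil
from pathlib import Path

# Placeholder source filenames -- replace them with your actual photo names.
VIEW_ORDER = [
    "anterior.jpg",            # -> 0.jpg
    "left_buccal.jpg",         # -> 1.jpg
    "right_buccal.jpg",        # -> 2.jpg
    "maxillary_occlusal.jpg",  # -> 3.jpg
    "mandibular_occlusal.jpg", # -> 4.jpg
]

def renumber(src_dir: str, dst_dir: str, suffix: str = "jpg") -> None:
    """Copy the five photos into dst_dir as 0.<suffix> ... 4.<suffix>."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for idx, name in enumerate(VIEW_ORDER):
        shutil.copy(Path(src_dir) / name, dst / f"{idx}.{suffix}")

if __name__ == "__main__":
    renumber("my_photos", "my_oral_dir")
```
The destination directory can then be passed to `seg_teeth.py` via `--img`.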
```
python seg_teeth.py --img directory/of/your/intra-oral/images \
    --seg directory/to/store/segmented/images \
    --suffix suffix/of/your/image/files
```
Tip: You need to segment the upper teeth for the first four intra-oral images and then the lower teeth for the last four images. If unexpected regions are segmented, you can click the right mouse button to mark the irrelevant area.
- Make sure you have the following file structure.
```
|-- your_seg_dir
    |-- XXX_norm_lower
    |-- XXX_norm_upper
```
- Generate color and normal images from eight viewpoints.
```
python TeethDreamer.py -b configs/TeethDreamer.yaml \
    --gpus 0 \
    --test ckpt/TeethDreamer.ckpt \
    --output directory/to/store/generated/images \
    data.params.test_dir=directory/of/segmented/images
```
- (Optional) Manually segment the foreground of the generated image, which is necessary for NeuS (the foreground mask automatically generated by the `rembg` package may sometimes be wrong).
```
python seg_foreground.py --img path/to/your/generated/image \
    --seg path/to/your/segmented/image
```
- Reconstruct the tooth model from the generated images with NeuS.
```
cd instant-nsr-pl
python run.py --img ../example/results/generation/1832_upper_cond_000_000_000_000.png \
    --cpu 4 \
    --dir ../example/results/reconstruction/ \
    --normal \
    --rembg
```
Explanation:
- `--img` is the path to your generated image.
- `--cpu` is the number of CPU cores to use.
- `--dir` is the directory to store the reconstruction.
- `--normal` indicates the generation includes normal images.
- `--rembg` indicates background removal (the foreground mask is necessary here).
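If you need to reconstruct several cases, a simple loop such as the sketch below (hypothetical, not part of the repository) can invoke `run.py` once per generated image; it only reuses the flags documented above and assumes the generated files follow the `*_cond_*.png` naming seen in the example.
```python
# batch_recon.py -- hypothetical helper, not part of the official repo scripts.
# Runs instant-nsr-pl/run.py once per generated image, reusing the flags
# documented above. Assumes the generated files follow the *_cond_*.png pattern.
import subprocess
from pathlib import Path

GEN_DIR = Path("../example/results/generation")      # generated images
OUT_DIR = Path("../example/results/reconstruction")  # reconstruction output

for img in sorted(GEN_DIR.glob("*_cond_*.png")):
    subprocess.run(
        ["python", "run.py",
         "--img", str(img),
         "--cpu", "4",
         "--dir", str(OUT_DIR),
         "--normal",
         "--rembg"],
        check=True,
    )
```
Run it from inside `instant-nsr-pl`, as in the command above, so the relative paths resolve.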
## Data Preparation for Training
1. Make sure you have normalized tooth models that are segmented from intra-oral scan models.
2. Render color and normal images with the Blender scripts. We test the following scripts on Windows with Blender 4.0.1.
```
blender --background --python normal_render.py -- --object_path path/to/your/tooth/model --target_dir directory/to/store/rendered/normal/images --input_dir directory/to/store/rendered/condition/images

blender --background --python color_render.py -- --object_path path/to/your/tooth/model --target_dir directory/to/store/rendered/color/images --input_dir directory/to/store/rendered/condition/images
```
Explanation:
- `normal_render.py` only renders target normal images with fixed viewpoints set by the `view16` dictionary.
- `color_render.py` renders condition images corresponding to segmented intra-oral photos taken by dentists, as well as target color images.
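To render a whole dataset, you can drive the two Blender scripts from a small loop such as the sketch below (hypothetical, not part of the repository); it assumes `blender` is on your `PATH` and that your tooth models are stored as `.obj` files, so adjust both to your setup.
```python
# batch_render.py -- hypothetical helper, not part of the official repo scripts.
# Calls the two Blender scripts above once per tooth model. Assumes `blender`
# is on PATH and that the models are stored as .obj files; adjust to your setup.
import subprocess
from pathlib import Path

MODEL_DIR = Path("path/to/your/tooth/models")
NORMAL_DIR = "directory/to/store/rendered/normal/images"
COLOR_DIR = "directory/to/store/rendered/color/images"
INPUT_DIR = "directory/to/store/rendered/condition/images"

for model in sorted(MODEL_DIR.glob("*.obj")):
    for script, target_dir in [("normal_render.py", NORMAL_DIR),
                               ("color_render.py", COLOR_DIR)]:
        subprocess.run(
            ["blender", "--background", "--python", script, "--",
             "--object_path", str(model),
             "--target_dir", target_dir,
             "--input_dir", INPUT_DIR],
            check=True,
        )
```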
3. Make sure you have a `pkl` file that contains a dictionary with `train` and `val` keys, whose corresponding lists contain case IDs such as `XXX_norm_lower` and `XXX_norm_upper`.
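A minimal sketch for creating such a file is shown below; the case IDs and the split ratio are placeholders, and the only assumption taken from this step is a dictionary with `train` and `val` keys mapping to lists of case IDs.
```python
# make_splits.py -- minimal sketch for creating the splits pkl described above.
# The case IDs and the 90/10 split ratio below are placeholders.
import pickle
import random

# Collect your case IDs, e.g. "1832_norm_upper", "1832_norm_lower", ...
case_ids = ["1832_norm_upper", "1832_norm_lower"]  # replace with your own IDs

random.seed(0)
random.shuffle(case_ids)
n_val = max(1, int(0.1 * len(case_ids)))  # hold out roughly 10% for validation

splits = {"train": case_ids[n_val:], "val": case_ids[:n_val]}

with open("mv-splits.pkl", "wb") as f:
    pickle.dump(splits, f)
```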
4. Check that your directory of rendered data has the following structure.
```
Data
|-- target
|-- normal
|-- input
|-- mv-splits.pkl
```
Explanation:
- The `target` folder is the directory of your rendered color images, i.e. the `target_dir` argument of `color_render.py`.
- The `normal` folder is the directory of your rendered normal images, i.e. the `target_dir` argument of `normal_render.py`.
- The `input` folder is the directory of your rendered condition images, i.e. the `input_dir` argument of `color_render.py`.
- The `mv-splits.pkl` file is the `pkl` file mentioned in the previous step.
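Before launching finetuning, an optional check like the sketch below (hypothetical, not part of the repository) can confirm that the layout above is in place and that the `pkl` file loads with the expected keys.
```python
# check_data.py -- hypothetical helper, not part of the official repo scripts.
# Confirms the Data/ layout above exists and the splits pkl has train/val keys.
import pickle
from pathlib import Path

data_dir = Path("Data")  # adjust to your rendering-data directory

for sub in ("target", "normal", "input"):
    assert (data_dir / sub).is_dir(), f"missing folder: {data_dir / sub}"

with open(data_dir / "mv-splits.pkl", "rb") as f:
    splits = pickle.load(f)

assert {"train", "val"} <= set(splits), "pkl must contain 'train' and 'val' keys"
print(f"{len(splits['train'])} training cases, {len(splits['val'])} validation cases")
```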
5. Finetune the pretrained zero123 model on your own data.
```
python TeethDreamer.py -b configs/TeethDreamer.yaml \
    --gpus 0 \
    --finetune_from ckpt/zero123-xl.ckpt \
    data.target_dir=path/to/your/target/folder \
    data.input_dir=path/to/your/input/folder \
    data.uid_set_pkl=path/to/your/pkl/file \
    data.validation_dir=path/to/your/input/folder
```
## Acknowledgement
We have intensively borrowed code from the following repositories. Many thanks to the authors for sharing their code.
## Citation
If you find this repository useful in your project, please cite the following work. :)
```
@InProceedings{10.1007/978-3-031-72104-5_68,
author="Xu, Chenfan and Liu, Zhentao and Liu, Yuan and Dou, Yulong and Wu, Jiamin and Wang, Jiepeng and Wang, Minjiao and Shen, Dinggang and Cui, Zhiming",
editor="Linguraru, Marius George and Dou, Qi and Feragen, Aasa and Giannarou, Stamatia and Glocker, Ben and Lekadir, Karim and Schnabel, Julia A.",
title="TeethDreamer: 3D Teeth Reconstruction from Five Intra-Oral Photographs",
booktitle="Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024",
year="2024",
publisher="Springer Nature Switzerland",
address="Cham",
pages="712--721",
isbn="978-3-031-72104-5"
}
```