LayoutDETR

LayoutDETR: Detection Transformer Is a Good Multimodal Layout Designer

Ning Yu, Chia-Chih Chen, Zeyuan Chen, Rui Meng<br>Gang Wu, Paul Josel, Juan Carlos Niebles, Caiming Xiong, Ran Xu<br>

Salesforce Research

arXiv 2023

paper | project page

<img src='assets/teaser.png' width=200> <img src='assets/framework_architecture.png' width=400>

<img src='assets/samples_ads_cgl.jpg' width=700>

Abstract

Graphic layout designs play an essential role in visual communication. Yet handcrafting layout designs is skill-demanding, time-consuming, and non-scalable to batch production. Generative models have emerged to make design automation scalable, but it remains non-trivial to produce designs that comply with designers' multimodal desires, i.e., designs constrained by background images and driven by foreground content. We propose LayoutDETR, which inherits the high quality and realism of generative modeling while reformulating content-aware requirements as a detection problem: we learn to detect, in a background image, the reasonable locations, scales, and spatial relations for multimodal foreground elements in a layout. Our solution sets a new state-of-the-art performance for layout generation on public benchmarks and on our newly curated ad banner dataset. We integrate our solution into a graphical system that facilitates user studies, and show that users prefer our designs over baselines by significant margins.

Prerequisites

Data preprocessing

Our ad banner dataset (14.7GB, 7,672 samples). Some of the source images are filtered from the Pitt Image Ads Dataset, and the others are crawled from the Google image search engine using a variety of retailer brands as keywords. Download our dataset and unzip it to data/, which contains three subdirectories:

To preprocess the dataset into a format that is efficient for training, run

```bash
python dataset_tool.py \
--source=data/ads_banner_dataset/png_json_gt \
--dest=data/ads_banner_dataset/zip_3x_inpainted \
--inpaint-aug
```

where
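The preprocessing step packs images and their layout annotations into a zip archive. As a rough sketch of what a data loader sees, the snippet below builds and inspects a toy archive in memory, assuming a StyleGAN-style format with a `dataset.json` index (the file names and JSON fields here are illustrative assumptions, not guaranteed by `dataset_tool.py`):

```python
import io
import json
import zipfile

# Build a toy archive mimicking the assumed output format:
# a JSON index plus the image files (names are illustrative only).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("dataset.json", json.dumps({
        "labels": [["00000/img00000000.png", [0]]]
    }))
    zf.writestr("00000/img00000000.png", b"\x89PNG...")  # placeholder bytes

# Inspect the archive the way a loader might.
with zipfile.ZipFile(buf) as zf:
    index = json.loads(zf.read("dataset.json"))
    images = [n for n in zf.namelist() if n.endswith(".png")]

print(len(images), index["labels"][0][0])  # -> 1 00000/img00000000.png
```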

Training

```bash
python train.py --gpus=8 --batch=16 \
--data=data/ads_banner_dataset/zip_3x_inpainted/train.zip \
--outdir=training-runs \
--metrics=layout_fid50k_train,layout_fid50k_val,fid50k_train,fid50k_val,overlap50k_alignment50k_layoutwise_iou50k_layoutwise_docsim50k_train,overlap50k_alignment50k_layoutwise_iou50k_layoutwise_docsim50k_val
```

where
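Among the metrics above, the layout-wise IoU rewards generated boxes that overlap the reference boxes. Below is a minimal, self-contained sketch of such a metric on toy boxes; it uses a greedy best-match average, which may differ from the exact matching in our evaluation code:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def layout_iou(gen, ref):
    # For each reference box, take its best-matching generated box,
    # then average over the reference layout (greedy sketch).
    return sum(max(iou(g, r) for g in gen) for r in ref) / len(ref)

gen = [(0, 0, 2, 2), (3, 3, 5, 5)]
ref = [(0, 0, 2, 2), (3, 3, 4, 4)]
print(layout_iou(gen, ref))  # -> 0.625 (average of 1.0 and 0.25)
```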

Evaluation

Download the LayoutDETR model trained on our ad banner dataset from here (2.7GB) and place it in checkpoints/.

```bash
python evaluate.py --gpus=8 --batch=16 \
--data=data/ads_banner_dataset/zip_1x_inpainted/val.zip \
--outdir=evaluation \
--ckpt=checkpoints/layoutdetr_ad_banner.pkl \
--metrics=layout_fid50k_val,fid50k_val,overlap50k_alignment50k_layoutwise_iou50k_layoutwise_docsim50k_val,rendering_val
```

where
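Evaluation runs typically emit one JSON record per metric computation. Assuming a StyleGAN-style JSONL log (the field names below are assumptions, not guaranteed by this repo), the best checkpoint can be picked out like this:

```python
import json

# Hypothetical metric log lines in a StyleGAN-style JSONL format.
lines = [
    '{"results": {"layout_fid50k_val": 5.21}, "snapshot_pkl": "network-snapshot-000100.pkl"}',
    '{"results": {"layout_fid50k_val": 4.87}, "snapshot_pkl": "network-snapshot-000200.pkl"}',
]

records = [json.loads(l) for l in lines]
# Lower FID is better, so take the record with the smallest value.
best = min(records, key=lambda r: r["results"]["layout_fid50k_val"])
print(best["snapshot_pkl"])  # -> network-snapshot-000200.pkl
```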

Layout generation in the wild

```bash
python generate.py \
--ckpt=checkpoints/layoutdetr_ad_banner.pkl \
--bg='examples/Lumber 2 [header]EVERYTHING 10% OFF[body text]Friends & Family Savings Event[button]SHOP NOW[disclaimer]CODE FRIEND10.jpg' \
--bg-preprocessing=256 \
--strings='EVERYTHING 10% OFF|Friends & Family Savings Event|SHOP NOW|CODE FRIEND10' \
--string-labels='header|body text|button|disclaimer / footnote' \
--outfile='examples/output/Lumber 2' \
--out-postprocessing=horizontal_center_aligned
```

where
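The --strings and --string-labels arguments are pipe-separated and must align one-to-one. A small sketch of how such arguments can be paired (parse_elements is a hypothetical helper for illustration, not part of generate.py):

```python
def parse_elements(strings: str, labels: str):
    """Pair pipe-separated foreground texts with their element labels."""
    texts = strings.split("|")
    kinds = labels.split("|")
    if len(texts) != len(kinds):
        raise ValueError("--strings and --string-labels must align one-to-one")
    return list(zip(kinds, texts))

elements = parse_elements(
    "EVERYTHING 10% OFF|Friends & Family Savings Event|SHOP NOW|CODE FRIEND10",
    "header|body text|button|disclaimer / footnote",
)
print(elements[0])  # -> ('header', 'EVERYTHING 10% OFF')
```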

Citation

```bibtex
@article{yu2023layoutdetr,
  title={LayoutDETR: Detection Transformer Is a Good Multimodal Layout Designer},
  author={Yu, Ning and Chen, Chia-Chih and Chen, Zeyuan and Meng, Rui and Wu, Gang and Josel, Paul and Niebles, Juan Carlos and Xiong, Caiming and Xu, Ran},
  journal={arXiv preprint arXiv:2212.09877},
  year={2023}
}
```

Acknowledgement