DualVector

OpenAccess [pdf] [supp] | arXiv [pdf]

Code release for DualVector: Unsupervised Vector Font Synthesis with Dual-Part Representation (CVPR 2023)

Update

Requirements

In general, other versions of the packages listed below may also work, but they have not been tested.

Important Python-related packages

We recommend installing these dependencies with pip. See requirements.txt for the versions of all packages, though some of them may not be used in the code.
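
For example, installing from the provided file (assuming the repository root is the working directory):

pip install -r requirements.txt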

JavaScript-related packages

If npm is already installed, just run:

cd js
npm install

Dataset

Please download the dataset from here and extract it to data/dvf_png.

The dataset is the same as the one used in DeepVecFont, a subset of the SVG-VAE dataset. However, because some fonts' lowercase letters are not fully contained within the original images, we re-render them at a resolution of 1536×1024. The training data required for Multi-Implicits is also included.
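
For example, assuming the download is an archive named dvf_png.zip containing a dvf_png folder (the actual file name and format may differ):

mkdir -p data
unzip dvf_png.zip -d data/   # should leave the images under data/dvf_png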

Pretrained Model

We provide pretrained models for font reconstruction and generation. You can download them and replace the checkpoint paths in the steps below with the paths to the downloaded files.
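
For example, if the reconstruction checkpoint is downloaded to pretrained/recon.pth (a hypothetical path), you can pass it directly to the evaluation script described below instead of a checkpoint you trained yourself:

python eval/eval_reconstruction.py --resume pretrained/recon.pth --outdir eval/save/test_recon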

Run

Train

Font Reconstruction

python train.py --config configs/reconstruct.yaml --name recon --gpu 0

This will save the training records under save/recon, including the source code, the TensorBoard log, image visualizations from the validation epochs, and the checkpoints.
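
To monitor training, you can point TensorBoard at the save directory (assuming TensorBoard is installed; it will pick up logs in subdirectories of save/recon):

tensorboard --logdir save/recon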

Font Generation (Font Style Transfer)

Replace the following path in configs/generation_latent_guide.yaml with your reconstruction checkpoint:

   submodule_config:
    - 
      ckpt: ./save/recon/ckpt/epoch-last.pth           # replace this line with your checkpoint
      ckpt_key: [encoder, img_decoder, decoder, encoder]
      module_name: [latent_encoder, img_decoder, decoder, img_encoder]
      freeze: [true, true, true, true]

Then train a model with the latent guidance:

python train.py --config configs/generation_latent_guide.yaml --name gen_latent --gpu 0

Finally, resume from this checkpoint and train the whole model. If the script reports that the Model/Optimizer configs are different from those in the checkpoint, select the config.

python train.py --config configs/generation.yaml --name gen --gpu 0 --resume save/gen_latent/ckpt/epoch-last.pth

Test

Font reconstruction

First, synthesize the reconstructed images and the initial SVGs:

python eval/eval_reconstruction.py --resume save/recon/ckpt/epoch-last.pth --outdir eval/save/test_recon # initial SVG

Then run the contour refinement:

python eval/post_refinement.py --outdir eval/save/test_recon/refined --input eval/save/test_recon/rec_init/ --fmin 0 --fmax 200 # refinement

Font generation

As with font reconstruction, run the following two steps:

python eval/eval_generation_multiref.py --outdir eval/save/test_gen/ --resume save/gen/ckpt/epoch-last.pth # initial SVG
python eval/post_refinement.py --outdir eval/save/test_gen/refined --input eval/save/test_gen/rec_init/ --fmin 0 --fmax 200 # refinement

Sampling new fonts

Specify the number of fonts you want:

python eval/sample_dvf.py --outdir eval/save/test_sample/svg --n-sample 10 # initial SVG

Then refine them:

python eval/post_refinement.py --outdir eval/save/test_sample/refined --input eval/save/test_sample/svg/ --fmin 0 --fmax 20 # refinement

Evaluate the metrics

Take font reconstruction as an example. To render the glyphs at a resolution of 256×256 and evaluate the L1 error, SSIM, and s-IoU, run:

cd eval
python run_metrics.py --name ours --pred_lowercase --pred save/test_recon/refined --ff {0:02d}_p4.svg --fontmin 0 --fontmax 200 --glyph 52 --res 256

The results will be saved under eval/eval/save/.

Citation

If you find DualVector helpful, please consider citing:

@InProceedings{dualvector,
    author    = {Liu, Ying-Tian and Zhang, Zhifei and Guo, Yuan-Chen and Fisher, Matthew and Wang, Zhaowen and Zhang, Song-Hai},
    title     = {DualVector: Unsupervised Vector Font Synthesis With Dual-Part Representation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {14193-14202}
}