SynthText for English + Japanese

Code for generating synthetic text images as described in "Synthetic Data for Text Localisation in Natural Images", Ankush Gupta, Andrea Vedaldi, Andrew Zisserman, CVPR 2016, with added support for Japanese characters.

TODO

Add support for Chinese

Output samples

(Images: synthetic Japanese text samples 1-4.)

The library is written in Python. The main dependencies are:

pygame, opencv (version 3.3), PIL (Image), numpy, matplotlib, h5py, scipy
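
To confirm the environment before going further, a quick import check helps. This is a minimal sketch; it assumes the standard import names for the dependencies listed above:

# Minimal environment check (a sketch; assumes the standard import
# names for the dependencies listed above).
import cv2, pygame, h5py, numpy, scipy, matplotlib
from PIL import Image   # verifies PIL/Pillow is importable

print("opencv    :", cv2.__version__)   # expected 3.3.x
print("pygame    :", pygame.version.ver)
print("numpy     :", numpy.__version__)
print("h5py      :", h5py.__version__)
print("scipy     :", scipy.__version__)
print("matplotlib:", matplotlib.__version__)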

The main differences from the original SynthText:

  1. Uses OpenCV 3.3 instead of OpenCV 2.4
  2. Uses nltk to parse the text source by language (eng, jpn); see the sketch below
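
For illustration, a hedged sketch of English tokenization with nltk; the fork's own text utilities may differ:

# Illustration only: English tokenization with nltk (a hedged sketch,
# not the repository's exact code).
import nltk
nltk.download("punkt", quiet=True)   # tokenizer models, first run only
from nltk.tokenize import word_tokenize

print(word_tokenize("Synthetic text is rendered into natural scenes."))
# ['Synthetic', 'text', 'is', 'rendered', 'into', 'natural', 'scenes', '.']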

How to use this source

Preparation

Place your text data and fonts as follows:

data
├── dset.h5
├── fonts
│   ├── fontlist.txt                        : your font list
│   ├── ubuntu
│   ├── ubuntucondensed
│   ├── ubuntujapanese                      : your japanese font
│   └── ubuntumono
├── models
│   ├── char_freq.cp
│   ├── colors_new.cp
│   └── font_px2pt.cp
└── newsgroup
    └── newsgroup.txt                       : your text source
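
Before moving on, it is worth verifying that the input layout above is in place. A small sketch (paths mirror the tree; char_freq.cp and font_px2pt.cp are generated in a later step, and fontlist.txt is assumed to list one font file per line):

# Sanity-check the input data layout (paths mirror the tree above).
from pathlib import Path

required = [
    "data/dset.h5",
    "data/fonts/fontlist.txt",        # one font path per line (assumed)
    "data/models/colors_new.cp",
    "data/newsgroup/newsgroup.txt",
]
missing = [p for p in required if not Path(p).exists()]
print("missing:", missing or "none")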

Install dependencies

# For Japanese (MeCab morphological analyzer and dictionaries)
sudo apt-get install libmecab2 libmecab-dev mecab mecab-ipadic mecab-ipadic-utf8 mecab-utils
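
After installing, a quick check that Japanese tokenization works. A hedged sketch, assuming the mecab-python3 bindings on top of the system packages above:

# Hedged check that MeCab can tokenize Japanese (assumes the
# mecab-python3 bindings; the apt packages above provide the C
# library and the ipadic dictionaries).
import MeCab

tagger = MeCab.Tagger("-Owakati")            # wakati-gaki: space-separated tokens
print(tagger.parse("これはテストです").strip())  # これ は テスト です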

Generate font model and char model

python invert_font_size.py
python update_freq.py

mv char_freq.cp data/models/
mv font_px2pt.cp data/models/
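
The generated .cp files are pickled Python objects by SynthText convention (an assumption worth verifying). A hedged sketch for inspecting them:

# Peek at a generated model file (.cp files are assumed to be
# Python pickles, per SynthText convention).
import pickle

with open("data/models/char_freq.cp", "rb") as f:
    char_freq = pickle.load(f)   # add encoding="latin1" for Python 2 pickles
print(type(char_freq), len(char_freq))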

Then proceed to the next section.

SynthText

(Image: synthetic scene-text image samples.)

Generating samples

python gen.py --viz --lang ENG/JPN

This will download a data file (~56 MB) into the data directory, containing the sample dset.h5, fonts, models, and text source shown in the directory layout above.

This script will generate random scene-text image samples and store them in an HDF5 file at results/SynthText.h5. If the --viz option is specified, the generated output is visualized while the script runs; omit --viz to turn off visualization. To visualize the results stored in results/SynthText.h5 later, run:

python visualize_results.py
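
The output follows SynthText's HDF5 convention: a data group of image datasets whose attributes carry the annotations. A hedged reader sketch (dataset and attribute names assumed from the upstream format):

# Hedged reader for generated samples; names (data, txt, wordBB)
# are assumed from the upstream SynthText output format.
import h5py

with h5py.File("results/SynthText.h5", "r") as db:
    names = sorted(db["data"].keys())
    print(len(names), "samples")
    sample = db["data"][names[0]]
    print("image:", sample[...].shape)              # HxWx3 uint8
    print("text :", sample.attrs["txt"])            # rendered strings
    print("words:", sample.attrs["wordBB"].shape)   # 2x4xN word boxes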

Pre-generated Dataset

A dataset of approximately 800,000 synthetic scene-text images generated with this code can be found here.

Adding New Images

Segmentation masks and depth maps are required to use new images as backgrounds. Sample scripts for obtaining these are available here.

For an explanation of the fields in dset.h5 (e.g. seg, area, label), please check this comment.
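
As a hedged sketch of the layout gen.py expects, dset.h5 pairs each background image with a depth map and a segmentation whose region areas and labels are stored as attributes (group and attribute names assumed from the upstream format; verify against the comment above):

# Hedged view of the dset.h5 layout for new background images
# (group/attribute names assumed from the upstream SynthText format).
import h5py

with h5py.File("data/dset.h5", "r") as db:
    name  = sorted(db["image"].keys())[0]
    img   = db["image"][name][...]   # background RGB image
    depth = db["depth"][name][...]   # predicted depth map
    seg   = db["seg"][name][...]     # region segmentation (region ids)
    print(img.shape, depth.shape, seg.shape)
    print("areas :", db["seg"][name].attrs["area"])
    print("labels:", db["seg"][name].attrs["label"])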

Pre-processed Background Images

The 8,000 background images used in the paper, along with their segmentation and depth masks, have been uploaded to http://zeus.robots.ox.ac.uk/textspot/static/db/<filename>, where <filename> selects one of the available archives.
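
A hedged download sketch; FILENAME below is a hypothetical placeholder, to be replaced with an actual archive name:

# Hedged download sketch; FILENAME is a hypothetical placeholder.
import urllib.request

FILENAME = "example.h5"   # placeholder; substitute a real archive name
url = "http://zeus.robots.ox.ac.uk/textspot/static/db/" + FILENAME
urllib.request.urlretrieve(url, FILENAME)
print("saved", FILENAME)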

Note: I do not own the copyright to these images.

Generating Samples with Text in Non-Latin (Non-English) Scripts

@JarveeLee has modified the pipeline for generating samples with Chinese text here. @gachiemchiep has modified the pipeline for generating samples with Japanese text here.

Further Information

Please refer to the paper for more information, or contact me (email address in the paper).