SynthText with German Language Support

Modified from here to support German characters and text. The project was carried out during an internship at the Fraunhofer Institute for Production Technology and Automation (IPA) in Stuttgart.

Scene-text image samples generated with the code (image: German Scene-Text Samples).

Environment

OS: Windows 10

python==3.8.5

opencv==4.5.1

pygame==1.9.6
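
The pinned packages (together with the remaining dependencies listed further down in the original README) can be installed with pip. This is only a sketch: the PyPI package names, in particular opencv-python for opencv and pillow for PIL, are assumptions, and the matching opencv wheel will be a 4.5.1.x release.

pip install pygame==1.9.6 opencv-python pillow numpy matplotlib h5py scipy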

Adjustments to support German text

Usage Steps

  1. Run the script add_more_data.py to download the pre-processed background images with their depth and segmentation masks and to merge them into one h5 file.

    If downloading with add_more_data.py doesn't work, you can use wget in a Git Bash terminal to download the files manually (for more information on using wget on Windows, see here); example commands are given after this list.

  2. Run gen_more.py to generate new synthetic scene-text images with the pre-processed data.

    Or run gen_more.py --viz to get a visualization after each generated sample.

  3. Visualize your results with visualize_results.py.
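
If the download in step 1 fails, the files can also be fetched by hand. Assuming add_more_data.py pulls the pre-processed background data described in the "Pre-processed Background Images" section below, the wget commands would be:

wget http://www.robots.ox.ac.uk/~vgg/data/scenetext/preproc/imnames.cp
wget http://www.robots.ox.ac.uk/~vgg/data/scenetext/preproc/bg_img.tar.gz
wget http://www.robots.ox.ac.uk/~vgg/data/scenetext/preproc/depth.h5
wget http://www.robots.ox.ac.uk/~vgg/data/scenetext/preproc/seg.h5

(depth.h5 is also offered as three split parts; see the note in that section.)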

Data Structure

data
├── bg_img                                   : pre-processed images
├── fonts
│   ├── ubuntu.ttf
│   ├── ...                                  : added fonts
│   └── fontlist.txt                         : updated fontlist
├── german_textSource
│   ├── 3M_sentences_LeipzigCorpora.txt      : added text source
│   └── words_LeipzigCorpora.csv
├── models
│   ├── char_freq.cp                         : updated character model
│   ├── colors_new.cp
│   └── font_px2pt.cp                        : updated font model
├── newsgroup
│   └── newsgroup.txt
├── depth.h5
├── dset_8000.h5                             : pre-processed data [img, depth, seg]
├── dset.h5
└── seg.h5
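
The merged pre-processed file can be inspected with h5py. The snippet below is a minimal sketch and assumes dset_8000.h5 follows the layout used by the original SynthText code: top-level groups image, depth and seg, with area and label attributes on each segmentation map.

import h5py

with h5py.File('data/dset_8000.h5', 'r') as db:
    names = sorted(db['image'].keys())
    print('number of background images:', len(names))

    imname = names[0]
    img    = db['image'][imname][:]            # RGB background image
    depth  = db['depth'][imname][:]            # depth map
    seg    = db['seg'][imname][:]              # segmentation map (region ids)
    areas  = db['seg'][imname].attrs['area']   # pixel area of each region
    labels = db['seg'][imname].attrs['label']  # region ids matching 'areas'
    print(imname, img.shape, depth.shape, seg.shape, len(labels))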

Parameter Settings

The rest of this README is taken from the original repository.

Code for generating synthetic text images as described in "Synthetic Data for Text Localisation in Natural Images", Ankush Gupta, Andrea Vedaldi, Andrew Zisserman, CVPR 2016.

Synthetic scene-text image samples (image: Synthetic Scene-Text Samples).

The code in the master branch is for Python 2. Python 3 is supported in the python3 branch.

The main dependencies are:

pygame, opencv (cv2), PIL (Image), numpy, matplotlib, h5py, scipy

Generating samples

python gen.py --viz

This will download a data file (~56M) to the data directory. This data file includes a sample dset.h5, fonts, the newsgroup text source and the colour/font models (cf. the data structure above).

This script will generate random scene-text image samples and store them in an h5 file in results/SynthText.h5. If the --viz option is specified, the generated output will be visualized as the script runs; omit the --viz option to turn off the visualization. If you want to visualize the results stored in results/SynthText.h5 later, run:

python visualize_results.py
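
The stored samples can also be read directly with h5py. The snippet below is a minimal sketch and assumes the standard SynthText output layout: a data group whose datasets are the rendered images, each carrying wordBB, charBB and txt attributes.

import h5py

with h5py.File('results/SynthText.h5', 'r') as db:
    for name in db['data']:
        sample  = db['data'][name]
        img     = sample[:]                  # rendered image (H x W x 3)
        word_bb = sample.attrs['wordBB']     # 2 x 4 x #words corner coordinates
        char_bb = sample.attrs['charBB']     # 2 x 4 x #chars corner coordinates
        text    = sample.attrs['txt']        # the rendered text strings
        print(name, img.shape, word_bb.shape, len(text))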

Pre-generated Dataset

A dataset with approximately 800,000 synthetic scene-text images generated with this code can be found here.

Adding New Images

Segmentation and depth-maps are required to use new images as background. Sample scripts for obtaining these are available here.

For an explanation of the fields in dset.h5 (e.g. seg, area, label), please check this comment.
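
As a rough illustration of what adding a new background image involves, the sketch below writes one image together with its depth and segmentation maps into an h5 file, using the same assumed image/depth/seg layout as above; the file name and the placeholder arrays are hypothetical.

import h5py
import numpy as np

# Placeholder arrays; in practice these come from your image and from the
# depth/segmentation scripts linked above.
img   = np.zeros((480, 640, 3), dtype=np.uint8)   # background image
depth = np.zeros((480, 640), dtype=np.float32)    # predicted depth map
seg   = np.zeros((480, 640), dtype=np.uint32)     # per-pixel region ids

with h5py.File('data/my_dset.h5', 'a') as db:
    for grp in ('image', 'depth', 'seg'):
        db.require_group(grp)
    name = 'my_image.jpg'
    db['image'].create_dataset(name, data=img)
    db['depth'].create_dataset(name, data=depth)
    ds = db['seg'].create_dataset(name, data=seg)
    labels = np.unique(seg)
    ds.attrs['label'] = labels                                        # region ids
    ds.attrs['area']  = np.array([(seg == l).sum() for l in labels])  # pixels per region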

Pre-processed Background Images

The 8,000 background images used in the paper, along with their segmentation and depth masks, have been uploaded here: http://www.robots.ox.ac.uk/~vgg/data/scenetext/preproc/<filename>, where <filename> can be one of the following:

filename        size   description                                             md5 hash
imnames.cp      180K   names of images which do not contain background text
bg_img.tar.gz   8.9G   images (filter these using imnames.cp)                  3eac26af5f731792c9d95838a23b5047
depth.h5        15G    depth maps                                              af97f6e6c9651af4efb7b1ff12a5dc1b
seg.h5          6.9G   segmentation maps                                       1605f6e629b2524a3902a5ea729e86b2

Note: due to its large size, depth.h5 is also available for download as three split files of 5G each, named depth.h5-00, depth.h5-01 and depth.h5-02. Download them using the path above and join them with cat depth.h5-0* > depth.h5.

use_preproc_bg.py provides sample code for reading this data.

Note: I do not own the copyright to these images.

Generating Samples with Text in non-Latin (English) Scripts

Further Information

Please refer to the paper for more information, or contact me (email address in the paper).