Surya

Surya is a document OCR toolkit that does:

- OCR (text recognition)
- Line-level text detection
- Layout analysis and reading order detection
- Table recognition

It works on a range of documents (see usage and benchmarks for more details).

| Detection | OCR |
|:---:|:---:|
| <img src="static/images/excerpt.png" width="500px"/> | <img src="static/images/excerpt_text.png" width="500px"/> |

| Layout | Reading Order |
|:---:|:---:|
| <img src="static/images/excerpt_layout.png" width="500px"/> | <img src="static/images/excerpt_reading.jpg" width="500px"/> |

| Table Recognition | |
|:---:|:---:|
| <img src="static/images/scanned_tablerec.png" width="500px"/> | |

Surya is named for the Hindu sun god, who has universal vision.

Community

Discord is where we discuss future development.

Examples

| Name | Detection | OCR | Layout | Order | Table Rec |
|------|-----------|-----|--------|-------|-----------|
| Japanese | Image | Image | Image | Image | Image |
| Chinese | Image | Image | Image | Image | |
| Hindi | Image | Image | Image | Image | |
| Arabic | Image | Image | Image | Image | |
| Chinese + Hindi | Image | Image | Image | Image | |
| Presentation | Image | Image | Image | Image | Image |
| Scientific Paper | Image | Image | Image | Image | Image |
| Scanned Document | Image | Image | Image | Image | Image |
| New York Times | Image | Image | Image | Image | |
| Scanned Form | Image | Image | Image | Image | Image |
| Textbook | Image | Image | Image | Image | |

Hosted API

There is a hosted API for all surya models available here.

Commercial usage

I want surya to be as widely accessible as possible, while still funding my development/training costs. Research and personal usage are always okay, but there are some restrictions on commercial usage.

The weights for the models are licensed under cc-by-nc-sa-4.0, but I will waive that for any organization under $5M USD in gross revenue in the most recent 12-month period AND under $5M in lifetime VC/angel funding raised. You must also not be competitive with the Datalab API. If you want to remove the GPL license requirements (dual-license) and/or use the weights commercially over the revenue limit, check out the options here.

Installation

You'll need Python 3.10+ and PyTorch. You may need to install the CPU version of torch first if you're not using a Mac or a GPU machine. See here for more details.
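For example, on a CPU-only machine you can install the CPU build of torch first from the official PyTorch wheel index (this index URL comes from PyTorch's docs, not from surya):

```shell
pip install torch --index-url https://download.pytorch.org/whl/cpu
```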

Install with:

```shell
pip install surya-ocr
```

Model weights will automatically download the first time you run surya.

Usage

Interactive App

I've included a streamlit app that lets you interactively try Surya on images or PDF files. Run it with:

```shell
pip install streamlit
surya_gui
```

OCR (text recognition)

This command will write out a json file with the detected text and bboxes:

```shell
surya_ocr DATA_PATH
```

The results.json file will contain a json dictionary where the keys are the input filenames without extensions. Each value will be a list of dictionaries, one per page of the input document. Each page dictionary contains the detected text lines and their bounding boxes.
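As a rough sketch of consuming that output (the key names used here, like text_lines and bbox, are assumptions about the schema rather than a documented contract):

```python
import json

# Load the OCR results; "text_lines", "text", and "bbox" are assumed key names
with open("results.json") as f:
    results = json.load(f)

for doc_name, pages in results.items():
    for page in pages:
        for line in page.get("text_lines", []):
            print(doc_name, line["bbox"], line["text"])
```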

Performance tips

Setting the RECOGNITION_BATCH_SIZE env var properly will make a big difference when using a GPU. Each batch item will use 40MB of VRAM, so very high batch sizes are possible. The default batch size is 512, which will use about 20GB of VRAM. Setting it can also help on CPU, depending on your core count - the default CPU batch size is 32.
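For example, to run at half the default batch size (roughly 10GB of VRAM at 40MB per item):

```shell
RECOGNITION_BATCH_SIZE=256 surya_ocr DATA_PATH
```

The same pattern applies to the other batch size env vars below.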

From python

```python
from PIL import Image
from surya.ocr import run_ocr
from surya.model.detection.model import load_model as load_det_model, load_processor as load_det_processor
from surya.model.recognition.model import load_model as load_rec_model
from surya.model.recognition.processor import load_processor as load_rec_processor

image = Image.open(IMAGE_PATH)
langs = ["en"]  # Replace with your languages - optional but recommended
det_processor, det_model = load_det_processor(), load_det_model()
rec_model, rec_processor = load_rec_model(), load_rec_processor()

predictions = run_ocr([image], [langs], det_model, det_processor, rec_model, rec_processor)
```

Compilation

The models listed in the benchmark table below support compilation. To enable it for a specific model, set the corresponding environment variable; alternatively, set COMPILE_ALL=true to compile all models.
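For example, to compile all models before an OCR run:

```shell
COMPILE_ALL=true surya_ocr DATA_PATH
```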

Here are the speedups on an A10 GPU:

| Model | Time per page (s) | Compiled time per page (s) | Speedup (%) |
|-------|-------------------|----------------------------|-------------|
| Recognition | 0.657556 | 0.56265 | 14.43314334 |
| Detection | 0.108808 | 0.10521 | 3.306742151 |
| Layout | 0.27319 | 0.27063 | 0.93707676 |
| Table recognition | 0.0219 | 0.01938 | 11.50684932 |

Text line detection

This command will write out a json file with the detected bboxes.

```shell
surya_detect DATA_PATH
```

The results.json file will contain a json dictionary where the keys are the input filenames without extensions. Each value will be a list of dictionaries, one per page of the input document. Each page dictionary contains the detected bounding boxes for that page.

Performance tips

Setting the DETECTOR_BATCH_SIZE env var properly will make a big difference when using a GPU. Each batch item will use 440MB of VRAM, so very high batch sizes are possible. The default batch size is 36, which will use about 16GB of VRAM. Setting it can also help on CPU, depending on your core count - the default CPU batch size is 6.

From python

```python
from PIL import Image
from surya.detection import batch_text_detection
from surya.model.detection.model import load_model, load_processor

image = Image.open(IMAGE_PATH)
model, processor = load_model(), load_processor()

# predictions is a list of dicts, one per image
predictions = batch_text_detection([image], model, processor)
```
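A quick way to inspect the results (the key names here are assumptions about the prediction dicts, not a documented schema):

```python
# Print the detected line bboxes for the first image;
# "bboxes" and "bbox" are assumed key names
for line in predictions[0]["bboxes"]:
    print(line["bbox"])
```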

Layout and reading order

This command will write out a json file with the detected layout and reading order.

```shell
surya_layout DATA_PATH
```

The results.json file will contain a json dictionary where the keys are the input filenames without extensions. Each value will be a list of dictionaries, one per page of the input document. Each page dictionary contains the detected layout regions and their reading order.

Performance tips

Setting the LAYOUT_BATCH_SIZE env var properly will make a big difference when using a GPU. Each batch item will use 220MB of VRAM, so very high batch sizes are possible. The default batch size is 32, which will use about 7GB of VRAM. Setting it can also help on CPU, depending on your core count - the default CPU batch size is 4.

From python

```python
from PIL import Image
from surya.detection import batch_text_detection
from surya.layout import batch_layout_detection
from surya.model.detection.model import load_model as load_det_model, load_processor as load_det_processor
from surya.model.layout.model import load_model as load_layout_model
from surya.model.layout.processor import load_processor as load_layout_processor

image = Image.open(IMAGE_PATH)
model = load_layout_model()
processor = load_layout_processor()
det_model = load_det_model()
det_processor = load_det_processor()

line_predictions = batch_text_detection([image], det_model, det_processor)
# layout_predictions is a list of dicts, one per image
layout_predictions = batch_layout_detection([image], model, processor, line_predictions)
```
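To inspect the output (again, the key names are assumptions about the prediction dicts):

```python
# Print each layout region's label and bbox for the first image;
# "bboxes", "label", and "bbox" are assumed key names
for region in layout_predictions[0]["bboxes"]:
    print(region["label"], region["bbox"])
```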

Table Recognition

This command will write out a json file with the detected table cells and row/column ids, along with row/column bounding boxes. If you want to get a formatted markdown table, check out the tabled repo.

```shell
surya_table DATA_PATH
```

The results.json file will contain a json dictionary where the keys are the input filenames without extensions. Each value will be a list of dictionaries, one per page of the input document. Each page dictionary contains the detected cells with row/column ids, along with the row/column bounding boxes.

Performance tips

Setting the TABLE_REC_BATCH_SIZE env var properly will make a big difference when using a GPU. Each batch item will use 150MB of VRAM, so very high batch sizes are possible. The default batch size is 64, which will use about 10GB of VRAM. Setting it can also help on CPU, depending on your core count - the default CPU batch size is 8.

From python

See table_recognition.py for a code sample. Table recognition depends on extracting cells, so it is a little more involved to set up than the other model types.

Limitations

Troubleshooting

If OCR isn't working properly:

Manual install

If you want to develop surya, you can install it manually:
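A typical flow looks like this (the repo URL is the public GitHub repo; an editable pip install is shown, though you may prefer poetry or another workflow):

```shell
git clone https://github.com/VikParuchuri/surya.git
cd surya
pip install -e .  # editable install for development
```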

Benchmarks

OCR

[Benchmark chart: surya vs. tesseract]

| Model | Time per page (s) | Avg similarity (⬆) |
|-----------|-------------------|--------------------|
| surya | 0.62 | 0.97 |
| tesseract | 0.45 | 0.88 |

Full language results

Tesseract is CPU-based, and surya is CPU or GPU. I tried to cost-match the resources used, so I used a 1xA6000 (48GB VRAM) for surya, and 28 CPU cores for Tesseract (same price on Lambda Labs/DigitalOcean).

Google Cloud Vision

I benchmarked OCR against Google Cloud Vision, since it has similar language coverage to Surya.

[Benchmark chart: surya vs. Google Cloud Vision]

Full language results

Methodology

I measured normalized sentence similarity (0-1, higher is better) on a set of real-world and synthetic PDFs. I sampled PDFs from Common Crawl, then filtered out the ones with bad OCR. I couldn't find PDFs for some languages, so I also generated simple synthetic PDFs for those.
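For intuition, a normalized similarity of that shape can be sketched with difflib; the benchmark's exact metric may differ:

```python
from difflib import SequenceMatcher

def sentence_similarity(pred: str, ref: str) -> float:
    """Normalized similarity in [0, 1]; a stand-in for the benchmark's metric."""
    return SequenceMatcher(None, pred, ref).ratio()

print(sentence_similarity("The quick brown fox", "The quick brown fax"))  # ~0.95
```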

I used the reference line bboxes from the PDFs with both tesseract and surya, so that only the OCR quality was evaluated.

For Google Cloud, I aligned the output from Google Cloud with the ground truth. I had to skip RTL languages since they didn't align well.

Text line detection

[Benchmark chart: text line detection]

| Model | Time (s) | Time per page (s) | Precision | Recall |
|-----------|----------|-------------------|-----------|--------|
| surya | 50.2099 | 0.196133 | 0.821061 | 0.956556 |
| tesseract | 74.4546 | 0.290838 | 0.631498 | 0.997694 |

Tesseract is CPU-based, and surya is CPU or GPU. I ran the benchmarks on a system with an A10 GPU and a 32-core CPU. This was the resource usage:

Methodology

Surya predicts line-level bboxes, while tesseract and others predict word-level or character-level. It's hard to find 100% correct datasets with line-level annotations. Merging bboxes can be noisy, so I chose not to use IoU as the metric for evaluation.

I instead used coverage, which works as follows: first calculate the coverage of each ground-truth bbox by the predicted bboxes, then add a small penalty for double coverage, since we want the detections to be non-overlapping. Anything with a coverage of 0.5 or higher is considered a match.

Then we calculate precision and recall for the whole dataset.
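A simplified sketch of the matching logic (the penalty weight here is an assumption; the benchmark code has the exact version):

```python
def area(box):
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def intersect(a, b):
    # Intersection area of two (x1, y1, x2, y2) boxes
    return area((max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3])))

def is_match(gt_box, pred_boxes, penalty=0.1, threshold=0.5):
    """Coverage of a ground-truth box by predictions, minus a small
    double-coverage penalty. penalty=0.1 is an assumed weight."""
    gt_area = area(gt_box)
    if gt_area == 0:
        return False
    covered = sum(intersect(gt_box, p) for p in pred_boxes) / gt_area
    # Penalize predicted boxes that overlap each other (double coverage)
    double = sum(
        intersect(a, b)
        for i, a in enumerate(pred_boxes)
        for b in pred_boxes[i + 1:]
    ) / gt_area
    return covered - penalty * double >= threshold
```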

Layout analysis

| Layout Type | Precision | Recall |
|-------------|-----------|--------|
| Image | 0.91265 | 0.93976 |
| List | 0.80849 | 0.86792 |
| Table | 0.84957 | 0.96104 |
| Text | 0.93019 | 0.94571 |
| Title | 0.92102 | 0.95404 |

Time per image: 0.13 seconds on GPU (A10).

Methodology

I benchmarked the layout analysis on Publaynet, which was not in the training data. I had to align Publaynet labels with the surya layout labels. I was then able to find coverage for each layout type, as shown in the table above.

Reading Order

88% mean accuracy, and 0.4 seconds per image on an A10 GPU. See the methodology notes below - this benchmark is not a perfect measure of accuracy, and is more useful as a sanity check.

Methodology

I benchmarked the reading order on the layout dataset from here, which was not in the training data. Unfortunately, this dataset is fairly noisy, and not all the labels are correct. It was very hard to find a dataset annotated with reading order and also layout information. I wanted to avoid using a cloud service for the ground truth.

The accuracy is computed by checking whether each pair of layout boxes is in the correct relative order, then taking the percentage of pairs that are ordered correctly.
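A sketch of that computation (the function and argument names are hypothetical):

```python
from itertools import combinations

def order_accuracy(pred_rank, gt_rank):
    """Fraction of box pairs whose relative order matches the ground truth.
    pred_rank / gt_rank map a box id to its position in each reading order."""
    pairs = list(combinations(gt_rank, 2))
    if not pairs:
        return 1.0
    correct = sum(
        (pred_rank[a] < pred_rank[b]) == (gt_rank[a] < gt_rank[b])
        for a, b in pairs
    )
    return correct / len(pairs)

print(order_accuracy({"a": 0, "b": 1, "c": 2}, {"a": 0, "b": 2, "c": 1}))  # ~0.67
```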

Table Recognition

| Model | Row Intersection | Col Intersection | Time Per Image (s) |
|-------|------------------|------------------|--------------------|
| Surya | 0.97 | 0.93 | 0.03 |
| Table transformer | 0.72 | 0.84 | 0.02 |

Higher is better for intersection, which is the percentage of the actual row/column overlapped by the predictions.

Methodology

The benchmark uses a subset of Fintabnet from IBM. It has labeled rows and columns. After table recognition is run, the predicted rows and columns are compared to the ground truth. There is an additional penalty for predicting too many or too few rows/columns.
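In spirit, the per-row score looks something like the sketch below (1-D vertical spans for simplicity; the count penalty weight is an assumption):

```python
def overlap_fraction(gt_span, pred_span):
    """Fraction of a ground-truth row's (top, bottom) span covered by a prediction."""
    top = max(gt_span[0], pred_span[0])
    bottom = min(gt_span[1], pred_span[1])
    return max(0.0, bottom - top) / (gt_span[1] - gt_span[0])

def row_score(gt_rows, pred_rows, count_penalty=0.05):
    """Mean best-match coverage per ground-truth row, penalized when the
    predicted row count differs from the ground truth (assumed weighting)."""
    per_row = [
        max((overlap_fraction(gt, p) for p in pred_rows), default=0.0)
        for gt in gt_rows
    ]
    mean = sum(per_row) / len(per_row) if per_row else 0.0
    return max(0.0, mean - count_penalty * abs(len(pred_rows) - len(gt_rows)))
```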

Running your own benchmarks

You can benchmark the performance of surya on your machine.

Text line detection

This will evaluate tesseract and surya for text line detection across a randomly sampled set of images from DocLayNet.

```shell
python benchmark/detection.py --max 256
```

Text recognition

This will evaluate surya and optionally tesseract on multilingual PDFs from Common Crawl (with synthetic data for missing languages).

```shell
python benchmark/recognition.py --tesseract
```

Layout analysis

This will evaluate surya on the Publaynet dataset.

```shell
python benchmark/layout.py
```

Reading Order

```shell
python benchmark/ordering.py
```

Table Recognition

```shell
python benchmark/table_recognition.py --max 1024 --tatr
```

Training

Text detection was trained on 4x A6000s for 3 days. It used a diverse set of images as training data. It was trained from scratch using a modified efficientvit architecture for semantic segmentation.

Text recognition was trained on 4x A6000s for 2 weeks. It was trained using a modified donut model (GQA, MoE layer, UTF-16 decoding, layer config changes).

Thanks

This work would not have been possible without amazing open source AI work.

Thank you to everyone who makes open source AI possible.