Detextify
What is this?
TL;DR: A Python library to remove unwanted pseudo-text from images generated by your favorite generative AI models (Stable Diffusion, Midjourney, DALL·E).
Before / after comparison images.
So, why should I care?
We all know generative AI is the coolest thing since sliced bread 🍞.
But try using any off-the-shelf generative vision model and you'll quickly see that these systems can get... creative with interpreting your prompts.
Specifically, you'll observe all kinds of weird artifacts in your images, from extra fingers on hands, to arms coming out of chests, to alien text written in random places.
For generative systems to actually be usable in downstream applications, we need to better control these outputs and mitigate unwanted effects.
We believe the next frontier for generative AI is about robustness and trust. In other words, how can we architect these systems to be controllable, relevant, and predictably consistent with our needs?
Detextify is the first phase in our vision of robustifying generative AI.
If we get this right, we will unlock slews of new applications for generative systems that will change the landscape of human-AI collaboration. 🌎
Cute, but what are you actually doing?
Detextify runs text detection on your image, masks the text boxes, and in-paints the masked regions until your image is text-free. Detextify can be run entirely on your local machine (using Tesseract for text detection and Stable Diffusion for in-painting), or can call existing APIs (Azure for text detection and OpenAI or Replicate for in-painting).
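To make that loop concrete, here is a minimal sketch of the detect-mask-in-paint cycle. The `remove_text` function, the `detect_text` and `inpaint` method names, and the `max_retries` cap are illustrative assumptions rather than the library's actual interface; in practice you simply call Detextifier.detextify as shown under Usage below.
# Illustrative sketch only: method names are hypothetical, not the real detextify interface.
def remove_text(detector, inpainter, in_path: str, out_path: str, max_retries: int = 5):
    current = in_path
    for _ in range(max_retries):
        text_boxes = detector.detect_text(current)        # hypothetical: find bounding boxes around text
        if not text_boxes:                                 # nothing detected -> the image is text-free
            break
        inpainter.inpaint(current, text_boxes, out_path)   # hypothetical: mask the boxes and in-paint them
        current = out_path                                 # re-run detection on the in-painted result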
Installation
pip install detextify
Additionally:
- To run text detection locally (as opposed to using the Azure API), you need to install Tesseract.
- To run in-painting locally (as opposed to using the OpenAI or Replicate APIs), you need a GPU with CUDA and cuDNN installed.
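If you are unsure whether your machine meets the GPU requirement, a quick sanity check (assuming PyTorch is already installed, which the diffusers-based local in-painter builds on) is:
import torch
# True means a CUDA-capable GPU is visible to PyTorch, so local Stable Diffusion in-painting can run on it.
print(torch.cuda.is_available())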
Usage
See this Colab notebook for how to use the library, or follow the instructions below.
You can remove unwanted text from your image in just a few lines 💪:
from detextify.text_detector import TesseractTextDetector
from detextify.inpainter import LocalSDInpainter
from detextify.detextifier import Detextifier
text_detector = TesseractTextDetector("/path/to/tesseract/installation")
detextifier = Detextifier(text_detector, LocalSDInpainter())
detextifier.detextify("/my/input/image/path.png", "/my/output/image/path.png")
and 💣💥, just like that, your image is cleared of any bizarre text artifacts.
Or if you want to clean up a directory of PNG images, just wrap it in a for-loop:
import glob
from detextify.text_detector import TesseractTextDetector
from detextify.inpainter import LocalSDInpainter
from detextify.detextifier import Detextifier
text_detector = TesseractTextDetector("/path/to/tesseract/installation")
detextifier = Detextifier(text_detector, LocalSDInpainter())
for img_file in glob.glob("/path/to/dir/*.png"):
    detextifier.detextify(img_file, img_file.replace(".png", "_detextified.png"))
We provide multiple implementations for text detection and in-painting (both local and API-based), and you are also free to add your own.
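As a rough illustration of what adding your own might look like, here is a sketch of a custom in-painter. The `Inpainter` base class name, the `inpaint` signature, and the `TextBox` fields used below are assumptions for illustration only; check the abstract interfaces in detextify.inpainter and detextify.text_detector for the actual names before subclassing.
from PIL import Image, ImageFilter
from detextify.inpainter import Inpainter  # assumed base class name; verify in detextify.inpainter

class BlurInpainter(Inpainter):
    """Toy 'in-painter' that blurs each detected text box instead of generating new pixels.
    The method name, argument list, and TextBox fields are illustrative assumptions."""
    def inpaint(self, image_path, text_boxes, output_path):
        image = Image.open(image_path)
        for box in text_boxes:
            # Assumed TextBox fields (x, y, w, h); the real dataclass may differ.
            region = image.crop((box.x, box.y, box.x + box.w, box.y + box.h))
            image.paste(region.filter(ImageFilter.GaussianBlur(8)), (box.x, box.y))
        image.save(output_path)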
Text Detectors
`TesseractTextDetector` (based on Tesseract) runs locally. Follow this guide to install the `tesseract` library locally. On Ubuntu:
sudo apt install tesseract-ocr
sudo apt install libtesseract-dev
To find the path where it was installed (and pass it to the `TesseractTextDetector` constructor):
whereis tesseract
`AzureTextDetector` calls a computer vision API from Microsoft Azure. You will first need to create a Computer Vision resource via the Azure portal. Once created, take note of the endpoint and the key.
from detextify.text_detector import AzureTextDetector
AZURE_CV_ENDPOINT = "https://your-endpoint.cognitiveservices.azure.com"
AZURE_CV_KEY = "your-azure-key"
text_detector = AzureTextDetector(AZURE_CV_ENDPOINT, AZURE_CV_KEY)
Our evaluation shows that the two text detectors produce comparable results.
In-painters
- `LocalSDInpainter` (implemented via Hugging Face's `diffusers` library) runs locally and requires a GPU. Defaults to Stable Diffusion v2 for in-painting.
- `ReplicateSDInpainter` calls the Replicate API. Defaults to Stable Diffusion v2 for in-painting (and requires an API key).
- `DalleInpainter` calls the DALL·E 2 API from OpenAI (and requires an API key).
from detextify.inpainter import LocalSDInpainter, ReplicateSDInpainter, DalleInpainter
# You only need to instantiate one of the following:
local_inpainter = LocalSDInpainter()
replicate_inpainter = ReplicateSDInpainter("your-replicate-key")
dalle_inpainter = DalleInpainter("your-openai-key")
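Whichever in-painter you pick is wired into `Detextifier` together with a text detector, exactly as in the Usage section above. For example, pairing the Azure detector from the previous section with the DALL·E in-painter (the file paths are placeholders):
from detextify.detextifier import Detextifier
detextifier = Detextifier(text_detector, dalle_inpainter)
detextifier.detextify("/my/input/image/path.png", "/my/output/image/path.png")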
Contributing
To contribute, clone the repository, make your changes, commit and push to your clone, and submit a pull request.
To build the library, you need to install poetry:
curl -sSL https://install.python-poetry.org | python3 -
# Add poetry to your PATH. Note the specific path will differ depending on your system.
export PATH="/home/ubuntu/.local/bin:$PATH"
# Check the installation was successful:
poetry --version
Install dependencies for `detextify`:
poetry install
To execute a script, run:
poetry run python your_script.py
Please run the unit tests to make sure that your changes are not breaking the codebase:
poetry run pytest
Authors
This project was authored by Mihail Eric and Julia Turc. If you are building in the generative AI space, we want to hear from you!