Albumentations

PyPI version CI PyPI Downloads Conda Downloads Stack Overflow License: MIT

Docs | Discord | Twitter | LinkedIn

Albumentations is a Python library for image augmentation. Image augmentation is used in deep learning and computer vision tasks to increase the quality of trained models. The purpose of image augmentation is to create new training samples from the existing data.

Here is an example of how you can apply some pixel-level augmentations from Albumentations to create new images from the original one:

[Example image: pixel-level augmentations applied to a photo of a parrot]
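
As a minimal sketch of such a pixel-level pipeline (the transform choices, probabilities, and file name here are illustrative, not the exact ones used to produce the example image):

import albumentations as A
import cv2

# A pipeline made only of pixel-level transforms: masks, bounding boxes,
# and keypoints (if passed) would be returned unchanged.
pixel_transform = A.Compose([
    A.RandomBrightnessContrast(p=0.5),
    A.HueSaturationValue(p=0.5),
    A.GaussNoise(p=0.3),
])

# Hypothetical input file; Albumentations expects RGB numpy arrays.
image = cv2.cvtColor(cv2.imread("parrot.jpg"), cv2.COLOR_BGR2RGB)
augmented_image = pixel_transform(image=image)["image"]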

Why Albumentations

Sponsors

<a href="https://roboflow.com/" target="_blank"><img src="https://avatars.githubusercontent.com/u/53104118?s=200&v=4" width="100"/></a>

Authors

Vladimir I. Iglovikov | Kaggle Grandmaster

Mikhail Druzhinin | Kaggle Expert

Alex Parinov | Kaggle Master

Alexander Buslaev — Computer Vision Engineer at Mapbox | Kaggle Master

Eugene Khvedchenya — Computer Vision Research Engineer at Piñata Farms | Kaggle Grandmaster

Installation

Albumentations requires Python 3.8 or higher. To install the latest version from PyPI:

pip install -U albumentations

Other installation options are described in the documentation.
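
For example, if you use conda, the package is also distributed on conda-forge (an assumption here, suggested by the Conda Downloads badge above; check the documentation for the recommended channel):

conda install -c conda-forge albumentations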

Documentation

The full documentation is available at https://albumentations.ai/docs/.

A simple example

import albumentations as A
import cv2

# Declare an augmentation pipeline
transform = A.Compose([
    A.RandomCrop(width=256, height=256),
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
])

# Read an image with OpenCV and convert it to the RGB colorspace
image = cv2.imread("image.jpg")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Augment an image
transformed = transform(image=image)
transformed_image = transformed["image"]

Getting started

I am new to image augmentation

Please start with the introduction articles about why image augmentation is important and how it helps to build better models.

I want to use Albumentations for a specific task such as classification or segmentation

If you want to use Albumentations for a specific task such as classification, segmentation, or object detection, refer to the set of articles that give an in-depth description of each task. We also have a list of examples of applying Albumentations to different use cases.
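
As a hedged sketch of what such a task-specific pipeline can look like (the file names, bounding boxes, and labels below are placeholders), bounding boxes and masks are passed to Compose alongside the image:

import albumentations as A
import cv2

# bbox_params tells Compose how to interpret and transform bounding boxes;
# label_fields keeps class labels in sync with boxes that get cropped away.
transform = A.Compose(
    [
        A.RandomCrop(width=512, height=512),
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.2),
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)

image = cv2.cvtColor(cv2.imread("image.jpg"), cv2.COLOR_BGR2RGB)
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
bboxes = [[23, 74, 295, 388]]   # pascal_voc format: [x_min, y_min, x_max, y_max]
class_labels = ["dog"]

transformed = transform(image=image, mask=mask, bboxes=bboxes, class_labels=class_labels)
transformed_image = transformed["image"]
transformed_mask = transformed["mask"]
transformed_bboxes = transformed["bboxes"]
transformed_labels = transformed["class_labels"]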

I want to know how to use Albumentations with deep learning frameworks

We have examples of using Albumentations along with PyTorch and TensorFlow.
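
As a minimal, hedged sketch of the PyTorch integration (the dataset class and sample list are made up for illustration), an Albumentations pipeline is typically applied inside a Dataset, with ToTensorV2 converting the augmented numpy image to a torch tensor:

import albumentations as A
import cv2
import torch
from albumentations.pytorch import ToTensorV2
from torch.utils.data import Dataset

train_transform = A.Compose([
    A.Resize(height=224, width=224),
    A.HorizontalFlip(p=0.5),
    A.Normalize(),   # scales to [0, 1] and normalizes with ImageNet statistics by default
    ToTensorV2(),    # HWC numpy array -> CHW torch tensor
])

class ImageDataset(Dataset):
    """Hypothetical dataset over a list of (image_path, label) pairs."""

    def __init__(self, samples, transform=None):
        self.samples = samples
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        image = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
        if self.transform is not None:
            image = self.transform(image=image)["image"]
        return image, torch.tensor(label)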

I want to explore augmentations and see Albumentations in action

Check the online demo of the library. With it, you can apply augmentations to different images and see the result. Also, we have a list of all available augmentations and their targets.

Who is using Albumentations

<a href="https://www.apple.com/" target="_blank"><img src="https://raw.githubusercontent.com/albumentations-team/albumentations.ai/main/html/assets/img/industry/apple.jpeg" width="100"/></a> <a href="https://research.google/" target="_blank"><img src="https://raw.githubusercontent.com/albumentations-team/albumentations.ai/main/html/assets/img/industry/google.png" width="100"/></a> <a href="https://opensource.fb.com/" target="_blank"><img src="https://raw.githubusercontent.com/albumentations-team/albumentations.ai/main/html/assets/img/industry/meta_research.png" width="100"/></a> <a href="https: //www.nvidia.com/en-us/research/" target="_blank"><img src="https://raw.githubusercontent.com/albumentations-team/albumentations.ai/main/html/assets/img/industry/nvidia_research.jpeg" width="100"/></a> <a href="https://www.amazon.science/" target="_blank"><img src="https://raw.githubusercontent.com/albumentations-team/albumentations.ai/main/html/assets/img/industry/amazon_science.png" width="100"/></a> <a href="https://opensource.microsoft.com/" target="_blank"><img src="https://raw.githubusercontent.com/albumentations-team/albumentations.ai/main/html/assets/img/industry/microsoft.png" width="100"/></a> <a href="https://engineering.salesforce.com/open-source/" target="_blank"><img src="https://raw.githubusercontent.com/albumentations-team/albumentations.ai/main/html/assets/img/industry/salesforce_open_source.png" width="100"/></a> <a href="https://stability.ai/" target="_blank"><img src="https://raw.githubusercontent.com/albumentations-team/albumentations.ai/main/html/assets/img/industry/stability.png" width="100"/></a> <a href="https://www.ibm.com/opensource/" target="_blank"><img src="https://raw.githubusercontent.com/albumentations-team/albumentations.ai/main/html/assets/img/industry/ibm.jpeg" width="100"/></a> <a href="https://huggingface.co/" target="_blank"><img src="https://raw.githubusercontent.com/albumentations-team/albumentations.ai/main/html/assets/img/industry/hugging_face.png" width="100"/></a> <a href="https://www.sony.com/en/" target="_blank"><img src="https://raw.githubusercontent.com/albumentations-team/albumentations.ai/main/html/assets/img/industry/sony.png" width="100"/></a> <a href="https://opensource.alibaba.com/" target="_blank"><img src="https://raw.githubusercontent.com/albumentations-team/albumentations.ai/main/html/assets/img/industry/alibaba.png" width="100"/></a> <a href="https://opensource.tencent.com/" target="_blank"><img src="https://raw.githubusercontent.com/albumentations-team/albumentations.ai/main/html/assets/img/industry/tencent.png" width="100"/></a> <a href="https://h2o.ai/" target="_blank"><img src="https://raw.githubusercontent.com/albumentations-team/albumentations.ai/main/html/assets/img/industry/h2o_ai.png" width="100"/></a>

See also

List of augmentations

Pixel-level transforms

Pixel-level transforms will change just an input image and will leave any additional targets such as masks, bounding boxes, and keypoints unchanged. The full list of pixel-level transforms is available in the documentation.

Spatial-level transforms

Spatial-level transforms will simultaneously change both an input image as well as additional targets such as masks, bounding boxes, and keypoints. Each transform supports a specific subset of these targets; the per-transform support table is available in the documentation. The list of spatial-level transforms (a short keypoint-augmentation sketch follows the list):

Affine
BBoxSafeRandomCrop
CenterCrop
CoarseDropout
Crop
CropAndPad
CropNonEmptyMaskIfExists
D4
ElasticTransform
Flip
GridDistortion
GridDropout
GridElasticDeform
HorizontalFlip
Lambda
LongestMaxSize
MaskDropout
Morphological
NoOp
OpticalDistortion
PadIfNeeded
Perspective
PiecewiseAffine
PixelDropout
RandomCrop
RandomCropFromBorders
RandomGridShuffle
RandomResizedCrop
RandomRotate90
RandomScale
RandomSizedBBoxSafeCrop
RandomSizedCrop
Resize
Rotate
SafeRotate
ShiftScaleRotate
SmallestMaxSize
Transpose
VerticalFlip
XYMasking
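
As a hedged sketch of how a spatial-level pipeline keeps keypoints in sync with the image (the coordinates and file name below are placeholders), keypoints are passed through Compose's keypoint_params:

import albumentations as A
import cv2

# keypoint_params tells Compose how keypoint coordinates are encoded ("xy" = pixel (x, y)).
transform = A.Compose(
    [
        A.Rotate(limit=30, p=1.0),
        A.HorizontalFlip(p=0.5),
    ],
    keypoint_params=A.KeypointParams(format="xy"),
)

image = cv2.cvtColor(cv2.imread("image.jpg"), cv2.COLOR_BGR2RGB)
keypoints = [(120, 85), (240, 310)]   # placeholder (x, y) coordinates

transformed = transform(image=image, keypoints=keypoints)
transformed_image = transformed["image"]
transformed_keypoints = transformed["keypoints"]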

Mixing-level transforms

Transforms that mix several images into one. Their supported targets (image, mask, bounding boxes, keypoints, global label) are listed per transform in the documentation. The list of mixing-level transforms:

MixUp
OverlayElements

A few more examples of augmentations

Semantic segmentation on the Inria dataset


Medical imaging


Object detection and semantic segmentation on the Mapillary Vistas dataset


Keypoints augmentation

<img src="https://habrastorage.org/webt/e-/6k/z-/e-6kz-fugp2heak3jzns3bc-r8o.jpeg" width=100%>

Benchmarking results

To run the benchmark yourself, follow the instructions in benchmark/README.md.
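
For a rough single-transform measurement outside the official benchmark harness (a hedged sketch; the image path and iteration count are arbitrary), you can time a transform directly:

import time

import albumentations as A
import cv2

image = cv2.cvtColor(cv2.imread("image.jpg"), cv2.COLOR_BGR2RGB)
transform = A.HorizontalFlip(p=1.0)

# Apply the transform repeatedly and report throughput in images per second.
n_iters = 1000
start = time.perf_counter()
for _ in range(n_iters):
    transform(image=image)
elapsed = time.perf_counter() - start
print(f"{n_iters / elapsed:.0f} images per second")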

Results for running the benchmark on the first 2000 images from the ImageNet validation set using an AMD Ryzen Threadripper 3970X CPU. The table shows how many images per second can be processed on a single core; higher is better.

| Library | Version |
|---|---|
| Python | 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] |
| albumentations | 1.4.11 |
| imgaug | 0.4.0 |
| torchvision | 0.18.1+rocm6.0 |
| numpy | 1.26.4 |
| opencv-python-headless | 4.10.0.84 |
| scikit-image | 0.24.0 |
| scipy | 1.14.0 |
| pillow | 10.4.0 |
| kornia | 0.7.3 |
| augly | 1.0.0 |

| Transform | albumentations<br><small>1.4.11</small> | torchvision<br><small>0.18.1+rocm6.0</small> | kornia<br><small>0.7.3</small> | augly<br><small>1.0.0</small> | imgaug<br><small>0.4.0</small> |
|---|---|---|---|---|---|
| HorizontalFlip | 8017 ± 12 | 2436 ± 2 | 935 ± 3 | 3575 ± 4 | 4806 ± 7 |
| VerticalFlip | 7366 ± 7 | 2563 ± 8 | 943 ± 1 | 4949 ± 5 | 8159 ± 21 |
| Rotate | 570 ± 12 | 152 ± 2 | 207 ± 1 | 633 ± 2 | 496 ± 2 |
| Affine | 1382 ± 31 | 162 ± 1 | 201 ± 1 | - | 682 ± 2 |
| Equalize | 1027 ± 2 | 336 ± 2 | 77 ± 1 | - | 1183 ± 1 |
| RandomCrop64 | 19986 ± 57 | 15336 ± 16 | 811 ± 1 | 19882 ± 356 | 5410 ± 5 |
| RandomResizedCrop | 2308 ± 7 | 1046 ± 3 | 187 ± 1 | - | - |
| ShiftRGB | 1240 ± 3 | - | 425 ± 2 | - | 1554 ± 6 |
| Resize | 2314 ± 9 | 1272 ± 3 | 201 ± 3 | 431 ± 1 | 1715 ± 2 |
| RandomGamma | 2552 ± 2 | 232 ± 1 | 211 ± 1 | - | 1794 ± 1 |
| Grayscale | 7313 ± 4 | 1652 ± 2 | 443 ± 2 | 2639 ± 2 | 1171 ± 23 |
| ColorJitter | 396 ± 1 | 51 ± 1 | 50 ± 1 | 224 ± 1 | - |
| PlankianJitter | 449 ± 1 | - | 598 ± 1 | - | - |
| RandomPerspective | 471 ± 1 | 123 ± 1 | 114 ± 1 | - | 478 ± 2 |
| GaussianBlur | 2099 ± 2 | 113 ± 2 | 79 ± 2 | 165 ± 1 | 1244 ± 2 |
| MedianBlur | 538 ± 1 | - | 3 ± 1 | - | 565 ± 1 |
| MotionBlur | 2197 ± 9 | - | 102 ± 1 | - | 508 ± 1 |
| Posterize | 2449 ± 1 | 2587 ± 3 | 339 ± 6 | - | 1547 ± 1 |
| JpegCompression | 827 ± 1 | - | 50 ± 2 | 684 ± 1 | 428 ± 4 |
| GaussianNoise | 78 ± 1 | - | - | 67 ± 1 | 128 ± 1 |
| Elastic | 127 ± 1 | 3 ± 1 | 1 ± 1 | - | 130 ± 1 |
| Normalize | 971 ± 2 | 449 ± 1 | 415 ± 1 | - | - |

Contributing

To create a pull request to the repository, follow the documentation at CONTRIBUTING.md.

https://github.com/albumentations-team/albumentations/graphs/contributors

Community and Support

Comments

On some systems, in multi-GPU setups, PyTorch may deadlock the DataLoader if OpenCV was compiled with OpenCL optimizations. Setting the following OpenCV flags before importing the library may help. For more details, see https://github.com/pytorch/pytorch/issues/1355

import cv2

cv2.setNumThreads(0)          # disable OpenCV's internal threading
cv2.ocl.setUseOpenCL(False)   # disable OpenCL optimizations

Citing

If you find this library useful for your research, please consider citing Albumentations: Fast and Flexible Image Augmentations:

@Article{info11020125,
    AUTHOR = {Buslaev, Alexander and Iglovikov, Vladimir I. and Khvedchenya, Eugene and Parinov, Alex and Druzhinin, Mikhail and Kalinin, Alexandr A.},
    TITLE = {Albumentations: Fast and Flexible Image Augmentations},
    JOURNAL = {Information},
    VOLUME = {11},
    YEAR = {2020},
    NUMBER = {2},
    ARTICLE-NUMBER = {125},
    URL = {https://www.mdpi.com/2078-2489/11/2/125},
    ISSN = {2078-2489},
    DOI = {10.3390/info11020125}
}