<div align="center"> <img src="imgs/joligen.svg" width="512"> </div>

<h1 align="center">Generative AI Image Toolset with GANs, Diffusion and Consistency Models for Real-World Applications</h1>

JoliGEN is an integrated framework for training custom generative AI image-to-image models.
Main Features:

- JoliGEN implements GAN, Diffusion and Consistency models for unpaired and paired image-to-image translation tasks, including domain and style adaptation with conservation of semantics such as image and object classes, masks, ...
- JoliGEN's generative AI capabilities target real-world applications such as Controlled Image Generation, Augmented Reality, Dataset Smart Augmentation and object insertion, and Synthetic-to-Real transforms.
- JoliGEN allows for fast and stable training with astonishing results. A server with a REST API is provided for simplified deployment and usage.
- JoliGEN has a large scope of options and parameters. To avoid getting overwhelmed, follow the simple Quickstarts; from there, links lead to more detailed documentation on models, dataset formats, and data augmentation.
Use cases
- AR and metaverse: replace any image element with super-realistic objects
- Image manipulation: seamlessly insert or remove objects/elements in images
- Image-to-image translation while preserving semantics, e.g. existing source dataset annotations
- Simulation-to-reality translation while preserving elements, metrics, ...
- Image generation to enrich datasets, e.g. counter dataset imbalance, increase test sets, ...
This is achieved by combining powerful and customized generator architectures, bags of discriminators, and configurable neural networks and losses that ensure conservation of fundamental elements between source and target images.
Example results
Satellite imagery inpainting
Fill up missing areas with diffusion network
Image translation while preserving the class
Mario to Sonic while preserving the action (running, jumping, ...)
Object insertion
Virtual Try-On with Diffusion
Car insertion (BDD100K) with Diffusion
Glasses insertion (FFHQ) with Diffusion
<img src="https://github.com/jolibrain/joliGEN/assets/3530657/eba7920d-4430-4f46-b65c-6cf2267457b0" alt="drawing" width="512"/> <img src="https://github.com/jolibrain/joliGEN/assets/3530657/ef908a7f-375f-4d0a-afec-72d1ee7eaafe" alt="drawing" width="512"/>

Object removal
Glasses removal with GANs
Style transfer while preserving label boxes (e.g. cars, pedestrians, street signs, ...)
Day to night (BDD100K) with Transformers and GANs
Clear to snow (BDD100K) by applying a generator multiple times to add snow incrementally
Clear to overcast (BDD100K)
Clear to rainy (BDD100K)
Features
- SoTA image-to-image translation
- Semantic consistency: conservation of labels of many types: bounding boxes, masks, classes.
- SoTA discriminator models: projected, vision_aided, custom transformers.
- Advanced generators: real-time, transformers, hybrid transformers-CNN, Attention-based, UNet with attention, HDiT
- Multiple models based on adversarial and diffusion generation: CycleGAN, CyCADA, CUT, Palette
- GAN data augmentation mechanisms: APA, discriminator noise injection, standard image augmentation, online augmentation through sampling around bounding boxes
- Output quality metrics: FID, PSNR, KID, ...
- Server with REST API
- Support for both CPU and GPU
- Dockerized server
- Production-grade deployment in C++ via DeepDetect
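For intuition about one of the quality metrics listed above, PSNR compares two images via their mean squared error: PSNR = 10·log10(MAX² / MSE), in decibels. Below is a minimal NumPy sketch for illustration only; it is not JoliGEN's implementation.

```python
import numpy as np

def psnr(img_a: np.ndarray, img_b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between two same-shaped images."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: infinite PSNR
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: a blank image vs. a copy shifted by 1 (MSE = 1)
a = np.zeros((8, 8))
b = a + 1.0
print(psnr(a, b))  # 10 * log10(255^2) ≈ 48.13 dB
```

Higher PSNR means the generated image is closer to the reference; FID and KID, by contrast, compare feature distributions over whole datasets rather than pixel values.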
Code format and Contribution
If you want to contribute, please use the black code format. Install it with:

```bash
pip install black
```

Usage:

```bash
black .
```

If you want to format the code automatically before every commit:

```bash
pip install pre-commit
pre-commit install
```
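The `pre-commit install` step reads its hooks from a `.pre-commit-config.yaml` at the repository root. If you need to create one, a minimal configuration for black looks like the following (the `rev` pin is illustrative; check the repo's actual config first):

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.4.2  # example pin; use the version the project targets
    hooks:
      - id: black
```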
Authors
JoliGEN is created and developed by Jolibrain.
Code structure is inspired by pytorch-CycleGAN-and-pix2pix, CUT, AttentionGAN, MoNCE, and Palette, among others.
Elements from JoliGEN are supported by the French National AI program "Confiance.AI".
Contact: contact@jolibrain.com