JoliGEN is an integrated framework for training custom generative AI image-to-image models
Main Features:

- JoliGEN implements GAN, Diffusion, and Consistency models for paired and unpaired image-to-image translation tasks, including domain and style adaptation with conservation of semantics such as image and object classes, masks, ...
- JoliGEN's generative AI capabilities are targeted at real-world applications such as Controlled Image Generation, Augmented Reality, Smart Dataset Augmentation, object insertion, and Synthetic-to-Real transforms.
- JoliGEN allows for fast and stable training with astonishing results. A server with a REST API is provided for simplified deployment and usage.
- JoliGEN has a large scope of options and parameters. To avoid getting overwhelmed, follow the simple Quickstarts; they link to more detailed documentation on models, dataset formats, and data augmentation.
- AR and metaverse: replace any image element with super-realistic objects
- Image manipulation: seamlessly insert or remove objects/elements in images
- Image-to-image translation while preserving semantics, e.g. existing source dataset annotations
- Simulation to reality translation while preserving elements, metrics, ...
- Image generation to enrich datasets, e.g. counter dataset imbalance, increase test sets, ...
This is achieved by combining powerful and customized generator architectures, bags of discriminators, and configurable neural networks and losses that ensure conservation of fundamental elements between source and target images.
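The semantic-conservation idea above can be sketched as a loss term: a frozen segmenter (or classifier) labels the source image, and the generator is penalized when its output no longer yields the same labels. A minimal NumPy sketch of such a consistency loss — the function name and shapes are illustrative, not JoliGEN's actual API:

```python
import numpy as np

def semantic_consistency_loss(src_logits, gen_logits):
    """Cross-entropy between the segmenter's labels on the source image
    (treated as fixed pseudo-labels) and its predictions on the generated
    image. Both inputs are (H, W, C) per-pixel class logits."""
    # Hard pseudo-labels from the source image.
    pseudo = src_logits.argmax(axis=-1)                      # (H, W)
    # Numerically stable softmax over classes for the generated image.
    z = gen_logits - gen_logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # Negative log-likelihood of the pseudo-labels, averaged over pixels.
    h, w = pseudo.shape
    nll = -np.log(probs[np.arange(h)[:, None], np.arange(w)[None, :], pseudo] + 1e-12)
    return nll.mean()

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 4, 3))
low = semantic_consistency_loss(logits, logits * 10)    # same labels, confident
high = semantic_consistency_loss(logits, -logits * 10)  # flipped labels
```

A translation that preserves per-pixel classes keeps this loss near zero; one that changes them is penalized, which is what keeps source annotations valid on the translated images.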
Examples:

- Fill in missing areas with a diffusion network
- Mario to Sonic while preserving the action (running, jumping, ...)
- Virtual Try-On with Diffusion
- Car insertion (BDD100K) with Diffusion
- Glasses insertion (FFHQ) with Diffusion
- Glasses removal with GANs
- Day to night (BDD100K) with Transformers and GANs
- Clear to snow (BDD100K) by applying a generator multiple times to add snow incrementally
- SoTA image-to-image translation
- Semantic consistency: conservation of labels of many types (bounding boxes, masks, classes)
- SoTA discriminator models: projected, vision_aided, custom transformers.
- Advanced generators: real-time, transformers, hybrid transformers-CNN, Attention-based, UNet with attention, HDiT
- Multiple models based on adversarial and diffusion generation: CycleGAN, CyCADA, CUT, Palette
- GAN data augmentation mechanisms: APA, discriminator noise injection, standard image augmentation, online augmentation through sampling around bounding boxes
- Output quality metrics: FID, PSNR, KID, ...
- Server with REST API
- Support for both CPU and GPU
- Dockerized server
- Production-grade deployment in C++ via DeepDetect
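Of the quality metrics listed above, PSNR is simple enough to show directly (FID and KID additionally require a pretrained feature extractor). A self-contained sketch, not JoliGEN's implementation:

```python
import numpy as np

def psnr(reference, generated, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - generated.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((8, 8), 100.0)
noisy = ref + 10.0          # constant error of 10 -> MSE = 100
print(psnr(ref, noisy))     # 10 * log10(255^2 / 100) ≈ 28.13 dB
```

Higher is better; a higher PSNR between generated and reference images means lower pixel-wise reconstruction error.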
If you want to contribute, please use the black code format. Install it with:

```
pip install black
```

Usage:

```
black .
```

To format the code automatically before every commit:

```
pip install pre-commit
pre-commit install
```
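For reference, a typical `.pre-commit-config.yaml` wiring black into pre-commit looks like the following (the `rev` pin is an example; check black's releases for a current tag):

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.8.0   # example pin; use a current release tag
    hooks:
      - id: black
```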
JoliGEN is created and developed by Jolibrain.
Code structure is inspired by pytorch-CycleGAN-and-pix2pix, CUT, AttentionGAN, MoNCE, Palette among others.
Elements from JoliGEN are supported by the French National AI program "Confiance.AI"
Contact: contact@jolibrain.com