<div align="center"> <img src="https://github.com/neosr-project/neosr/assets/132400428/54e8f7fa-8705-4ea3-8b6e-c6227117044d?sanitize=true" width="480"></img>

<a href="https://discord.gg/NN2HGtJ3d6"><img src="https://github.com/neosr-project/neosr/assets/132400428/4bd54b1d-4639-4940-b9c7-b3f212aea5c8?sanitize=true" width="100"></img></a><br> Join our <a href="https://discord.gg/NN2HGtJ3d6">Discord</a>

</div>

neosr is an open-source framework for training super-resolution models. It provides a comprehensive and reproducible environment for achieving state-of-the-art image restoration results, making it suitable for enthusiasts, professionals, and machine-learning researchers alike. It serves as a versatile platform that aims to bridge the gap between practical application and academic research in the field.

For more information, see our wiki.

## 🤝 support the project

> [!TIP]
> Consider supporting the project on KoFi ☕ or Patreon

## 💻 installation

Requires Python 3.12 and CUDA >=12.4. Clone the repository and install via poetry:

```shell
git clone https://github.com/neosr-project/neosr
cd neosr
poetry install --sync
```

See the Installation Instructions for details.

## ⏩ quick start

Start training by running:

```shell
python train.py -opt options.toml
```

Where `options.toml` is a configuration file. Templates can be found in options.
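As a rough orientation, a configuration file follows a structure along these lines. This is an illustrative sketch only: the key and section names below (`name`, `model_type`, `[datasets.train]`, `[network_g]`, and so on) are assumptions based on typical BasicSR-style layouts, and the values simply reuse options from the tables in this README. Always copy a real template from options instead of starting from this sketch.

```toml
# Hypothetical minimal sketch of an options.toml -- key names are
# illustrative assumptions; consult the templates in options/ for
# the authoritative layout and the full set of parameters.
name = "4x_esrgan_example"
model_type = "image"      # "image" or "otf" (see supported models)
scale = 4

[datasets.train]
type = "paired"           # paired dataloader: LQ/GT image pairs
dataroot_gt = "datasets/gt"
dataroot_lq = "datasets/lq"

[network_g]
type = "esrgan"           # any arch option from the table below

[train]
optim_g = { type = "adamw", lr = 1e-4 }
```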

> [!TIP]
> Please read the wiki Configuration Walkthrough for an explanation of each option.

## ✨ features

supported archs:

| arch | option |
|------|--------|
| Real-ESRGAN | `esrgan` |
| SRVGGNetCompact | `compact` |
| SwinIR | `swinir_small`, `swinir_medium` |
| HAT | `hat_s`, `hat_m`, `hat_l` |
| OmniSR | `omnisr` |
| SRFormer | `srformer_light`, `srformer_medium` |
| DAT | `dat_small`, `dat_medium`, `dat_2` |
| DITN | `ditn` |
| DCTLSA | `dctlsa` |
| SPAN | `span` |
| Real-CUGAN | `cugan` |
| CRAFT | `craft` |
| SAFMN | `safmn`, `safmn_l` |
| RGT | `rgt`, `rgt_s` |
| ATD | `atd`, `atd_light` |
| PLKSR | `plksr`, `plksr_tiny` |
| RealPLKSR | `realplksr`, `realplksr_s` |
| DRCT | `drct`, `drct_l`, `drct_s` |
| MSDAN | `msdan` |
| SPANPlus | `spanplus`, `spanplus_sts`, `spanplus_s`, `spanplus_st` |
| HiT-SRF | `hit_srf`, `hit_srf_medium`, `hit_srf_large` |
| HMA | `hma`, `hma_medium`, `hma_large` |
| MAN | `man`, `man_tiny`, `man_light` |
| light-SAFMN++ | `light_safmnpp` |
| MoSR | `mosr`, `mosr_t` |
| GRFormer | `grformer`, `grformer_medium`, `grformer_large` |
| EIMN | `eimn`, `eimn_a`, `eimn_l` |

> [!NOTE]
> For all arch-specific parameters, read the wiki.

under testing

| arch | option |
|------|--------|
| Swin2-MoSE | `swin2mose` |
| LMLT | `lmlt`, `lmlt_tiny`, `lmlt_large` |
| DCT | `dct` |
| FIWHN | `fiwhn` |
| KRGN | `krgn` |
| PlainUSR | `plainusr`, `plainusr_ultra`, `plainusr_large` |
| HASN | `hasn` |
| FlexNet | `flexnet`, `metaflexnet` |
| CFSR | `cfsr` |

supported discriminators:

| net | option |
|-----|--------|
| U-Net w/ SN | `unet` |
| PatchGAN w/ SN | `patchgan` |
| EA2FPN (bespoke, based on A2-FPN) | `ea2fpn` |
| DUnet | `dunet` |

supported optimizers:

| optimizer | option |
|-----------|--------|
| Adam | `Adam` or `adam` |
| AdamW | `AdamW` or `adamw` |
| NAdam | `NAdam` or `nadam` |
| Adan | `Adan` or `adan` |
| AdamW Win2 | `AdamW_Win` or `adamw_win` |
| ECO strategy | `eco`, `eco_iters` |
| AdamW Schedule-Free | `adamw_sf` |
| Adan Schedule-Free | `adan_sf` |
| F-SAM | `fsam`, `FSAM` |

supported losses:

| loss | option |
|------|--------|
| L1 Loss | `L1Loss`, `l1_loss` |
| L2 Loss | `MSELoss`, `mse_loss` |
| Huber Loss | `HuberLoss`, `huber_loss` |
| CHC (Clipped Huber with Cosine Similarity Loss) | `chc_loss` |
| NCC (Normalized Cross-Correlation) | `ncc_opt`, `ncc_loss` |
| Perceptual Loss | `perceptual_opt`, `vgg_perceptual_loss` |
| GAN | `gan_opt`, `gan_loss` |
| MS-SSIM | `mssim_opt`, `mssim_loss` |
| LDL Loss | `ldl_opt`, `ldl_loss` |
| Focal Frequency | `ff_opt`, `ff_loss` |
| DISTS | `dists_opt`, `dists_loss` |
| Wavelet Guided | `wavelet_guided` |
| Gradient-Weighted | `gw_opt`, `gw_loss` |
| Perceptual Patch Loss | `perceptual_opt`, `patchloss`, `ipk` |
| Consistency Loss (Oklab and CIE L\*) | `consistency_opt`, `consistency_loss` |
| KL Divergence | `kl_opt`, `kl_loss` |
| MS-SWD | `msswd_opt`, `msswd_loss` |
| FDL | `fdl_opt`, `fdl_loss` |

supported augmentations:

| augmentation | option |
|--------------|--------|
| Rotation | `use_rot` |
| Flip | `use_hflip` |
| MixUp | `mixup` |
| CutMix | `cutmix` |
| ResizeMix | `resizemix` |
| CutBlur | `cutblur` |

supported models:

| model | description | option |
|-------|-------------|--------|
| Image | Base model for SISR, supports both Generator and Discriminator | `image` |
| OTF | Builds on top of `image`, adding Real-ESRGAN on-the-fly degradations | `otf` |

supported dataloaders:

| loader | option |
|--------|--------|
| Paired datasets | `paired` |
| Single datasets (for inference, no GT required) | `single` |
| Real-ESRGAN on-the-fly degradation | `otf` |

## 📸 datasets

As part of neosr, I have released a dataset series called Nomos. The purpose of these datasets is to distill only the best images from the academic and community datasets. A total of 14 datasets were manually reviewed and processed, including: Adobe-MIT-5k, RAISE, LSDIR, LIU4k-v2, KONIQ-10k, Nikon LL RAW, DIV8k, FFHQ, Flickr2k, ModernAnimation1080_v2, Rawsamples, SignatureEdits, Hasselblad raw samples and Unsplash.

```mermaid
pie
  title Nomos-v2 distribution
  "Animal / fur" : 439
  "Interiors" : 280
  "Exteriors / misc" : 696
  "Architecture / geometric" : 1470
  "Drawing / painting / anime" : 1076
  "Humans" : 598
  "Mountain / Rocks" : 317
  "Text" : 102
  "Textures" : 439
  "Vegetation" : 574
```

| dataset download | sha256 |
|------------------|--------|
| nomosv2 (3GB) | sha256 |
| nomosv2.lmdb (3GB) | sha256 |
| nomosv2_lq_4x (187MB) | sha256 |
| nomosv2_lq_4x.lmdb (187MB) | sha256 |
| nomos_uni (1.3GB) | sha256 |
| nomos_uni.lmdb (1.3GB) | sha256 |
| nomos_uni_lq_4x | sha256 |
| nomos_uni_lq_4x.lmdb | sha256 |
| hfa2k | sha256 |

community datasets

Datasets made by the upscaling community. More info can be found in the authors' repositories.

| dataset | download |
|---------|----------|
| @Phhofm 4xNomosRealWeb | Release page |
| @Phhofm FaceUp | GDrive (4GB) |
| @Phhofm SSDIR | GDrive (4.5GB) |
| @Phhofm ArtFaces | Release page |
| @Phhofm Nature Dataset | Release page |
| @umzi2 Digital Art (v2) | Release page |

## 📖 resources

## 📄 license and acknowledgements

Released under the Apache license. All licenses are listed in license/readme. This code was originally based on BasicSR.

Thanks to victorca25/traiNNer, styler00dollar/Colab-traiNNer and timm for providing helpful insights into some problems.

Thanks to active contributors @Phhofm, @Sirosky, and @umzi2 for helping with tests and bug reporting.