Awesome AI Image Signal Processing and Computational Photography
Deep learning for low-level computer vision and imaging
Computer Vision Lab, CAIDAS, University of Würzburg
Topics: This repository contains material for RAW image processing, RAW image reconstruction and synthesis, learned Image Signal Processing (ISP), image enhancement and restoration (denoising, deblurring), multi-lens bokeh effect rendering, and much more! 📷
Official repository for the following works:
- Efficient Multi-Lens Bokeh Effect Rendering and Transformation at CVPR NTIRE 2023.
- Perceptual Image Enhancement for Smartphone Real-Time Applications (LPIENet) at WACV 2023.
- Reversed Image Signal Processing and RAW Reconstruction, AIM 2022 Challenge Report, at ECCV AIM 2022.
- Model-Based Image Signal Processors via Learnable Dictionaries at AAAI 2022 (Oral).
- MAI 2022 Learned ISP Challenge complete baseline solution.
- Citation and Acknowledgement | Contact for any inquiries.
News 🚀🚀
- We will try to keep the repo updated on a monthly basis ✏️
- [06/2023] Lens-to-lens bokeh effect transformation and NTIRE 2023 material coming soon.
- [01/2023] LPIENet material is out.
- [10/2022] Reversed ISP and RAW Reconstruction material presented at the AIM workshop at ECCV 2022 is now available! Check it here.
<a href="https://openaccess.thecvf.com/content/CVPR2023W/NTIRE/papers/Seizinger_Efficient_Multi-Lens_Bokeh_Effect_Rendering_and_Transformation_CVPRW_2023_paper.pdf"><img src="media/papers/bokeh-ntire23.png" width="300" border="0"></a> | <a href="https://arxiv.org/abs/2210.13552"><img src="media/papers/lpienet-wacv23.png" width="300" border="0"></a> | <a href="https://arxiv.org/abs/2210.11153"><img src="media/papers/reisp-aim22.png" width="255" border="0"></a> | <a href="https://arxiv.org/abs/2201.03210"><img src="media/papers/isp-aaai22.png" width="300" border="0"></a> |
Efficient Multi-Lens Bokeh Effect Rendering and Transformation (CVPRW '23)
This work is the state-of-the-art method for bokeh rendering and transformation, and the baseline of the NTIRE 2023 Bokeh Challenge.
Read the full paper at: Efficient Multi-Lens Bokeh Effect Rendering and Transformation
Perceptual Image Enhancement for Smartphone Real-Time Applications (WACV '23)
This work was presented at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023.
Recent advances in camera designs and imaging pipelines allow us to capture high-quality images using smartphones. However, due to the small size and lens limitations of smartphone cameras, we commonly find artifacts or degradation in the processed images, e.g., noise, diffraction artifacts, blur, and HDR overexposure. We propose LPIENet, a lightweight network for perceptual image enhancement, with a focus on deploying it on smartphones.
The code is available at lpienet, including versions in PyTorch and TensorFlow. We also include the model conversion to TFLite, so you can generate the corresponding .tflite file and run the model using the AI Benchmark app on Android devices. In lpienet-tflite.ipynb you can find a complete tutorial to convert the model to TFLite.
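As a minimal sketch of what the notebook covers (the saved-model path below is a placeholder, and FP16 post-training quantization is just one of several options), the Keras-to-TFLite conversion looks roughly like this:

```python
import tensorflow as tf

# Load the trained TF/Keras model (placeholder path; see the lpienet
# folder for the actual checkpoints and model definition).
model = tf.keras.models.load_model("lpienet_tf_savedmodel")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# FP16 quantization roughly halves the model size and suits GPU-delegate inference.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

# Write the .tflite file that can be loaded in the AI Benchmark app.
with open("lpienet_fp16.tflite", "wb") as f:
    f.write(tflite_model)
```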
Contributions
- The model can process 4K images in under 1 s on commercial smartphones.
- We achieve competitive results compared to SOTA methods on relevant benchmarks for denoising, deblurring, and HDR correction, e.g., the SIDD benchmark.
- We reduce the number of MACs (or FLOPs) by 50× compared to NAFNet (a quick way to reproduce this kind of count is sketched below).
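To reproduce this kind of MAC/parameter comparison (not the paper's exact measurement protocol), a complexity counter such as ptflops can be used; the tiny network here is only a stand-in:

```python
import torch
from ptflops import get_model_complexity_info  # pip install ptflops

# Stand-in network; substitute LPIENet or NAFNet to compare their complexity.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(16, 3, 3, padding=1),
)

# Count multiply-accumulates and parameters for a 3x256x256 input.
macs, params = get_model_complexity_info(
    model, (3, 256, 256), as_strings=True, print_per_layer_stat=False
)
print(f"MACs: {macs} | Params: {params}")
```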
In this paper, we propose LPIENet, a lightweight network for perceptual image enhancement, with a focus on deploying it on smartphones. Our experiments show that, with much fewer parameters and operations, our model can deal with the mentioned artifacts and achieve competitive performance compared with state-of-the-art methods on standard benchmarks. Moreover, to prove the efficiency and reliability of our approach, we deployed the model directly on commercial smartphones and evaluated its performance. Our model can process 2K resolution images in under 1 second on mid-level commercial smartphones.
<a href="https://arxiv.org/abs/2210.13552"><img src="media/lpienet.png" alt="lpienet" width="800" border="0"></a>
<img src="lpienet/lpienet-app.png" width="300" border="0"> | <img src="lpienet/lpienet-plot.png" width="450" border="0"> |
Model-Based Image Signal Processors via Learnable Dictionaries (AAAI '22 Oral)
This work was presented at the 36th AAAI Conference on Artificial Intelligence as a Spotlight (15%).
Visit the project website, where you can find the poster, presentation, and more information.
A hybrid model-based and data-driven approach for modelling ISPs using learnable dictionaries. We explore RAW image reconstruction and improve downstream tasks such as RAW image denoising via RAW data augmentation and synthesis.
<a href="https://ojs.aaai.org/index.php/AAAI/article/view/19926/19685"><img src="mbispld/mbispld.png" alt="mbdlisp" width="800" border="0"></a>
If you have implementation questions or you need qualitative samples for comparison, please contact me. You can download the figure/illustration of our method in mbispld.
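For intuition only, here is a toy forward ISP built from the classical stages the paper parameterizes (white balance, color correction, gamma); the actual method learns dictionaries over such stage parameters, so treat this as a sketch, not the paper's implementation:

```python
import numpy as np

def toy_forward_isp(raw_rgb, wb_gains=(2.0, 1.0, 1.6), ccm=None, gamma=2.2):
    """Toy model-based ISP: white balance -> 3x3 CCM -> gamma compression.

    `raw_rgb` is a demosaiced linear image in [0, 1] with shape (H, W, 3).
    The gains, matrix, and gamma are illustrative placeholders, not learned values.
    """
    if ccm is None:
        ccm = np.eye(3)                               # identity CCM placeholder
    img = raw_rgb * np.asarray(wb_gains)              # per-channel white balance
    img = img @ np.asarray(ccm).T                     # 3x3 color correction
    img = np.clip(img, 0.0, 1.0) ** (1.0 / gamma)     # gamma compression
    return img

# Because each stage is (approximately) invertible, running them in reverse
# yields synthetic RAW data that can augment tasks such as RAW denoising.
```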
AIM 2022 Reversed ISP Challenge
This work was presented at the European Conference on Computer Vision (ECCV) 2022, AIM workshop.
Track 1 - S7 | Track 2 - P20
<a href="https://data.vision.ee.ethz.ch/cvl/aim22/"><img src="https://i.ibb.co/VJ7SSQj/aim-challenge-teaser.png" alt="aim-challenge-teaser" width="500" border="0"></a>
In this challenge, we look for solutions to recover RAW readings from the camera using only the corresponding RGB images processed by the in-camera ISP. Successful solutions should generate plausible RAW images, and by doing this, other downstream tasks like Denoising, Super-resolution or Colour Constancy can benefit from such synthetic data generation. Click here to read more information about the challenge.
Starter guide and code 🔥
- aim-starter-code.ipynb - Simple dataloading and visualization of RGB-RAW pairs + other utils (see the RAW packing sketch after this list).
- aim-baseline.ipynb - End-to-end guide to load the data, train a simple UNet model and make your first submission!
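For a feel of the data format, here is a minimal sketch of packing a Bayer mosaic into the 4-channel representation used in such pipelines; the RGGB phase order is an assumption, and the starter notebook defines the exact convention per track:

```python
import numpy as np

def pack_raw_bayer(raw):
    """Pack an (H, W) Bayer mosaic into an (H/2, W/2, 4) channel stack.

    Assumes an RGGB phase; check the AIM starter code for the exact
    layout used by each camera (Samsung S7 / Huawei P20).
    """
    return np.stack(
        (raw[0::2, 0::2],   # R
         raw[0::2, 1::2],   # G1
         raw[1::2, 0::2],   # G2
         raw[1::2, 1::2]),  # B
        axis=-1,
    )

mosaic = np.random.rand(504, 504).astype(np.float32)  # dummy normalized RAW
print(pack_raw_bayer(mosaic).shape)  # (252, 252, 4)
```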
MAI 2022 Learned ISP Challenge
At mai22-learnedisp you can find an end-to-end baseline: dataloading, training a top solution, and model conversion to TFLite. The model achieved 23.46 dB PSNR after training for a few hours. Below you can see a sample RAW input and the resulting RGB.
<img src="mai22-learnedisp/result-isp3.png" width="400" border="0">
We tested the model on AI Benchmark. The average latency is 60 ms for a (544, 960, 4) input RAW image producing a (1088, 1920, 3) RGB output, on a mid-level smartphone (45.4 AI-score) using the GPU delegate and FP16.
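The 2× upsampling from packed RAW (544×960×4) to full-resolution RGB (1088×1920×3) is commonly handled with a depth-to-space (pixel shuffle) head; below is a minimal sketch of such a head, not the exact baseline architecture:

```python
import tensorflow as tf

# Minimal learned-ISP head: extract features from the packed RAW, project to
# 12 channels (3 RGB x 2 x 2), then depth-to-space upsampling by a factor of 2.
inp = tf.keras.Input(shape=(544, 960, 4))
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
x = tf.keras.layers.Conv2D(12, 3, padding="same")(x)
out = tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, 2))(x)
model = tf.keras.Model(inp, out)   # output shape: (1088, 1920, 3)
model.summary()
```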
Citation and Acknowledgement
@inproceedings{conde2022model,
title={Model-Based Image Signal Processors via Learnable Dictionaries},
author={Conde, Marcos V and McDonagh, Steven and Maggioni, Matteo and Leonardis, Ales and P{\'e}rez-Pellitero, Eduardo},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
volume={36},
number={1},
pages={481--489},
year={2022}
}
@inproceedings{conde2022aim,
title={{R}eversed {I}mage {S}ignal {P}rocessing and {RAW} {R}econstruction. {AIM} 2022 {C}hallenge {R}eport},
author={Conde, Marcos V and Timofte, Radu and others},
booktitle={Proceedings of the European Conference on Computer Vision Workshops (ECCVW)},
year={2022}
}
Contact
Marcos Conde (marcos.conde@uni-wuerzburg.de) is the contact person and a co-organizer of the NTIRE and AIM challenges.