# Awesome-CVPR2024-Low-Level-Vision
I am working with Kobaayyy on the collection of papers and code in CVPR2024 related to Low-Level Vision.
Please see [HERE](https://github.com/Kobaayyy/Awesome-CVPR2024-CVPR2021-CVPR2020-Low-Level-Vision).
<!--
A Collection of Papers and Codes in CVPR2024 related to Low-Level Vision

**[In Construction]** If you find some missing papers or typos, feel free to open issues or pull requests.

A similar collection can be found [Here](https://github.com/Kobaayyy/Awesome-CVPR2024-CVPR2021-CVPR2020-Low-Level-Vision)

## Related collections for low-level vision
- [Awesome-CVPR2023-Low-Level-Vision](https://github.com/DarrenPan/Awesome-CVPR2024-Low-Level-Vision/blob/main/CVPR2023-Low-Level-Vision.md)
- [Awesome-CVPR2022-Low-Level-Vision](https://github.com/DarrenPan/Awesome-CVPR2024-Low-Level-Vision/blob/main/CVPR2022-Low-Level-Vision.md)
- [Awesome-ICCV2023/2021-Low-Level-Vision](https://github.com/DarrenPan/Awesome-ICCV2023-Low-Level-Vision)
- [Awesome-NeurIPS2023-2021-Low-Level-Vision](https://github.com/DarrenPan/Awesome-NeurIPS2023-Low-Level-Vision)
- [Awesome-AAAI2023/2022-Low-Level-Vision](https://github.com/DarrenPan/Awesome-AAAI2023-Low-Level-Vision)
- [Awesome-ECCV2022-Low-Level-Vision](https://github.com/DarrenPan/Awesome-ECCV2022-Low-Level-Vision)

## Overview
- [Image Restoration](#image-restoration)
  - [Video Restoration](#video-restoration)
- [Super Resolution](#super-resolution)
  - [Image Super Resolution](#image-super-resolution)
  - [Video Super Resolution](#video-super-resolution)
- [Image Rescaling](#image-rescaling)
- [Denoising](#denoising)
  - [Image Denoising](#image-denoising)
- [Deblurring](#deblurring)
  - [Image Deblurring](#image-deblurring)
  - [Video Deblurring](#video-deblurring)
- [Deraining](#deraining)
- [Dehazing](#dehazing)
- [HDR Imaging / Multi-Exposure Image Fusion](#hdr-imaging--multi-exposure-image-fusion)
- [Frame Interpolation](#frame-interpolation)
- [Image Enhancement](#image-enhancement)
  - [Low-Light Image Enhancement](#low-light-image-enhancement)
- [Image Harmonization/Composition](#image-harmonizationcomposition)
- [Image Completion/Inpainting](#image-completioninpainting)
- [Image Matting](#image-matting)
- [Image Compression](#image-compression)
- [Image Quality Assessment](#image-quality-assessment)
- [Style Transfer](#style-transfer)
- [Image Editing](#image-editing)
- [Image Generation/Synthesis / Image-to-Image Translation](#image-generationsynthesis--image-to-image-translation)
- [Video Generation](#video-generation)
- [Others](#others)

<a name="ImageRestoration"></a>
# Image Restoration

## Image Reconstruction

**HIR-Diff: Unsupervised Hyperspectral Image Restoration Via Improved Diffusion Models**
- Paper: https://arxiv.org/abs/2402.15865
- Code: https://github.com/LiPang/HIRDiff

<a name="BurstRestoration"></a>
## Burst Restoration

<a name="VideoRestoration"></a>
## Video Restoration

**Genuine Knowledge from Practice: Diffusion Test-Time Adaptation for Video Adverse Weather Removal**
- Paper:
- Code: https://github.com/scott-yjyang/DiffTTA

[[Back-to-Overview](#overview)]

<a name="SuperResolution"></a>
# Super Resolution
<a name="ImageSuperResolution"></a>
## Image Super Resolution

**CAMixerSR: Only Details Need More “Attention”**
- Paper: https://arxiv.org/abs/2402.19289
- Code: https://github.com/icandle/CAMixerSR

**SinSR: Diffusion-Based Image Super-Resolution in a Single Step**
- Paper: https://github.com/wyf0912/SinSR/blob/main/main.pdf
- Code: https://github.com/wyf0912/SinSR

<a name="VideoSuperResolution"></a>
## Video Super Resolution

**FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring**
- Paper: https://arxiv.org/abs/2401.03707
- Code: https://github.com/KAIST-VICLab/FMA-Net

**Enhancing Video Super-Resolution via Implicit Resampling-based Alignment**
- Paper: https://arxiv.org/abs/2305.00163
- Code: https://github.com/kai422/IART

[[Back-to-Overview](#overview)]

<a name="Rescaling"></a>
# Image Rescaling

[[Back-to-Overview](#overview)]

<a name="Denoising"></a>
# Denoising
<a name="ImageDenoising"></a>
## Image Denoising

## Video Denoising

[[Back-to-Overview](#overview)]

<a name="Deblurring"></a>
# Deblurring
<a name="ImageDeblurring"></a>
## Image Deblurring

<a name="VideoDeblurring"></a>
## Video Deblurring

**FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring**
- Paper: https://arxiv.org/abs/2401.03707
- Code: https://github.com/KAIST-VICLab/FMA-Net

**Blur-aware Spatio-temporal Sparse Transformer for Video Deblurring**
- Paper:
- Code: https://github.com/huicongzhang/BSSTNet

[[Back-to-Overview](#overview)]

<a name="Deraining"></a>
# Deraining

[[Back-to-Overview](#overview)]

<a name="Dehazing"></a>
# Dehazing

[[Back-to-Overview](#overview)]

<a name="HDR"></a>
# HDR Imaging / Multi-Exposure Image Fusion

[[Back-to-Overview](#overview)]

<a name="FrameInterpolation"></a>
# Frame Interpolation

[[Back-to-Overview](#overview)]

<a name="Enhancement"></a>
# Image Enhancement

[[Back-to-Overview](#overview)]

<a name="Harmonization"></a>
# Image Harmonization/Composition

[[Back-to-Overview](#overview)]

<a name="Inpainting"></a>
# Image Completion/Inpainting

[[Back-to-Overview](#overview)]

<a name="Matting"></a>
# Image Matting

[[Back-to-Overview](#overview)]

<a name="ImageCompression"></a>
# Image Compression

## Video Compression

[[Back-to-Overview](#overview)]

<a name="ImageQualityAssessment"></a>
# Image Quality Assessment

[[Back-to-Overview](#overview)]

<a name="StyleTransfer"></a>
# Style Transfer

[[Back-to-Overview](#overview)]

<a name="ImageEditing"></a>
# Image Editing

**PAIR-Diffusion: A Comprehensive Multimodal Object-Level Image Editor**
- Paper: https://arxiv.org/abs/2303.17546
- Code: https://github.com/Picsart-AI-Research/PAIR-Diffusion

**Inversion-Free Image Editing with Natural Language**
- Paper:
- Code: https://github.com/sled-group/InfEdit

**Focus on Your Instruction: Fine-grained and Multi-instruction Image Editing by Attention Modulation**
- Paper: https://arxiv.org/abs/2312.10113
- Code: https://github.com/guoqincode/Focus-on-Your-Instruction

**Edit One for All: Interactive Batch Image Editing**
- Paper: https://arxiv.org/abs/2401.10219
- Code: https://github.com/thaoshibe/edit-one-for-all

**MACE: Mass Concept Erasure in Diffusion Models**
- Paper:
- Code: https://github.com/Shilin-LU/MACE

## Video Editing

**VidToMe: Video Token Merging for Zero-Shot Video Editing**
- Paper: https://arxiv.org/abs/2312.10656
- Code: https://github.com/VISION-SJTU/VidToMe

[[Back-to-Overview](#overview)]

<a name="ImageGeneration"></a>
# Image Generation/Synthesis / Image-to-Image Translation

## Text-to-Image / Text Guided / Multi-Modal

**PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models**
- Paper: https://arxiv.org/abs/2312.13964
- Code: https://github.com/open-mmlab/PIA

**SVGDreamer: Text Guided SVG Generation with Diffusion Model**
- Paper: https://arxiv.org/abs/2312.16476
- Code: https://github.com/ximinng/SVGDreamer

**ECLIPSE: Revisiting the Text-to-Image Prior for Efficient Image Generation**
- Paper: https://arxiv.org/abs/2312.04655
- Code: https://github.com/eclipse-t2i/eclipse-inference

**Intelligent Grimm -- Open-ended Visual Storytelling via Latent Diffusion Models**
- Paper: https://arxiv.org/abs/2306.00973
- Code: https://github.com/haoningwu3639/StoryGen

**DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization**
- Paper: https://arxiv.org/abs/2402.09812
- Code: https://github.com/KU-CVLAB/DreamMatcher

**InstanceDiffusion: Instance-level Control for Image Generation**
- Paper: https://arxiv.org/abs/2402.03290
- Code: https://github.com/frank-xwang/InstanceDiffusion

**InteractDiffusion: Interaction-Control for Text-to-Image Diffusion Model**
- Paper: https://arxiv.org/abs/2312.05849
- Code: https://github.com/jiuntian/interactdiffusion

## Image-to-Image / Image Guided

**Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis**
- Paper: https://arxiv.org/abs/2402.18078
- Code: https://github.com/YanzuoLu/CFLD

## Others for image generation

**Residual Denoising Diffusion Models**
- Paper: https://arxiv.org/abs/2308.13712
- Code: https://github.com/nachifur/RDDM

**DemoFusion: Democratising High-Resolution Image Generation With No $$$**
- Paper: https://arxiv.org/abs/2311.16973
- Code: https://github.com/PRIS-CV/DemoFusion

**ElasticDiffusion: Training-free Arbitrary Size Image Generation**
- Paper: https://arxiv.org/abs/2311.18822
- Code: https://github.com/MoayedHajiAli/ElasticDiffusion-official

**DeepCache: Accelerating Diffusion Models for Free**
- Paper: https://arxiv.org/abs/2312.00858
- Code: https://github.com/horseee/DeepCache

<a name="VideoGeneration"></a>
## Video Generation

**MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model**
- Paper: https://arxiv.org/abs/2311.16498
- Code: https://github.com/magic-research/magic-animate

**VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models**
- Paper: https://arxiv.org/abs/2312.00845
- Code: https://github.com/HyeonHo99/Video-Motion-Customization

**EvalCrafter: Benchmarking and Evaluating Large Video Generation Models**
- Paper: https://arxiv.org/abs/2310.11440
- Code: https://github.com/evalcrafter/EvalCrafter

## Talking Head Generation

**SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis**
- Paper: https://arxiv.org/abs/2311.17590
- Code: https://github.com/ZiqiaoPeng/SyncTalk

[[Back-to-Overview](#overview)]

<a name="Others"></a>
# Others

**Q-Instruct: Improving Low-level Visual Abilities for Multi-modality Foundation Models**
- Paper: https://arxiv.org/abs/2311.06783
- Code: https://github.com/Q-Future/Q-Instruct
-->