Awesome-AAAI2023-Low-Level-Vision

A Collection of Papers and Codes in AAAI2023 related to Low-Level Vision

[Completed] If you find any missing papers or typos, feel free to open an issue or pull request.

Related collections for low-level vision

Overview

- [Image Restoration](#ImageRestoration)
- [Super Resolution](#SuperResolution)
- [Image Rescaling](#Rescaling)
- [Denoising](#Denoising)
- [Deblurring](#Deblurring)
- [Deraining](#Deraining)
- [HDR Imaging / Multi-Exposure Image Fusion](#HDR)
- [Frame Interpolation](#FrameInterpolation)
- [Image Enhancement](#Enhancement)
- [Image Harmonization/Composition](#Harmonization)
- [Image Completion/Inpainting](#Inpainting)
- [Image Matting](#Matting)
- [Image Compression](#ImageCompression)
- [Image Quality Assessment](#ImageQualityAssessment)
- [Style Transfer](#StyleTransfer)
- [Image Editing](#ImageEditing)
- [Image Generation/Synthesis / Image-to-Image Translation](#ImageGeneration)

<a name="ImageRetoration"></a>

Image Restoration

Memory-Oriented Structural Pruning for Efficient Image Restoration

ShadowFormer: Global Context Helps Shadow Removal

DENet: Disentangled Embedding Network for Visible Watermark Removal

Video Compression Artifact Reduction by Fusing Motion Compensation and Global Context in a Swin-CNN Based Parallel Architecture

Image Reconstruction

[Back-to-Overview]

<a name="SuperResolution"></a>

Super Resolution

Deep Parametric 3D Filters for Joint Video Denoising and Illumination Enhancement in Video Super Resolution

[Back-to-Overview]

<a name="Rescaling"></a>

Image Rescaling

Self-Asymmetric Invertible Network for Compression-Aware Image Rescaling

[Back-to-Overview]

<a name="Denoising"></a>

Denoising

Image Denoising

Adaptive Dynamic Filtering Network for Image Denoising

Self-Supervised Image Denoising Using Implicit Deep Denoiser Prior

Robust Image Denoising of No-Flash Images Guided by Consistent Flash Images

Spatial-Spectral Transformer for Hyperspectral Image Denoising

Video Denoising

Unsupervised Deep Video Denoising with Untrained Network

[Back-to-Overview]

<a name="Deblurring"></a>

Deblurring

<a name="ImageDeblurring"></a>

Image Deblurring

Dual-Domain Attention for Image Deblurring

Real-World Deep Local Motion Deblurring

Learning Single Image Defocus Deblurring with Misaligned Training Pairs

Intriguing Findings of Frequency Selection for Image Deblurring

Learnable Blur Kernel for Single-Image Defocus Deblurring in the Wild

[Back-to-Overview]

<a name="Deraining"></a>

Deraining

Hybrid CNN-Transformer Feature Fusion for Single Image Deraining

[Back-to-Overview]

<a name="HDR"></a>

HDR Imaging / Multi-Exposure Image Fusion

Improving Dynamic HDR Imaging with Fusion Transformer

Unsupervised Multi-Exposure Image Fusion Breaking Exposure Limits via Contrastive Learning

[Back-to-Overview]

<a name="FrameInterpolation"></a>

Frame Interpolation

SVFI: Spiking-Based Video Frame Interpolation for High-Speed Motion

[Back-to-Overview]

<a name="Enhancement"></a>

Image Enhancement

Polarization-Aware Low-Light Image Enhancement

Learning Semantic Degradation-Aware Guidance for Recognition-Driven Unsupervised Low-Light Image Enhancement

Low-Light Image Enhancement Network Based on Multi-Scale Feature Complementation

Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and Transformer-Based Method

Low-Light Video Enhancement with Synthetic Event Guidance

[Back-to-Overview]

<a name="Harmonization"></a>

Image Harmonization/Composition

Painterly Image Harmonization in Dual Domains

[Back-to-Overview]

<a name="Inpainting"></a>

Image Completion/Inpainting

CoordFill: Efficient High-Resolution Image Inpainting via Parameterized Coordinate Querying

Generative Image Inpainting with Segmentation Confusion Adversarial Training and Contrastive Learning

DINet: Deformation Inpainting Network for Realistic Face Visually Dubbing on High Resolution Video

[Back-to-Overview]

<a name="Matting"></a>

Image Matting

Infusing Definiteness into Randomness: Rethinking Composition Styles for Deep Image Matting

[Back-to-Overview]

<a name="ImageCompression"></a>

Image Compression

Learned Distributed Image Compression with Multi-Scale Patch Matching in Feature Domain

Multi-Modality Deep Network for Extreme Learned Image Compression

[Back-to-Overview]

<a name="ImageQualityAssessment"></a>

Image Quality Assessment

Data-Efficient Image Quality Assessment with Attention-Panel Decoder

[Back-to-Overview]

<a name="StyleTransfer"></a>

Style Transfer

User-Controllable Arbitrary Style Transfer via Entropy Regularization

Frequency Domain Disentanglement for Arbitrary Neural Style Transfer

AdaCM: Adaptive ColorMLP for Real-Time Universal Photo-Realistic Style Transfer

Preserving Structural Consistency in Arbitrary Artist and Artwork Style Transfer

MicroAST: Towards Super-Fast Ultra-Resolution Arbitrary Style Transfer

[Back-to-Overview]

<a name="ImageEditing"></a>

Image Editing

DE-net: Dynamic Text-Guided Image Editing Adversarial Networks

FEditNet: Few-Shot Editing of Latent Semantics in GAN Spaces

ReGANIE: Rectifying GAN Inversion Errors for Accurate Real Image Editing

Target-Free Text-Guided Image Manipulation

CLIPVG: Text-Guided Image Manipulation Using Differentiable Vector Graphics

[Back-to-Overview]

<a name="ImageGeneration"></a>

Image Generation/Synthesis / Image-to-Image Translation

Text-to-Image / Text Guided / Multi-Modal

Reject Decoding via Language-Vision Models for Text-to-Image Synthesis

Scene Graph to Image Synthesis via Knowledge Consensus

Frido: Feature Pyramid Diffusion for Complex Scene Image Synthesis

Image-to-Image / Image Guided

MAGIC: Mask-Guided Image Synthesis by Inverting a Quasi-Robust Classifier

CFFT-GAN: Cross-Domain Feature Fusion Transformer for Exemplar-Based Image Translation

MIDMs: Matching Interleaved Diffusion Models for Exemplar-based Image Translation

Others for Image Generation

Few-Shot Defect Image Generation via Defect-Aware Feature Manipulation

High-Resolution GAN Inversion for Degraded Images in Large Diverse Datasets

ShiftDDPMs: Exploring Conditional Diffusion Models by Shifting Diffusion Trajectories

Video Generation

StyleTalk: One-shot Talking Head Generation with Controllable Speaking Styles

VIDM: Video Implicit Diffusion Models

[Back-to-Overview]