CVPR 2024 Papers and Open-Source Projects (Papers with Code)
CVPR 2024 decisions are now available on OpenReview!
Note 1: Everyone is welcome to open an issue and share CVPR 2024 papers and open-source projects!
Note 2: For papers from previous top CV conferences and other curated roundups of high-quality CV papers, see: https://github.com/amusi/daily-paper-computer-vision
Scan the QR code to join the CVer academic exchange group, the largest computer vision AI Knowledge Planet community! Updated daily with the latest learning materials on computer vision, AI art, image processing, deep learning, autonomous driving, medical imaging, AIGC, and more.
【CVPR 2024 Paper and Open-Source Project Index】
- 3DGS(Gaussian Splatting)
- Avatars
- Backbone
- CLIP
- MAE
- Embodied AI
- GAN
- GNN
- Multimodal Large Language Models (MLLM)
- Large Language Models (LLM)
- NAS
- OCR
- NeRF
- DETR
- Prompt
- Diffusion Models
- ReID (Re-Identification)
- Long-Tail Distribution
- Vision Transformer
- Vision-Language
- Self-supervised Learning
- Data Augmentation
- Object Detection
- Anomaly Detection
- Visual Tracking
- Semantic Segmentation
- Instance Segmentation
- Panoptic Segmentation
- Medical Image
- Medical Image Segmentation
- Video Object Segmentation
- Video Instance Segmentation
- Referring Image Segmentation
- Image Matting
- Image Editing
- Video Editing
- Low-level Vision
- Super-Resolution
- Denoising
- Deblur
- Autonomous Driving
- 3D Point Cloud
- 3D Object Detection
- 3D Semantic Segmentation
- 3D Object Tracking
- 3D Semantic Scene Completion
- 3D Registration
- 3D Human Pose Estimation
- 3D Human Mesh Estimation
- Image Generation
- Video Generation
- 3D Generation
- Video Understanding
- Action Detection
- Text Detection
- Knowledge Distillation
- Model Pruning
- Image Compression
- 3D Reconstruction
- Depth Estimation
- Trajectory Prediction
- Lane Detection
- Image Captioning
- Visual Question Answering
- Sign Language Recognition
- Video Prediction
- Novel View Synthesis
- Zero-Shot Learning
- Stereo Matching
- Feature Matching
- Scene Graph Generation
- Implicit Neural Representations
- Image Quality Assessment
- Video Quality Assessment
- Datasets
- New Tasks
- Others
<a name="3DGS"></a>
3DGS(Gaussian Splatting)
Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering
- Homepage: https://city-super.github.io/scaffold-gs/
- Paper: https://arxiv.org/abs/2312.00109
- Code: https://github.com/city-super/Scaffold-GS
GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis
- Homepage: https://shunyuanzheng.github.io/GPS-Gaussian
- Paper: https://arxiv.org/abs/2312.02155
- Code: https://github.com/ShunyuanZheng/GPS-Gaussian
GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians
GaussianEditor: Swift and Controllable 3D Editing with Gaussian Splatting
Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction
- Homepage: https://ingra14m.github.io/Deformable-Gaussians/
- Paper: https://arxiv.org/abs/2309.13101
- Code: https://github.com/ingra14m/Deformable-3D-Gaussians
SC-GS: Sparse-Controlled Gaussian Splatting for Editable Dynamic Scenes
- Homepage: https://yihua7.github.io/SC-GS-web/
- Paper: https://arxiv.org/abs/2312.14937
- Code: https://github.com/yihua7/SC-GS
Spacetime Gaussian Feature Splatting for Real-Time Dynamic View Synthesis
- Homepage: https://oppo-us-research.github.io/SpacetimeGaussians-website/
- Paper: https://arxiv.org/abs/2312.16812
- Code: https://github.com/oppo-us-research/SpacetimeGaussians
DNGaussian: Optimizing Sparse-View 3D Gaussian Radiance Fields with Global-Local Depth Normalization
- Homepage: https://fictionarry.github.io/DNGaussian/
- Paper: https://arxiv.org/abs/2403.06912
- Code: https://github.com/Fictionarry/DNGaussian
4D Gaussian Splatting for Real-Time Dynamic Scene Rendering
GaussianDreamer: Fast Generation from Text to 3D Gaussians by Bridging 2D and 3D Diffusion Models
<a name="Avatars"></a>
Avatars
GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians
Real-Time Simulated Avatar from Head-Mounted Sensors
- Homepage: https://www.zhengyiluo.com/SimXR/
- Paper: https://arxiv.org/abs/2403.06862
<a name="Backbone"></a>
Backbone
RepViT: Revisiting Mobile CNN From ViT Perspective
TransNeXt: Robust Foveal Visual Perception for Vision Transformers
<a name="CLIP"></a>
CLIP
Alpha-CLIP: A CLIP Model Focusing on Wherever You Want
FairCLIP: Harnessing Fairness in Vision-Language Learning
- Paper: https://arxiv.org/abs/2403.19949
- Code: https://github.com/Harvard-Ophthalmology-AI-Lab/FairCLIP
<a name="MAE"></a>
MAE
<a name="Embodied-AI"></a>
Embodied AI
EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI
- Homepage: https://tai-wang.github.io/embodiedscan/
- Paper: https://arxiv.org/abs/2312.16170
- Code: https://github.com/OpenRobotLab/EmbodiedScan
MP5: A Multi-modal Open-ended Embodied System in Minecraft via Active Perception
- Homepage: https://iranqin.github.io/MP5.github.io/
- Paper: https://arxiv.org/abs/2312.07472
- Code: https://github.com/IranQin/MP5
LEMON: Learning 3D Human-Object Interaction Relation from 2D Images
<a name="GAN"></a>
GAN
<a name="OCR"></a>
OCR
An Empirical Study of Scaling Law for OCR
- Paper: https://arxiv.org/abs/2401.00028
- Code: https://github.com/large-ocr-model/large-ocr-model.github.io
ODM: A Text-Image Further Alignment Pre-training Approach for Scene Text Detection and Spotting
<a name="NeRF"></a>
NeRF
PIE-NeRF🍕: Physics-based Interactive Elastodynamics with NeRF
<a name="DETR"></a>
DETR
DETRs Beat YOLOs on Real-time Object Detection
Salience DETR: Enhancing Detection Transformer with Hierarchical Salience Filtering Refinement
<a name="Prompt"></a>
Prompt
<a name="MLLM"></a>
Multimodal Large Language Models (MLLM)
mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration
- Paper: https://arxiv.org/abs/2311.04257
- Code: https://github.com/X-PLUG/mPLUG-Owl/tree/main/mPLUG-Owl2
Link-Context Learning for Multimodal LLMs
- Paper: https://arxiv.org/abs/2308.07891
- Code: https://github.com/isekai-portal/Link-Context-Learning/tree/main
OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation
Making Large Multimodal Models Understand Arbitrary Visual Prompts
- Homepage: https://vip-llava.github.io/
- Paper: https://arxiv.org/abs/2312.00784
Pink: Unveiling the power of referential comprehension for multi-modal llms
Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding
OneLLM: One Framework to Align All Modalities with Language
<a name="LLM"></a>
Large Language Models (LLM)
VTimeLLM: Empower LLM to Grasp Video Moments
<a name="NAS"></a>
NAS
<a name="ReID"></a>
ReID (Re-Identification)
Magic Tokens: Select Diverse Tokens for Multi-modal Object Re-Identification
Noisy-Correspondence Learning for Text-to-Image Person Re-identification
<a name="Diffusion"></a>
Diffusion Models
InstanceDiffusion: Instance-level Control for Image Generation
Residual Denoising Diffusion Models
DeepCache: Accelerating Diffusion Models for Free
DEADiff: An Efficient Stylization Diffusion Model with Disentangled Representations
- Homepage: https://tianhao-qi.github.io/DEADiff/
SVGDreamer: Text Guided SVG Generation with Diffusion Model
InteractDiffusion: Interaction-Control for Text-to-Image Diffusion Model
MMA-Diffusion: MultiModal Attack on Diffusion Models
VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models
- Homepage: https://video-motion-customization.github.io/
- Paper: https://arxiv.org/abs/2312.00845
- Code: https://github.com/HyeonHo99/Video-Motion-Customization
<a name="Vision-Transformer"></a>
Vision Transformer
TransNeXt: Robust Foveal Visual Perception for Vision Transformers
RepViT: Revisiting Mobile CNN From ViT Perspective
A General and Efficient Training for Transformer via Token Expansion
<a name="VL"></a>
Vision-Language
PromptKD: Unsupervised Prompt Distillation for Vision-Language Models
FairCLIP: Harnessing Fairness in Vision-Language Learning
- Paper: https://arxiv.org/abs/2403.19949
- Code: https://github.com/Harvard-Ophthalmology-AI-Lab/FairCLIP
<a name="Object-Detection"></a>
Object Detection
DETRs Beat YOLOs on Real-time Object Detection
Boosting Object Detection with Zero-Shot Day-Night Domain Adaptation
- Paper: https://arxiv.org/abs/2312.01220
- Code: https://github.com/ZPDu/Boosting-Object-Detection-with-Zero-Shot-Day-Night-Domain-Adaptation
YOLO-World: Real-Time Open-Vocabulary Object Detection
Salience DETR: Enhancing Detection Transformer with Hierarchical Salience Filtering Refinement
<a name="Anomaly-Detection"></a>
Anomaly Detection
Anomaly Heterogeneity Learning for Open-set Supervised Anomaly Detection
<a name="VT"></a>
Object Tracking
Delving into the Trajectory Long-tail Distribution for Muti-object Tracking
- Paper: https://arxiv.org/abs/2403.04700
- Code: https://github.com/chen-si-jia/Trajectory-Long-tail-Distribution-for-MOT
<a name="Semantic-Segmentation"></a>
Semantic Segmentation
Stronger, Fewer, & Superior: Harnessing Vision Foundation Models for Domain Generalized Semantic Segmentation
SED: A Simple Encoder-Decoder for Open-Vocabulary Semantic Segmentation
<a name="MI"></a>
Medical Image
Feature Re-Embedding: Towards Foundation Model-Level Performance in Computational Pathology
VoCo: A Simple-yet-Effective Volume Contrastive Learning Framework for 3D Medical Image Analysis
ChAda-ViT : Channel Adaptive Attention for Joint Representation Learning of Heterogeneous Microscopy Images
<a name="MIS"></a>
Medical Image Segmentation
<a name="Autonomous-Driving"></a>
Autonomous Driving
UniPAD: A Universal Pre-training Paradigm for Autonomous Driving
Cam4DOcc: Benchmark for Camera-Only 4D Occupancy Forecasting in Autonomous Driving Applications
Memory-based Adapters for Online 3D Scene Perception
Symphonize 3D Semantic Scene Completion with Contextual Instance Queries
A Real-world Large-scale Dataset for Roadside Cooperative Perception
Adaptive Fusion of Single-View and Multi-View Depth for Autonomous Driving
Traffic Scene Parsing through the TSP6K Dataset
<a name="3D-Point-Cloud"></a>
3D Point Cloud
<a name="3DOD"></a>
3D Object Detection
PTT: Point-Trajectory Transformer for Efficient Temporal 3D Object Detection
UniMODE: Unified Monocular 3D Object Detection
<a name="3DOD"></a>
3D语义分割(3D Semantic Segmentation)
<a name="Image-Editing"></a>
Image Editing
Edit One for All: Interactive Batch Image Editing
- Homepage: https://thaoshibe.github.io/edit-one-for-all
- Paper: https://arxiv.org/abs/2401.10219
- Code: https://github.com/thaoshibe/edit-one-for-all
<a name="Video-Editing"></a>
Video Editing
MaskINT: Video Editing via Interpolative Non-autoregressive Masked Transformers
- Homepage: https://maskint.github.io
<a name="LLV"></a>
Low-level Vision
Residual Denoising Diffusion Models
Boosting Image Restoration via Priors from Pre-trained Models
<a name="SR"></a>
Super-Resolution
SeD: Semantic-Aware Discriminator for Image Super-Resolution
APISR: Anime Production Inspired Real-World Anime Super-Resolution
<a name="Denoising"></a>
Denoising
Image Denoising
<a name="3D-Human-Pose-Estimation"></a>
3D Human Pose Estimation
Hourglass Tokenizer for Efficient Transformer-Based 3D Human Pose Estimation
<a name="Image-Generation"></a>
Image Generation
InstanceDiffusion: Instance-level Control for Image Generation
ECLIPSE: A Resource-Efficient Text-to-Image Prior for Image Generations
- Homepage: https://eclipse-t2i.vercel.app/
Instruct-Imagen: Image Generation with Multi-modal Instruction
Residual Denoising Diffusion Models
UniGS: Unified Representation for Image Generation and Segmentation
Multi-Instance Generation Controller for Text-to-Image Synthesis
SVGDreamer: Text Guided SVG Generation with Diffusion Model
InteractDiffusion: Interaction-Control for Text-to-Image Diffusion Model
Ranni: Taming Text-to-Image Diffusion for Accurate Prompt Following
<a name="Video-Generation"></a>
Video Generation
Vlogger: Make Your Dream A Vlog
VBench: Comprehensive Benchmark Suite for Video Generative Models
- Homepage: https://vchitect.github.io/VBench-project/
- Paper: https://arxiv.org/abs/2311.17982
- Code: https://github.com/Vchitect/VBench
VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models
- Homepage: https://video-motion-customization.github.io/
- Paper: https://arxiv.org/abs/2312.00845
- Code: https://github.com/HyeonHo99/Video-Motion-Customization
<a name="3D-Generation"></a>
3D Generation
CityDreamer: Compositional Generative Model of Unbounded 3D Cities
- Homepage: https://haozhexie.com/project/city-dreamer/
- Paper: https://arxiv.org/abs/2309.00610
- Code: https://github.com/hzxie/city-dreamer
LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching
<a name="Video-Understanding"></a>
Video Understanding
MVBench: A Comprehensive Multi-modal Video Understanding Benchmark
- Paper: https://arxiv.org/abs/2311.17005
- Code: https://github.com/OpenGVLab/Ask-Anything/tree/main/video_chat2
<a name="KD"></a>
Knowledge Distillation
Logit Standardization in Knowledge Distillation
- Paper: https://arxiv.org/abs/2403.01427
- Code: https://github.com/sunshangquan/logit-standardization-KD
Efficient Dataset Distillation via Minimax Diffusion
<a name="Stereo-Matching"></a>
Stereo Matching
Neural Markov Random Field for Stereo Matching
<a name="SGG"></a>
Scene Graph Generation
HiKER-SGG: Hierarchical Knowledge Enhanced Robust Scene Graph Generation
- Homepage: https://zhangce01.github.io/HiKER-SGG/
- Paper: https://arxiv.org/abs/2403.12033
- Code: https://github.com/zhangce01/HiKER-SGG
<a name="Video-Quality-Assessment"></a>
Video Quality Assessment
KVQ: Kaleidoscope Video Quality Assessment for Short-form Videos
<a name="Datasets"></a>
Datasets
A Real-world Large-scale Dataset for Roadside Cooperative Perception
Traffic Scene Parsing through the TSP6K Dataset
<a name="Others"></a>
Others
Object Recognition as Next Token Prediction
ParameterNet: Parameters Are All You Need for Large-scale Visual Pretraining of Mobile Networks
Seamless Human Motion Composition with Blended Positional Encodings
LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning
- Homepage: https://ll3da.github.io/
CLOVA: A Closed-LOop Visual Assistant with Tool Usage and Update
- Homepage: https://clova-tool.github.io/
- Paper: https://arxiv.org/abs/2312.10908
MoMask: Generative Masked Modeling of 3D Human Motions
Amodal Ground Truth and Completion in the Wild
- Homepage: https://www.robots.ox.ac.uk/~vgg/research/amodal/
- Paper: https://arxiv.org/abs/2312.17247
- Code: https://github.com/Championchess/Amodal-Completion-in-the-Wild
Improved Visual Grounding through Self-Consistent Explanations
ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object
- Homepage: https://chenshuang-zhang.github.io/imagenet_d/
- Paper: https://arxiv.org/abs/2403.18775
- Code: https://github.com/chenshuang-zhang/imagenet_d
Learning from Synthetic Human Group Activities
- Homepage: https://cjerry1243.github.io/M3Act/
- Paper: https://arxiv.org/abs/2306.16772
- Code: https://github.com/cjerry1243/M3Act
MindBridge: A Cross-Subject Brain Decoding Framework
- Homepage: https://littlepure2333.github.io/MindBridge/
- Paper: https://arxiv.org/abs/2404.07850
- Code: https://github.com/littlepure2333/MindBridge
Multi-Task Dense Prediction via Mixture of Low-Rank Experts
Contrastive Mean-Shift Learning for Generalized Category Discovery
- Homepage: https://postech-cvlab.github.io/cms/
- Paper: https://arxiv.org/abs/2404.09451
- Code: https://github.com/sua-choi/CMS