<p align="center">
<img src="https://capsule-render.vercel.app/api?type=waving&height=115&color=2C2A2E&text=CVPR-2024-Papers§ion=header&reversal=false&textBg=false&fontAlign=50&fontSize=36&fontColor=FFFFFF&animation=scaleIn&fontAlignY=18" alt="CVPR-2024-Papers">
</p>
<table align="center">
<tr>
<td><strong>General Information</strong></td>
<td>
<a href="https://github.com/sindresorhus/awesome">
<img src="https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg" alt="Awesome">
</a>
<a href="https://cvpr.thecvf.com/Conferences/2024">
<img src="http://img.shields.io/badge/CVPR-2024-7395C5.svg" alt="Conference">
</a>
<img src="https://img.shields.io/badge/version-v1.0.0-rc0" alt="Version">
<a href ="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/LICENSE">
<img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License: MIT">
</a>
</td>
</tr>
<tr>
<td><strong>Repository Size and Activity</strong></td>
<td>
<img src="https://img.shields.io/github/repo-size/DmitryRyumin/CVPR-2023-24-Papers" alt="GitHub repo size">
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/commits/main/">
<img src="https://img.shields.io/github/commit-activity/t/dmitryryumin/CVPR-2023-24-Papers" alt="GitHub commit activity (branch)">
</a>
</td>
</tr>
<tr>
<td><strong>Contribution Statistics</strong></td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/graphs/contributors">
<img src="https://img.shields.io/github/contributors/dmitryryumin/CVPR-2023-24-Papers" alt="GitHub contributors">
</a>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/issues?q=is%3Aissue+is%3Aclosed">
<img src="https://img.shields.io/github/issues-closed/DmitryRyumin/CVPR-2023-24-Papers" alt="GitHub closed issues">
</a>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/issues">
<img src="https://img.shields.io/github/issues/DmitryRyumin/CVPR-2023-24-Papers" alt="GitHub issues">
</a>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/pulls?q=is%3Apr+is%3Aclosed">
<img src="https://img.shields.io/github/issues-pr-closed/DmitryRyumin/CVPR-2023-24-Papers" alt="GitHub closed pull requests">
</a>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/pulls">
<img src="https://img.shields.io/github/issues-pr/dmitryryumin/CVPR-2023-24-Papers" alt="GitHub pull requests">
</a>
</td>
</tr>
<tr>
<td><strong>Other Metrics</strong></td>
<td>
<img src="https://img.shields.io/github/last-commit/DmitryRyumin/CVPR-2023-24-Papers" alt="GitHub last commit">
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/watchers">
<img src="https://img.shields.io/github/watchers/dmitryryumin/CVPR-2023-24-Papers?style=flat" alt="GitHub watchers">
</a>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/forks">
<img src="https://img.shields.io/github/forks/dmitryryumin/CVPR-2023-24-Papers?style=flat" alt="GitHub forks">
</a>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/stargazers">
<img src="https://img.shields.io/github/stars/dmitryryumin/CVPR-2023-24-Papers?style=flat" alt="GitHub Repo stars">
</a>
<img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fgithub.com%2FDmitryRyumin%2FCVPR-2023-Papers&label=Visitors&countColor=%23263759&style=flat" alt="Visitors">
</td>
</tr>
<tr>
<td><strong>GitHub Actions</strong></td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/actions/workflows/copy_parse_markdown.yml/badge.svg">
<img src="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/actions/workflows/copy_parse_markdown.yml/badge.svg" alt="Copy Parse Markdown and Generate JSON from Source Repo">
</a>
<br />
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/actions/workflows/parse_markdown.yml/badge.svg?branch=main">
<img src="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/actions/workflows/parse_markdown.yml/badge.svg?branch=main" alt="Parse Markdown and Generate JSON">
</a>
<br />
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/actions/workflows/sync_papers_with_hf.yml">
<img src="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/actions/workflows/sync_papers_with_hf.yml/badge.svg" alt="Sync Hugging Face App">
</a>
</td>
</tr>
<tr>
<td><strong>Application</strong></td>
<td>
<a href="https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers" style="float:left;">
<img src="https://img.shields.io/badge/🤗-NewEraAI--Papers-FFD21F.svg" alt="App" />
</a>
</td>
</tr>
<tr>
<td colspan="2" align="center"><strong>Progress Status</strong></td>
</tr>
<tr>
<td><strong>Main</strong></td>
<td>
<!-- 160/2719 -->
<div style="float:left;">
<img src="https://geps.dev/progress/6?successColor=006600" alt="" />
<img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/completed_checkmark_done.svg" width="25" alt="" />
</div>
</td>
</tr>
</table>
CVPR 2024 Papers: Explore a comprehensive collection of cutting-edge research papers presented at CVPR 2024, the premier computer vision conference. Stay up to date with the latest advances in computer vision and deep learning. Code implementations are included. :star: the repository to support the development of visual intelligence!
<p align="center">
<a href="https://cvpr.thecvf.com/Conferences/2024" target="_blank">
<img width="600" src="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/images/CVPR2024-banner.svg" alt="CVPR 2024">
</a>
</p>
> [!TIP]
> Explore the CVPR 2024 online conference list, a comprehensive collection of accepted papers.

> [!TIP]
> The PDF version of the CVPR 2024 Conference Programme comprises a list of all accepted papers, their presentation order, and the designated presentation times.
<a href="https://github.com/DmitryRyumin/NewEraAI-Papers" style="float:left;">
<img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/arrow_click_cursor_pointer.png" width="25" alt="" />
Other collections of the best AI conferences
</a>
<br />
<br />
<a href="https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers" style="float:left;">
<img src="https://img.shields.io/badge/🤗-NewEraAI--Papers-FFD21F.svg" alt="App" />
</a>
<br />
<br />
> [!IMPORTANT]
> The conference table is kept up to date at all times.
<table>
<tr>
<td rowspan="2" align="center"><strong>Conference</strong></td>
<td colspan="2" align="center"><strong>Year</strong></td>
</tr>
<tr>
<td colspan="1" align="center"><i>2023</i></td>
<td colspan="1" align="center"><i>2024</i></td>
</tr>
<tr>
<td colspan="3" align="center"><i>Computer Vision (CV)</i></td>
</tr>
<tr>
<td>CVPR</td>
<td colspan="2" align="center"><a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/CVPR-2023-24-Papers?style=flat" alt="" /></a></td>
</tr>
<tr>
<td>ICCV</td>
<td align="center"><a href="https://github.com/DmitryRyumin/ICCV-2023-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/ICCV-2023-Papers?style=flat" alt="" /> <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/done.svg" width="20" alt="" /></a></td>
<td align="center"><img src="https://img.shields.io/badge/Not%20Scheduled-CC5540" alt=""/></td>
</tr>
<tr>
<td>ECCV</td>
<td align="center"><img src="https://img.shields.io/badge/Not%20Scheduled-CC5540" alt=""/></td>
<td align="center"><img src="https://img.shields.io/badge/October-white?logo=github&labelColor=b31b1b" alt="" /></td>
</tr>
<tr>
<td>WACV</td>
<td align="center">:heavy_minus_sign:</td>
<td align="center"><a href="https://github.com/DmitryRyumin/WACV-2024-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/WACV-2024-Papers?style=flat" alt="" /> <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/done.svg" width="20" alt="" /></a></td>
</tr>
<tr>
<td>FG</td>
<td align="center">:heavy_minus_sign:</td>
<td align="center"><a href="https://github.com/DmitryRyumin/FG-2024-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/FG-2024-Papers?style=flat" alt="" /></a></td>
</tr>
<tr>
<td colspan="3" align="center"><i>Speech/Signal Processing (SP/SigProc)</i></td>
</tr>
<tr>
<td>ICASSP</td>
<td colspan="2" align="center"><a href="https://github.com/DmitryRyumin/ICASSP-2023-24-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/ICASSP-2023-24-Papers?style=flat" alt="" /></a></td>
</tr>
<tr>
<td>INTERSPEECH</td>
<td align="center"><a href="https://github.com/DmitryRyumin/INTERSPEECH-2023-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/INTERSPEECH-2023-Papers?style=flat" alt="" /> <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/done.svg" width="20" alt="" /></a></td>
<td align="center"><img src="https://img.shields.io/badge/September-white?logo=github&labelColor=b31b1b" alt="" /></td>
</tr>
<tr>
<td>ISMIR</td>
<td align="center"><a href="https://github.com/yamathcy/ISMIR-2023-Papers" target="_blank"><img src="https://img.shields.io/github/stars/yamathcy/ISMIR-2023-Papers?style=flat" alt="" /> <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/done.svg" width="20" alt="" /></a></td>
<td align="center">:heavy_minus_sign:</td>
</tr>
<tr>
<td colspan="3" align="center"><i>Natural Language Processing (NLP)</i></td>
</tr>
<tr>
<td>EMNLP</td>
<td align="center"><a href="https://github.com/DmitryRyumin/EMNLP-2023-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/EMNLP-2023-Papers?style=flat" alt="" /></a></td>
<td align="center"><img src="https://img.shields.io/badge/December-white?logo=github&labelColor=b31b1b" alt="" /></td>
</tr>
<tr>
<td colspan="3" align="center"><i>Machine Learning (ML)</i></td>
</tr>
<tr>
<td>AAAI</td>
<td align="center">:heavy_minus_sign:</td>
<td align="center"><a href="https://github.com/DmitryRyumin/AAAI-2024-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/AAAI-2024-Papers?style=flat" alt="" /></a></td>
</tr>
<tr>
<td>ICLR</td>
<td align="center">:heavy_minus_sign:</td>
<td align="center"><img src="https://img.shields.io/badge/May-white?logo=github&labelColor=b31b1b" alt="" /></td>
</tr>
<tr>
<td>ICML</td>
<td align="center">:heavy_minus_sign:</td>
<td align="center"><img src="https://img.shields.io/badge/July-white?logo=github&labelColor=b31b1b" alt="" /></td>
</tr>
<tr>
<td>NeurIPS</td>
<td align="center">:heavy_minus_sign:</td>
<td align="center"><img src="https://img.shields.io/badge/December-white?logo=github&labelColor=b31b1b" alt="" /></td>
</tr>
</table>
## Contributors
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/graphs/contributors">
<img src="http://contributors.nn.ci/api?repo=DmitryRyumin/CVPR-2023-24-Papers" alt="" />
</a>
<br />
<br />
> [!NOTE]
> Contributions to improve the completeness of this list are greatly appreciated. If you come across any overlooked papers, please feel free to create pull requests, open issues, or contact me via email. Your participation is crucial to making this repository even better.
## Papers-2024 <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/ai.svg" width="30" alt="" />
<a href="https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers" style="float:left;">
<img src="https://img.shields.io/badge/🤗-NewEraAI--Papers-FFD21F.svg" alt="App" />
</a>
> [!IMPORTANT]
> Papers will be sorted by category as soon as the proceedings are available.
<table>
<thead>
<tr>
<th scope="col">Section</th>
<th scope="col">Papers</th>
<th scope="col"><img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/arxiv-logo.svg" width="45" alt="" /></th>
<th scope="col"><img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/github_code_developer.svg" width="27" alt="" /></th>
<th scope="col"><img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/video.svg" width="27" alt="" /></th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="5" align="center"><i>Main</i></td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/image-and-video-synthesis-and-generation.md">Image and Video Synthesis and Generation</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/image-and-video-synthesis-and-generation.md"><img src="https://img.shields.io/badge/329-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/image-and-video-synthesis-and-generation.md"><img src="https://img.shields.io/badge/140-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/image-and-video-synthesis-and-generation.md"><img src="https://img.shields.io/badge/121-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/image-and-video-synthesis-and-generation.md"><img src="https://img.shields.io/badge/79-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/3d-from-multi-view-and-sensors.md">3D from Multi-View and Sensors</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/3d-from-multi-view-and-sensors.md"><img src="https://img.shields.io/badge/276-42BA16" alt="Papers"></a>
</td>
<td colspan="4" rowspan="35" align="center"><i>Will soon be added</i></td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/humans-face-body-pose-gesture-movement.md">Humans: Face, Body, Pose, Gesture, Movement</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/humans-face-body-pose-gesture-movement.md"><img src="https://img.shields.io/badge/202-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/vision-language-and-reasoning.md">Vision, Language, and Reasoning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/vision-language-and-reasoning.md"><img src="https://img.shields.io/badge/152-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/low-level-vision.md">Low-Level Vision</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/low-level-vision.md"><img src="https://img.shields.io/badge/131-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/recognition-categorization-detection-retrieval.md">Recognition: Categorization, Detection, Retrieval</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/recognition-categorization-detection-retrieval.md"><img src="https://img.shields.io/badge/127-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/transfer-meta-low-shot-continual-or-long-tail-learning.md">Transfer, Meta, Low-Shot, Continual, or Long-Tail Learning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/transfer-meta-low-shot-continual-or-long-tail-learning.md"><img src="https://img.shields.io/badge/123-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/multimodal-learning.md">Multimodal Learning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/multimodal-learning.md"><img src="https://img.shields.io/badge/110-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/segmentation-grouping-and-shape-analysis.md">Segmentation, Grouping and Shape Analysis</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/segmentation-grouping-and-shape-analysis.md"><img src="https://img.shields.io/badge/107-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/3d-from-single-images.md">3D from Single Images</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/3d-from-single-images.md"><img src="https://img.shields.io/badge/106-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/datasets-and-evaluation.md">Datasets and Evaluation</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/datasets-and-evaluation.md"><img src="https://img.shields.io/badge/95-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/navigation-and-autonomous-driving.md">Navigation and Autonomous Driving</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/navigation-and-autonomous-driving.md"><img src="https://img.shields.io/badge/87-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/video-action-and-event-understanding.md">Video: Action and Event Understanding</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/video-action-and-event-understanding.md"><img src="https://img.shields.io/badge/78-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/deep-learning-architectures-and-techniques.md">Deep Learning Architectures and Techniques</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/deep-learning-architectures-and-techniques.md"><img src="https://img.shields.io/badge/69-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/medical-and-biological-vision-cell-microscopy.md">Medical and Biological Vision; Cell Microscopy</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/medical-and-biological-vision-cell-microscopy.md"><img src="https://img.shields.io/badge/66-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/adversarial-attack-and-defense.md">Adversarial Attack and Defense</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/adversarial-attack-and-defense.md"><img src="https://img.shields.io/badge/59-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/scene-analysis-and-understanding.md">Scene Analysis and Understanding</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/scene-analysis-and-understanding.md"><img src="https://img.shields.io/badge/56-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/vision-and-graphics.md">Vision and Graphics</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/vision-and-graphics.md"><img src="https://img.shields.io/badge/56-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/computational-imaging.md">Computational Imaging</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/computational-imaging.md"><img src="https://img.shields.io/badge/53-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/efficient-and-scalable-vision.md">Efficient and Scalable Vision</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/efficient-and-scalable-vision.md"><img src="https://img.shields.io/badge/51-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/self-supervised-or-unsupervised-representation-learning.md">Self-Supervised or Unsupervised Representation Learning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/self-supervised-or-unsupervised-representation-learning.md"><img src="https://img.shields.io/badge/49-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/transparency-fairness-accountability-privacy-ethics-in-vision.md">Transparency, Fairness, Accountability, Privacy, Ethics</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/transparency-fairness-accountability-privacy-ethics-in-vision.md"><img src="https://img.shields.io/badge/49-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/vision-applications-and-systems.md">Vision Applications and Systems</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/vision-applications-and-systems.md"><img src="https://img.shields.io/badge/44-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/video-low-level-analysis-motion-and-tracking.md">Video: Low-Level Analysis, Motion, and Tracking</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/video-low-level-analysis-motion-and-tracking.md"><img src="https://img.shields.io/badge/38-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/robotics.md">Robotics</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/robotics.md"><img src="https://img.shields.io/badge/29-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/embodied-vision-active-agents-simulation.md">Embodied Vision: Active Agents, Simulation</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/embodied-vision-active-agents-simulation.md"><img src="https://img.shields.io/badge/27-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/explainable-ai-for-cv.md">Explainable AI for CV</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/explainable-ai-for-cv.md"><img src="https://img.shields.io/badge/23-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/photogrammetry-and-remote-sensing.md">Photogrammetry and Remote Sensing</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/photogrammetry-and-remote-sensing.md"><img src="https://img.shields.io/badge/19-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/physics-based-vision-and-shape-from-x.md">Physics-based Vision and Shape-from-X</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/physics-based-vision-and-shape-from-x.md"><img src="https://img.shields.io/badge/17-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/machine-learning-other-than-deep-learning.md">Machine Learning (other than Deep Learning)</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/machine-learning-other-than-deep-learning.md"><img src="https://img.shields.io/badge/16-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/biometrics.md">Biometrics</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/biometrics.md"><img src="https://img.shields.io/badge/15-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/document-analysis-and-understanding.md">Document Analysis and Understanding</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/document-analysis-and-understanding.md"><img src="https://img.shields.io/badge/14-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/others.md">Others</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/others.md"><img src="https://img.shields.io/badge/14-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/computer-vision-for-social-good.md">Computer Vision for Social Good</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/computer-vision-for-social-good.md"><img src="https://img.shields.io/badge/12-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/computer-vision-theory.md">Computer Vision Theory</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/computer-vision-theory.md"><img src="https://img.shields.io/badge/11-42BA16" alt="Papers"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/optimization-methods-other-than-deep-learning.md">Optimization Methods (other than Deep Learning)</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2024/main/optimization-methods-other-than-deep-learning.md"><img src="https://img.shields.io/badge/6-42BA16" alt="Papers"></a>
</td>
</tr>
</tbody>
</table>
## Papers-2023 <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/ai.svg" width="30" alt="" />
<a href="https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers" style="float:left;">
<img src="https://img.shields.io/badge/🤗-NewEraAI--Papers-FFD21F.svg" alt="App" />
</a>
<table>
<thead>
<tr>
<th scope="col">Section</th>
<th scope="col">Papers</th>
<th scope="col"><img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/arxiv-logo.svg" width="45" alt="" /></th>
<th scope="col"><img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/github_code_developer.svg" width="27" alt="" /></th>
<th scope="col"><img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/video.svg" width="27" alt="" /></th>
</tr>
</thead>
<tbody>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/3d-from-multi-view-and-sensors.md">3D from Multi-View and Sensors</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/3d-from-multi-view-and-sensors.md"><img src="https://img.shields.io/badge/246-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/3d-from-multi-view-and-sensors.md"><img src="https://img.shields.io/badge/199-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/3d-from-multi-view-and-sensors.md"><img src="https://img.shields.io/badge/186-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/3d-from-multi-view-and-sensors.md"><img src="https://img.shields.io/badge/198-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/image-and-video-synthesis-and-generation.md">Image and Video Synthesis and Generation</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/image-and-video-synthesis-and-generation.md"><img src="https://img.shields.io/badge/186-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/image-and-video-synthesis-and-generation.md"><img src="https://img.shields.io/badge/159-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/image-and-video-synthesis-and-generation.md"><img src="https://img.shields.io/badge/135-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/image-and-video-synthesis-and-generation.md"><img src="https://img.shields.io/badge/142-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/humans-face-body-pose-gesture-movement.md">Humans: Face, Body, Pose, Gesture, Movement</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/humans-face-body-pose-gesture-movement.md"><img src="https://img.shields.io/badge/166-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/humans-face-body-pose-gesture-movement.md"><img src="https://img.shields.io/badge/123-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/humans-face-body-pose-gesture-movement.md"><img src="https://img.shields.io/badge/114-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/humans-face-body-pose-gesture-movement.md"><img src="https://img.shields.io/badge/139-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/transfer-meta-low-shot-continual-or-long-tail-learning.md">Transfer, Meta, Low-Shot, Continual, or Long-Tail Learning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/transfer-meta-low-shot-continual-or-long-tail-learning.md"><img src="https://img.shields.io/badge/153-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/transfer-meta-low-shot-continual-or-long-tail-learning.md"><img src="https://img.shields.io/badge/113-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/transfer-meta-low-shot-continual-or-long-tail-learning.md"><img src="https://img.shields.io/badge/118-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/transfer-meta-low-shot-continual-or-long-tail-learning.md"><img src="https://img.shields.io/badge/109-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/recognition-categorization-detection-retrieval.md">Recognition: Categorization, Detection, Retrieval</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/recognition-categorization-detection-retrieval.md"><img src="https://img.shields.io/badge/139-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/recognition-categorization-detection-retrieval.md"><img src="https://img.shields.io/badge/101-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/recognition-categorization-detection-retrieval.md"><img src="https://img.shields.io/badge/89-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/recognition-categorization-detection-retrieval.md"><img src="https://img.shields.io/badge/98-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/vision-language-and-reasoning.md">Vision, Language, and Reasoning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/vision-language-and-reasoning.md"><img src="https://img.shields.io/badge/118-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/vision-language-and-reasoning.md"><img src="https://img.shields.io/badge/94-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/vision-language-and-reasoning.md"><img src="https://img.shields.io/badge/85-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/vision-language-and-reasoning.md"><img src="https://img.shields.io/badge/90-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/low-level-vision.md">Low-Level Vision</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/low-level-vision.md"><img src="https://img.shields.io/badge/126-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/low-level-vision.md"><img src="https://img.shields.io/badge/84-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/low-level-vision.md"><img src="https://img.shields.io/badge/104-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/low-level-vision.md"><img src="https://img.shields.io/badge/97-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/segmentation-grouping-and-shape-analysis.md">Segmentation, Grouping and Shape Analysis</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/segmentation-grouping-and-shape-analysis.md"><img src="https://img.shields.io/badge/111-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/segmentation-grouping-and-shape-analysis.md"><img src="https://img.shields.io/badge/79-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/segmentation-grouping-and-shape-analysis.md"><img src="https://img.shields.io/badge/84-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/segmentation-grouping-and-shape-analysis.md"><img src="https://img.shields.io/badge/81-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/deep-learning-architectures-and-techniques.md">Deep Learning Architectures and Techniques</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/deep-learning-architectures-and-techniques.md"><img src="https://img.shields.io/badge/91-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/deep-learning-architectures-and-techniques.md"><img src="https://img.shields.io/badge/71-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/deep-learning-architectures-and-techniques.md"><img src="https://img.shields.io/badge/72-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/deep-learning-architectures-and-techniques.md"><img src="https://img.shields.io/badge/68-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/multimodal-learning.md">Multimodal Learning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/multimodal-learning.md"><img src="https://img.shields.io/badge/89-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/multimodal-learning.md"><img src="https://img.shields.io/badge/76-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/multimodal-learning.md"><img src="https://img.shields.io/badge/65-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/multimodal-learning.md"><img src="https://img.shields.io/badge/56-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/3d-from-single-images.md">3D from Single Images</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/3d-from-single-images.md"><img src="https://img.shields.io/badge/91-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/3d-from-single-images.md"><img src="https://img.shields.io/badge/78-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/3d-from-single-images.md"><img src="https://img.shields.io/badge/80-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/3d-from-single-images.md"><img src="https://img.shields.io/badge/70-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/medical-and-biological-vision-cell-microscopy.md">Medical and Biological Vision; Cell Microscopy</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/medical-and-biological-vision-cell-microscopy.md"><img src="https://img.shields.io/badge/52-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/medical-and-biological-vision-cell-microscopy.md"><img src="https://img.shields.io/badge/39-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/medical-and-biological-vision-cell-microscopy.md"><img src="https://img.shields.io/badge/37-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/medical-and-biological-vision-cell-microscopy.md"><img src="https://img.shields.io/badge/36-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/video-action-and-event-understanding.md">Video: Action and Event Understanding</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/video-action-and-event-understanding.md"><img src="https://img.shields.io/badge/82-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/video-action-and-event-understanding.md"><img src="https://img.shields.io/badge/64-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/video-action-and-event-understanding.md"><img src="https://img.shields.io/badge/54-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/video-action-and-event-understanding.md"><img src="https://img.shields.io/badge/63-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/navigation-and-autonomous-driving.md">Navigation and Autonomous Driving</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/navigation-and-autonomous-driving.md"><img src="https://img.shields.io/badge/69-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/navigation-and-autonomous-driving.md"><img src="https://img.shields.io/badge/54-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/navigation-and-autonomous-driving.md"><img src="https://img.shields.io/badge/48-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/navigation-and-autonomous-driving.md"><img src="https://img.shields.io/badge/54-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/self-supervised-or-unsupervised-representation-learning.md">Self-Supervised or Unsupervised Representation Learning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/self-supervised-or-unsupervised-representation-learning.md"><img src="https://img.shields.io/badge/71-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/self-supervised-or-unsupervised-representation-learning.md"><img src="https://img.shields.io/badge/58-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/self-supervised-or-unsupervised-representation-learning.md"><img src="https://img.shields.io/badge/57-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/self-supervised-or-unsupervised-representation-learning.md"><img src="https://img.shields.io/badge/48-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/datasets-and-evaluation.md">Datasets and Evaluation</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/datasets-and-evaluation.md"><img src="https://img.shields.io/badge/54-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/datasets-and-evaluation.md"><img src="https://img.shields.io/badge/39-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/datasets-and-evaluation.md"><img src="https://img.shields.io/badge/43-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/datasets-and-evaluation.md"><img src="https://img.shields.io/badge/36-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/scene-analysis-and-understanding.md">Scene Analysis and Understanding</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/scene-analysis-and-understanding.md"><img src="https://img.shields.io/badge/54-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/scene-analysis-and-understanding.md"><img src="https://img.shields.io/badge/42-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/scene-analysis-and-understanding.md"><img src="https://img.shields.io/badge/45-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/scene-analysis-and-understanding.md"><img src="https://img.shields.io/badge/42-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/adversarial-attack-and-defense.md">Adversarial Attack and Defense</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/adversarial-attack-and-defense.md"><img src="https://img.shields.io/badge/61-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/adversarial-attack-and-defense.md"><img src="https://img.shields.io/badge/47-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/adversarial-attack-and-defense.md"><img src="https://img.shields.io/badge/44-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/adversarial-attack-and-defense.md"><img src="https://img.shields.io/badge/40-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/efficient-and-scalable-vision.md">Efficient and Scalable Vision</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/efficient-and-scalable-vision.md"><img src="https://img.shields.io/badge/48-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/efficient-and-scalable-vision.md"><img src="https://img.shields.io/badge/36-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/efficient-and-scalable-vision.md"><img src="https://img.shields.io/badge/32-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/efficient-and-scalable-vision.md"><img src="https://img.shields.io/badge/31-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/computational-imaging.md">Computational Imaging</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/computational-imaging.md"><img src="https://img.shields.io/badge/53-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/computational-imaging.md"><img src="https://img.shields.io/badge/29-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/computational-imaging.md"><img src="https://img.shields.io/badge/31-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/computational-imaging.md"><img src="https://img.shields.io/badge/43-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/video-low-level-analysis-motion-and-tracking.md">Video: Low-Level Analysis, Motion, and Tracking</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/video-low-level-analysis-motion-and-tracking.md"><img src="https://img.shields.io/badge/46-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/video-low-level-analysis-motion-and-tracking.md"><img src="https://img.shields.io/badge/33-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/video-low-level-analysis-motion-and-tracking.md"><img src="https://img.shields.io/badge/35-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/video-low-level-analysis-motion-and-tracking.md"><img src="https://img.shields.io/badge/36-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/vision-applications-and-systems.md">Vision Applications and Systems</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/vision-applications-and-systems.md"><img src="https://img.shields.io/badge/35-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/vision-applications-and-systems.md"><img src="https://img.shields.io/badge/27-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/vision-applications-and-systems.md"><img src="https://img.shields.io/badge/26-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/vision-applications-and-systems.md"><img src="https://img.shields.io/badge/29-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/vision-and-graphics.md">Vision and Graphics</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/vision-and-graphics.md"><img src="https://img.shields.io/badge/32-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/vision-and-graphics.md"><img src="https://img.shields.io/badge/28-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/vision-and-graphics.md"><img src="https://img.shields.io/badge/22-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/vision-and-graphics.md"><img src="https://img.shields.io/badge/27-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/robotics.md">Robotics</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/robotics.md"><img src="https://img.shields.io/badge/23-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/robotics.md"><img src="https://img.shields.io/badge/18-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/robotics.md"><img src="https://img.shields.io/badge/13-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/robotics.md"><img src="https://img.shields.io/badge/18-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/transparency-fairness-accountability-privacy-ethics-in-vision.md">Transparency, Fairness, Accountability, Privacy, Ethics in Vision</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/transparency-fairness-accountability-privacy-ethics-in-vision.md"><img src="https://img.shields.io/badge/30-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/transparency-fairness-accountability-privacy-ethics-in-vision.md"><img src="https://img.shields.io/badge/22-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/transparency-fairness-accountability-privacy-ethics-in-vision.md"><img src="https://img.shields.io/badge/24-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/transparency-fairness-accountability-privacy-ethics-in-vision.md"><img src="https://img.shields.io/badge/22-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/explainable-ai-for-cv.md">Explainable AI for CV</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/explainable-ai-for-cv.md"><img src="https://img.shields.io/badge/24-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/explainable-ai-for-cv.md"><img src="https://img.shields.io/badge/21-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/explainable-ai-for-cv.md"><img src="https://img.shields.io/badge/18-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/explainable-ai-for-cv.md"><img src="https://img.shields.io/badge/19-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/embodied-vision-active-agents-simulation.md">Embodied Vision: Active Agents, Simulation</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/embodied-vision-active-agents-simulation.md"><img src="https://img.shields.io/badge/14-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/embodied-vision-active-agents-simulation.md"><img src="https://img.shields.io/badge/11-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/embodied-vision-active-agents-simulation.md"><img src="https://img.shields.io/badge/12-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/embodied-vision-active-agents-simulation.md"><img src="https://img.shields.io/badge/10-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/document-analysis-and-understanding.md">Document Analysis and Understanding</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/document-analysis-and-understanding.md"><img src="https://img.shields.io/badge/12-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/document-analysis-and-understanding.md"><img src="https://img.shields.io/badge/9-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/document-analysis-and-understanding.md"><img src="https://img.shields.io/badge/9-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/document-analysis-and-understanding.md"><img src="https://img.shields.io/badge/12-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/machine-learning-other-than-deep-learning.md">Machine Learning (other than Deep Learning)</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/machine-learning-other-than-deep-learning.md"><img src="https://img.shields.io/badge/14-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/machine-learning-other-than-deep-learning.md"><img src="https://img.shields.io/badge/8-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/machine-learning-other-than-deep-learning.md"><img src="https://img.shields.io/badge/7-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/machine-learning-other-than-deep-learning.md"><img src="https://img.shields.io/badge/7-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/physics-based-vision-and-shape-from-x.md">Physics-based Vision and Shape-from-X</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/physics-based-vision-and-shape-from-x.md"><img src="https://img.shields.io/badge/12-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/physics-based-vision-and-shape-from-x.md"><img src="https://img.shields.io/badge/8-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/physics-based-vision-and-shape-from-x.md"><img src="https://img.shields.io/badge/7-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/physics-based-vision-and-shape-from-x.md"><img src="https://img.shields.io/badge/10-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/biometrics.md">Biometrics</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/biometrics.md"><img src="https://img.shields.io/badge/11-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/biometrics.md"><img src="https://img.shields.io/badge/9-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/biometrics.md"><img src="https://img.shields.io/badge/6-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/biometrics.md"><img src="https://img.shields.io/badge/9-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/optimization-methods-other-than-deep-learning.md">Optimization Methods (other than Deep Learning)</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/optimization-methods-other-than-deep-learning.md"><img src="https://img.shields.io/badge/12-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/optimization-methods-other-than-deep-learning.md"><img src="https://img.shields.io/badge/3-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/optimization-methods-other-than-deep-learning.md"><img src="https://img.shields.io/badge/8-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/optimization-methods-other-than-deep-learning.md"><img src="https://img.shields.io/badge/8-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/photogrammetry-and-remote-sensing.md">Photogrammetry and Remote Sensing</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/photogrammetry-and-remote-sensing.md"><img src="https://img.shields.io/badge/8-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/photogrammetry-and-remote-sensing.md"><img src="https://img.shields.io/badge/6-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/photogrammetry-and-remote-sensing.md"><img src="https://img.shields.io/badge/6-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/photogrammetry-and-remote-sensing.md"><img src="https://img.shields.io/badge/6-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/computer-vision-theory.md">Computer Vision Theory</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/computer-vision-theory.md"><img src="https://img.shields.io/badge/5-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/computer-vision-theory.md"><img src="https://img.shields.io/badge/4-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/computer-vision-theory.md"><img src="https://img.shields.io/badge/3-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/computer-vision-theory.md"><img src="https://img.shields.io/badge/5-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/computer-vision-for-social-good.md">Computer Vision for Social Good</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/computer-vision-for-social-good.md"><img src="https://img.shields.io/badge/5-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/computer-vision-for-social-good.md"><img src="https://img.shields.io/badge/2-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/computer-vision-for-social-good.md"><img src="https://img.shields.io/badge/4-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/computer-vision-for-social-good.md"><img src="https://img.shields.io/badge/2-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/others.md">Others</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/others.md"><img src="https://img.shields.io/badge/12-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/others.md"><img src="https://img.shields.io/badge/5-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/others.md"><img src="https://img.shields.io/badge/9-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/sections/2023/main/others.md"><img src="https://img.shields.io/badge/7-FF0000" alt="Videos"></a>
</td>
</tr>
</tbody>
</table>
Key Terms
<p align="center">
<img width="500" src="https://github.com/DmitryRyumin/CVPR-2023-24-Papers/blob/main/images/Keywords2024.png?raw=true" alt="Key Terms">
</p>
Star History
<p align="center">
<a href="https://star-history.com/#Dmitryryumin/CVPR-2023-24-Papers&Date" target="_blank">
<img width="500" src="https://api.star-history.com/svg?repos=Dmitryryumin/CVPR-2023-24-Papers&type=Date" alt="Star History Chart">
</a>
</p>