<p align="center">
<img src="https://capsule-render.vercel.app/api?type=waving&height=115&color=2C2A2E&text=ICCV-2023-Papers§ion=header&reversal=false&textBg=false&fontAlign=50&fontSize=36&fontColor=FFFFFF&animation=scaleIn&fontAlignY=18" alt="ICCV-2023-Papers">
</p>
<table align="center">
<tr>
<td><strong>General Information</strong></td>
<td>
<a href="https://github.com/sindresorhus/awesome">
<img src="https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg" alt="Awesome">
</a>
<a href="https://iccv2023.thecvf.com">
<img src="http://img.shields.io/badge/ICCV-2023-7395C5.svg" alt="Conference">
</a>
<img src="https://img.shields.io/badge/version-v1.0.0-rc0" alt="Version">
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/LICENSE">
<img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License: MIT">
</a>
</td>
</tr>
<tr>
<td><strong>Repository Size and Activity</strong></td>
<td>
<img src="https://img.shields.io/github/repo-size/DmitryRyumin/ICCV-2023-Papers" alt="GitHub repo size">
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/commits/main/">
<img src="https://img.shields.io/github/commit-activity/t/dmitryryumin/ICCV-2023-Papers" alt="GitHub commit activity (branch)">
</a>
</td>
</tr>
<tr>
<td><strong>Contribution Statistics</strong></td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/graphs/contributors">
<img src="https://img.shields.io/github/contributors/dmitryryumin/ICCV-2023-Papers" alt="GitHub contributors">
</a>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/issues?q=is%3Aissue+is%3Aclosed">
<img src="https://img.shields.io/github/issues-closed/DmitryRyumin/ICCV-2023-Papers" alt="GitHub closed issues">
</a>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/issues">
<img src="https://img.shields.io/github/issues/DmitryRyumin/ICCV-2023-Papers" alt="GitHub issues">
</a>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/pulls?q=is%3Apr+is%3Aclosed">
<img src="https://img.shields.io/github/issues-pr-closed/DmitryRyumin/ICCV-2023-Papers" alt="GitHub closed pull requests">
</a>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/pulls">
<img src="https://img.shields.io/github/issues-pr/dmitryryumin/ICCV-2023-Papers" alt="GitHub pull requests">
</a>
</td>
</tr>
<tr>
<td><strong>Other Metrics</strong></td>
<td>
<img src="https://img.shields.io/github/last-commit/DmitryRyumin/ICCV-2023-Papers" alt="GitHub last commit">
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/watchers">
<img src="https://img.shields.io/github/watchers/dmitryryumin/ICCV-2023-Papers?style=flat" alt="GitHub watchers">
</a>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/forks">
<img src="https://img.shields.io/github/forks/dmitryryumin/ICCV-2023-Papers?style=flat" alt="GitHub forks">
</a>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/stargazers">
<img src="https://img.shields.io/github/stars/dmitryryumin/ICCV-2023-Papers?style=flat" alt="GitHub Repo stars">
</a>
<img src="https://api.visitorbadge.io/api/combined?path=https%3A%2F%2Fgithub.com%2FDmitryRyumin%2FICCV-2023-Papers&label=Visitors&countColor=%23263759&style=flat" alt="Visitors">
</td>
</tr>
<tr>
<td><strong>GitHub Actions</strong></td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/actions/workflows/copy_parse_markdown.yml/badge.svg">
<img src="https://github.com/DmitryRyumin/ICCV-2023-Papers/actions/workflows/copy_parse_markdown.yml/badge.svg" alt="Copy Parse Markdown and Generate JSON from Source Repo">
</a>
<br />
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/actions/workflows/parse_markdown.yml/badge.svg?branch=main">
<img src="https://github.com/DmitryRyumin/ICCV-2023-Papers/actions/workflows/parse_markdown.yml/badge.svg?branch=main" alt="Parse Markdown and Generate JSON">
</a>
<br />
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/actions/workflows/sync_papers_with_hf.yml">
<img src="https://github.com/DmitryRyumin/ICCV-2023-Papers/actions/workflows/sync_papers_with_hf.yml/badge.svg" alt="Sync Hugging Face App">
</a>
</td>
</tr>
<tr>
<td><strong>Application</strong></td>
<td>
<a href="https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers" style="float:left;">
<img src="https://img.shields.io/badge/🤗-NewEraAI--Papers-FFD21F.svg" alt="App" />
</a>
</td>
</tr>
<tr>
<td colspan="2" align="center"><strong>Progress Status</strong></td>
</tr>
<tr>
<td><strong>Main</strong></td>
<td>
<div style="float:left;">
<img src="https://geps.dev/progress/100?successColor=006600" alt="" />
<img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/completed_checkmark_done.svg" width="25" alt="" />
</div>
</td>
</tr>
<tr>
<td><strong>Workshops</strong></td>
<td>
<!-- 127/497 -->
<div style="float:left;">
<img src="https://geps.dev/progress/26?successColor=006600" alt="" />
<img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/completed_checkmark_done.svg" width="25" alt="" />
</div>
</td>
</tr>
</table>
ICCV 2023 Papers: Explore a comprehensive collection of cutting-edge research papers presented at ICCV 2023, the premier computer vision conference. Stay up to date with the latest advances in computer vision and deep learning. Code implementations are included. :star: the repository to support the development of visual intelligence!
<p align="center">
<a href="https://iccv2023.thecvf.com/" target="_blank">
<img width="600" src="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/images/ICCV2023-banner.jpg" alt="ICCV 2023">
</a>
</p>
:point_right: *This count includes repositories on GitHub, GitLab, and Hugging Face, as well as distributions on PyPI, while excluding Web Page or GitHub Page links.*
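
As a rough, purely illustrative reading of that counting rule, the hypothetical helper below (not part of this repository's tooling; the host lists are assumptions drawn from the sentence above) decides whether a link counts toward the code badge:

```python
from urllib.parse import urlparse

# Hosts that count toward the code badge (per the rule above);
# plain web pages and GitHub Pages sites are excluded.
COUNTED_HOSTS = ("github.com", "gitlab.com", "huggingface.co", "pypi.org")
EXCLUDED_HOSTS = ("github.io",)

def counts_as_code(url: str) -> bool:
    """Return True if a link should be counted as an open-code link."""
    host = urlparse(url).netloc.lower()
    if any(host == h or host.endswith("." + h) for h in EXCLUDED_HOSTS):
        return False
    return any(host == h or host.endswith("." + h) for h in COUNTED_HOSTS)

# Example: only the first two links would be counted.
links = [
    "https://github.com/example/paper-code",
    "https://pypi.org/project/example-package/",
    "https://example.github.io/project-page/",
]
print(sum(counts_as_code(u) for u in links))  # -> 2
```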
> [!TIP]
> The online version of the ICCV 2023 Conference Programme comprises a list of all accepted full papers, their presentation order, and their designated presentation times.
<a href="https://github.com/DmitryRyumin/NewEraAI-Papers" style="float:left;">
<img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/arrow_click_cursor_pointer.png" width="25" alt="" />
Other collections of the best AI conferences
</a>
<br />
<br />
<a href="https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers" style="float:left;">
<img src="https://img.shields.io/badge/🤗-NewEraAI--Papers-FFD21F.svg" alt="App" />
</a>
<br />
<br />
> [!IMPORTANT]
> The conference table is kept up to date at all times.
<table>
<tr>
<td rowspan="2" align="center"><strong>Conference</strong></td>
<td colspan="2" align="center"><strong>Year</strong></td>
</tr>
<tr>
<td colspan="1" align="center"><i>2023</i></td>
<td colspan="1" align="center"><i>2024</i></td>
</tr>
<tr>
<td colspan="3" align="center"><i>Computer Vision (CV)</i></td>
</tr>
<tr>
<td>CVPR</td>
<td colspan="2" align="center"><a href="https://github.com/DmitryRyumin/CVPR-2023-24-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/CVPR-2023-24-Papers?style=flat" alt="" /></a></td>
</tr>
<tr>
<td>ICCV</td>
<td align="center"><a href="https://github.com/DmitryRyumin/ICCV-2023-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/ICCV-2023-Papers?style=flat" alt="" /> <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/done.svg" width="20" alt="" /></a></td>
<td align="center"><img src="https://img.shields.io/badge/Not%20Scheduled-CC5540" alt=""/></td>
</tr>
<tr>
<td>ECCV</td>
<td align="center"><img src="https://img.shields.io/badge/Not%20Scheduled-CC5540" alt=""/></td>
<td align="center"><img src="https://img.shields.io/badge/October-white?logo=github&labelColor=b31b1b" alt="" /></td>
</tr>
<tr>
<td>WACV</td>
<td align="center">:heavy_minus_sign:</td>
<td align="center"><a href="https://github.com/DmitryRyumin/WACV-2024-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/WACV-2024-Papers?style=flat" alt="" /> <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/done.svg" width="20" alt="" /></a></td>
</tr>
<tr>
<td>FG</td>
<td align="center">:heavy_minus_sign:</td>
<td align="center"><a href="https://github.com/DmitryRyumin/FG-2024-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/FG-2024-Papers?style=flat" alt="" /></a></td>
</tr>
<tr>
<td colspan="3" align="center"><i>Speech/Signal Processing (SP/SigProc)</i></td>
</tr>
<tr>
<td>ICASSP</td>
<td colspan="2" align="center"><a href="https://github.com/DmitryRyumin/ICASSP-2023-24-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/ICASSP-2023-24-Papers?style=flat" alt="" /></a></td>
</tr>
<tr>
<td>INTERSPEECH</td>
<td align="center"><a href="https://github.com/DmitryRyumin/INTERSPEECH-2023-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/INTERSPEECH-2023-Papers?style=flat" alt="" /> <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/done.svg" width="20" alt="" /></a></td>
<td align="center"><img src="https://img.shields.io/badge/September-white?logo=github&labelColor=b31b1b" alt="" /></td>
</tr>
<tr>
<td>ISMIR</td>
<td align="center"><a href="https://github.com/yamathcy/ISMIR-2023-Papers" target="_blank"><img src="https://img.shields.io/github/stars/yamathcy/ISMIR-2023-Papers?style=flat" alt="" /> <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/done.svg" width="20" alt="" /></a></td>
<td align="center">:heavy_minus_sign:</td>
</tr>
<tr>
<td colspan="3" align="center"><i>Natural Language Processing (NLP)</i></td>
</tr>
<tr>
<td>EMNLP</td>
<td align="center"><a href="https://github.com/DmitryRyumin/EMNLP-2023-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/EMNLP-2023-Papers?style=flat" alt="" /></a></td>
<td align="center"><img src="https://img.shields.io/badge/December-white?logo=github&labelColor=b31b1b" alt="" /></td>
</tr>
<tr>
<td colspan="3" align="center"><i>Machine Learning (ML)</i></td>
</tr>
<tr>
<td>AAAI</td>
<td align="center">:heavy_minus_sign:</td>
<td align="center"><a href="https://github.com/DmitryRyumin/AAAI-2024-Papers" target="_blank"><img src="https://img.shields.io/github/stars/DmitryRyumin/AAAI-2024-Papers?style=flat" alt="" /></a></td>
</tr>
<tr>
<td>ICLR</td>
<td align="center">:heavy_minus_sign:</td>
<td align="center"><img src="https://img.shields.io/badge/May-white?logo=github&labelColor=b31b1b" alt="" /></td>
</tr>
<tr>
<td>ICML</td>
<td align="center">:heavy_minus_sign:</td>
<td align="center"><img src="https://img.shields.io/badge/July-white?logo=github&labelColor=b31b1b" alt="" /></td>
</tr>
<tr>
<td>NeurIPS</td>
<td align="center">:heavy_minus_sign:</td>
<td align="center"><img src="https://img.shields.io/badge/December-white?logo=github&labelColor=b31b1b" alt="" /></td>
</tr>
</table>
## Contributors
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/graphs/contributors">
<img src="http://contributors.nn.ci/api?repo=DmitryRyumin/ICCV-2023-Papers" alt="" />
</a>
<br />
<br />
> [!NOTE]
> Contributions to improve the completeness of this list are greatly appreciated. If you come across any overlooked papers, please feel free to create pull requests, open issues, or contact me via email. Your participation is crucial to making this repository even better.
## Papers <img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/ai.svg" width="30" alt="" />
<a href="https://huggingface.co/spaces/DmitryRyumin/NewEraAI-Papers" style="float:left;">
<img src="https://img.shields.io/badge/🤗-NewEraAI--Papers-FFD21F.svg" alt="App" />
</a>
<!-- > :exclamation: Final paper links will be added post-conference. -->
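
The per-section counts in the table below are drawn from the linked section Markdown files, and the "Parse Markdown and Generate JSON" workflow badges near the top of this README track the automation around them. Purely as a hedged sketch (the script, regular expressions, and field names here are assumptions, not the repository's actual tooling), such a parsing step could look roughly like this:

```python
import json
import re
from pathlib import Path

# Hypothetical layout: each Markdown table row holds the paper title in its
# first cell and badge links (arXiv, code, video) in the remaining cells.
ROW = re.compile(r"^\|\s*(?P<title>[^|]+?)\s*\|(?P<rest>.*)\|\s*$")
LINK = re.compile(r"\((https?://[^)\s]+)\)")

def parse_section(path: Path) -> list[dict]:
    """Turn one section Markdown file into a list of paper records."""
    papers = []
    for line in path.read_text(encoding="utf-8").splitlines():
        match = ROW.match(line)
        if not match:
            continue  # not a table row
        links = LINK.findall(match.group("rest"))
        if not links:
            continue  # header and separator rows carry no links
        papers.append({"title": match.group("title"), "links": links})
    return papers

if __name__ == "__main__":
    records = parse_section(Path("sections/2023/main/vision-and-robotics.md"))
    Path("vision-and-robotics.json").write_text(json.dumps(records, indent=2), encoding="utf-8")
```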
<table>
<thead>
<tr>
<th scope="col">Section</th>
<th scope="col">Papers</th>
<th scope="col"><img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/arxiv-logo.svg" width="45" alt="" /></th>
<th scope="col"><img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/github_code_developer.svg" width="27" alt="" /></th>
<th scope="col"><img src="https://cdn.jsdelivr.net/gh/DmitryRyumin/NewEraAI-Papers@main/images/video.svg" width="27" alt="" /></th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="5" align="center"><i>Main</i></td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/3d-from-multi-view-and-sensors.md">3D from Multi-View and Sensors</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/3d-from-multi-view-and-sensors.md"><img src="https://img.shields.io/badge/173-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/3d-from-multi-view-and-sensors.md"><img src="https://img.shields.io/badge/136-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/3d-from-multi-view-and-sensors.md"><img src="https://img.shields.io/badge/110-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/3d-from-multi-view-and-sensors.md"><img src="https://img.shields.io/badge/37-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/adversarial-attack-and-defense.md">Adversarial Attack and Defense</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/adversarial-attack-and-defense.md"><img src="https://img.shields.io/badge/53-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/adversarial-attack-and-defense.md"><img src="https://img.shields.io/badge/41-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/adversarial-attack-and-defense.md"><img src="https://img.shields.io/badge/36-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/adversarial-attack-and-defense.md"><img src="https://img.shields.io/badge/2-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-robotics.md">Vision and Robotics</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-robotics.md"><img src="https://img.shields.io/badge/11-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-robotics.md"><img src="https://img.shields.io/badge/6-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-robotics.md"><img src="https://img.shields.io/badge/6-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-robotics.md"><img src="https://img.shields.io/badge/2-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-graphics.md">Vision and Graphics</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-graphics.md"><img src="https://img.shields.io/badge/22-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-graphics.md"><img src="https://img.shields.io/badge/19-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-graphics.md"><img src="https://img.shields.io/badge/15-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-graphics.md"><img src="https://img.shields.io/badge/7-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/segmentation-grouping-and-shape-analysis.md">Segmentation, Grouping and Shape Analysis</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/segmentation-grouping-and-shape-analysis.md"><img src="https://img.shields.io/badge/72-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/segmentation-grouping-and-shape-analysis.md"><img src="https://img.shields.io/badge/56-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/segmentation-grouping-and-shape-analysis.md"><img src="https://img.shields.io/badge/47-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/segmentation-grouping-and-shape-analysis.md"><img src="https://img.shields.io/badge/6-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-categorization.md">Recognition: Categorization</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-categorization.md"><img src="https://img.shields.io/badge/50-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-categorization.md"><img src="https://img.shields.io/badge/35-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-categorization.md"><img src="https://img.shields.io/badge/32-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-categorization.md"><img src="https://img.shields.io/badge/1-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/explainable-ai-for-cv.md">Explainable AI for CV</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/explainable-ai-for-cv.md"><img src="https://img.shields.io/badge/21-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/explainable-ai-for-cv.md"><img src="https://img.shields.io/badge/17-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/explainable-ai-for-cv.md"><img src="https://img.shields.io/badge/15-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/explainable-ai-for-cv.md"><img src="https://img.shields.io/badge/1-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/neural-generative-models.md">Neural Generative Models</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/neural-generative-models.md"><img src="https://img.shields.io/badge/34-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/neural-generative-models.md"><img src="https://img.shields.io/badge/28-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/neural-generative-models.md"><img src="https://img.shields.io/badge/22-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/neural-generative-models.md"><img src="https://img.shields.io/badge/7-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-language.md">Vision and Language</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-language.md"><img src="https://img.shields.io/badge/127-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-language.md"><img src="https://img.shields.io/badge/108-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-language.md"><img src="https://img.shields.io/badge/79-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-language.md"><img src="https://img.shields.io/badge/11-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-graphics-and-robotics.md">Vision, Graphics, and Robotics</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-graphics-and-robotics.md"><img src="https://img.shields.io/badge/8-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-graphics-and-robotics.md"><img src="https://img.shields.io/badge/8-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-graphics-and-robotics.md"><img src="https://img.shields.io/badge/8-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-graphics-and-robotics.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/privacy-security-fairness-and-explainability.md">Privacy, Security, Fairness, and Explainability</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/privacy-security-fairness-and-explainability.md"><img src="https://img.shields.io/badge/8-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/privacy-security-fairness-and-explainability.md"><img src="https://img.shields.io/badge/8-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/privacy-security-fairness-and-explainability.md"><img src="https://img.shields.io/badge/7-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/privacy-security-fairness-and-explainability.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/fairness-privacy-ethics-social-good-transparency-accountability-in-vision.md">Fairness, Privacy, Ethics, Social-good, Transparency, Accountability in Vision</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/fairness-privacy-ethics-social-good-transparency-accountability-in-vision.md"><img src="https://img.shields.io/badge/41-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/fairness-privacy-ethics-social-good-transparency-accountability-in-vision.md"><img src="https://img.shields.io/badge/29-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/fairness-privacy-ethics-social-good-transparency-accountability-in-vision.md"><img src="https://img.shields.io/badge/23-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/fairness-privacy-ethics-social-good-transparency-accountability-in-vision.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/first-person-egocentric-vision.md">First Person (Egocentric) Vision</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/first-person-egocentric-vision.md"><img src="https://img.shields.io/badge/7-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/first-person-egocentric-vision.md"><img src="https://img.shields.io/badge/6-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/first-person-egocentric-vision.md"><img src="https://img.shields.io/badge/3-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/first-person-egocentric-vision.md"><img src="https://img.shields.io/badge/1-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/representation-learning.md">Representation Learning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/representation-learning.md"><img src="https://img.shields.io/badge/40-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/representation-learning.md"><img src="https://img.shields.io/badge/30-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/representation-learning.md"><img src="https://img.shields.io/badge/28-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/representation-learning.md"><img src="https://img.shields.io/badge/1-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/deep-learning-architectures-and-techniques.md">Deep Learning Architectures and Techniques</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/deep-learning-architectures-and-techniques.md"><img src="https://img.shields.io/badge/45-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/deep-learning-architectures-and-techniques.md"><img src="https://img.shields.io/badge/38-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/deep-learning-architectures-and-techniques.md"><img src="https://img.shields.io/badge/31-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/deep-learning-architectures-and-techniques.md"><img src="https://img.shields.io/badge/2-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-detection.md">Recognition: Detection</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-detection.md"><img src="https://img.shields.io/badge/73-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-detection.md"><img src="https://img.shields.io/badge/58-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-detection.md"><img src="https://img.shields.io/badge/50-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-detection.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/image-and-video-synthesis.md">Image and Video Synthesis</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/image-and-video-synthesis.md"><img src="https://img.shields.io/badge/135-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/image-and-video-synthesis.md"><img src="https://img.shields.io/badge/118-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/image-and-video-synthesis.md"><img src="https://img.shields.io/badge/104-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/image-and-video-synthesis.md"><img src="https://img.shields.io/badge/30-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-audio.md">Vision and Audio</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-audio.md"><img src="https://img.shields.io/badge/12-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-audio.md"><img src="https://img.shields.io/badge/11-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-audio.md"><img src="https://img.shields.io/badge/6-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-and-audio.md"><img src="https://img.shields.io/badge/1-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-segmentation-and-shape-analysis.md">Recognition, Segmentation, and Shape Analysis</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-segmentation-and-shape-analysis.md"><img src="https://img.shields.io/badge/12-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-segmentation-and-shape-analysis.md"><img src="https://img.shields.io/badge/10-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-segmentation-and-shape-analysis.md"><img src="https://img.shields.io/badge/10-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-segmentation-and-shape-analysis.md"><img src="https://img.shields.io/badge/1-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/generative-ai.md">Generative AI</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/generative-ai.md"><img src="https://img.shields.io/badge/24-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/generative-ai.md"><img src="https://img.shields.io/badge/23-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/generative-ai.md"><img src="https://img.shields.io/badge/17-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/generative-ai.md"><img src="https://img.shields.io/badge/6-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/humans-3d-modeling-and-driving.md">Humans, 3D Modeling, and Driving</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/humans-3d-modeling-and-driving.md"><img src="https://img.shields.io/badge/12-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/humans-3d-modeling-and-driving.md"><img src="https://img.shields.io/badge/10-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/humans-3d-modeling-and-driving.md"><img src="https://img.shields.io/badge/7-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/humans-3d-modeling-and-driving.md"><img src="https://img.shields.io/badge/3-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/low-level-vision-and-theory.md">Low-Level Vision and Theory</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/low-level-vision-and-theory.md"><img src="https://img.shields.io/badge/12-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/low-level-vision-and-theory.md"><img src="https://img.shields.io/badge/6-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/low-level-vision-and-theory.md"><img src="https://img.shields.io/badge/7-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/low-level-vision-and-theory.md"><img src="https://img.shields.io/badge/4-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/navigation-and-autonomous-driving.md">Navigation and Autonomous Driving</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/navigation-and-autonomous-driving.md"><img src="https://img.shields.io/badge/51-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/navigation-and-autonomous-driving.md"><img src="https://img.shields.io/badge/45-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/navigation-and-autonomous-driving.md"><img src="https://img.shields.io/badge/29-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/navigation-and-autonomous-driving.md"><img src="https://img.shields.io/badge/5-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/3d-from-a-single-image-and-shape-from-x.md">3D from a Single Image and Shape-from-X</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/3d-from-a-single-image-and-shape-from-x.md"><img src="https://img.shields.io/badge/68-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/3d-from-a-single-image-and-shape-from-x.md"><img src="https://img.shields.io/badge/58-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/3d-from-a-single-image-and-shape-from-x.md"><img src="https://img.shields.io/badge/45-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/3d-from-a-single-image-and-shape-from-x.md"><img src="https://img.shields.io/badge/18-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/motion-estimation-matching-and-tracking.md">Motion Estimation, Matching and Tracking</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/motion-estimation-matching-and-tracking.md"><img src="https://img.shields.io/badge/59-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/motion-estimation-matching-and-tracking.md"><img src="https://img.shields.io/badge/42-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/motion-estimation-matching-and-tracking.md"><img src="https://img.shields.io/badge/40-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/motion-estimation-matching-and-tracking.md"><img src="https://img.shields.io/badge/14-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/action-and-event-understanding.md">Action and Event Understanding</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/action-and-event-understanding.md"><img src="https://img.shields.io/badge/30-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/action-and-event-understanding.md"><img src="https://img.shields.io/badge/22-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/action-and-event-understanding.md"><img src="https://img.shields.io/badge/19-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/action-and-event-understanding.md"><img src="https://img.shields.io/badge/1-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/computational-imaging.md">Computational Imaging</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/computational-imaging.md"><img src="https://img.shields.io/badge/37-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/computational-imaging.md"><img src="https://img.shields.io/badge/22-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/computational-imaging.md"><img src="https://img.shields.io/badge/19-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/computational-imaging.md"><img src="https://img.shields.io/badge/8-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/embodied-vision-active-agents-simulation.md">Embodied Vision: Active Agents, Simulation</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/embodied-vision-active-agents-simulation.md"><img src="https://img.shields.io/badge/15-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/embodied-vision-active-agents-simulation.md"><img src="https://img.shields.io/badge/14-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/embodied-vision-active-agents-simulation.md"><img src="https://img.shields.io/badge/8-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/embodied-vision-active-agents-simulation.md"><img src="https://img.shields.io/badge/6-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-retrieval.md">Recognition: Retrieval</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-retrieval.md"><img src="https://img.shields.io/badge/31-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-retrieval.md"><img src="https://img.shields.io/badge/16-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-retrieval.md"><img src="https://img.shields.io/badge/18-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/recognition-retrieval.md"><img src="https://img.shields.io/badge/2-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/transfer-low-shot-continual-long-tail-learning.md">Transfer, Low-Shot, Continual, Long-Tail Learning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/transfer-low-shot-continual-long-tail-learning.md"><img src="https://img.shields.io/badge/110-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/transfer-low-shot-continual-long-tail-learning.md"><img src="https://img.shields.io/badge/78-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/transfer-low-shot-continual-long-tail-learning.md"><img src="https://img.shields.io/badge/72-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/transfer-low-shot-continual-long-tail-learning.md"><img src="https://img.shields.io/badge/7-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/low-level-and-physics-based-vision.md">Low-Level and Physics-based Vision</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/low-level-and-physics-based-vision.md"><img src="https://img.shields.io/badge/115-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/low-level-and-physics-based-vision.md"><img src="https://img.shields.io/badge/71-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/low-level-and-physics-based-vision.md"><img src="https://img.shields.io/badge/78-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/low-level-and-physics-based-vision.md"><img src="https://img.shields.io/badge/9-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/computer-vision-theory.md">Computer Vision Theory</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/computer-vision-theory.md"><img src="https://img.shields.io/badge/9-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/computer-vision-theory.md"><img src="https://img.shields.io/badge/5-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/computer-vision-theory.md"><img src="https://img.shields.io/badge/6-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/computer-vision-theory.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/video-analysis-and-understanding.md">Video Analysis and Understanding</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/video-analysis-and-understanding.md"><img src="https://img.shields.io/badge/51-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/video-analysis-and-understanding.md"><img src="https://img.shields.io/badge/38-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/video-analysis-and-understanding.md"><img src="https://img.shields.io/badge/32-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/video-analysis-and-understanding.md"><img src="https://img.shields.io/badge/6-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/object-pose-estimation-and-tracking.md">Object Pose Estimation and Tracking</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/object-pose-estimation-and-tracking.md"><img src="https://img.shields.io/badge/16-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/object-pose-estimation-and-tracking.md"><img src="https://img.shields.io/badge/10-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/object-pose-estimation-and-tracking.md"><img src="https://img.shields.io/badge/12-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/object-pose-estimation-and-tracking.md"><img src="https://img.shields.io/badge/2-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/3d-shape-modeling-and-processing.md">3D Shape Modeling and Processing</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/3d-shape-modeling-and-processing.md"><img src="https://img.shields.io/badge/46-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/3d-shape-modeling-and-processing.md"><img src="https://img.shields.io/badge/35-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/3d-shape-modeling-and-processing.md"><img src="https://img.shields.io/badge/33-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/3d-shape-modeling-and-processing.md"><img src="https://img.shields.io/badge/11-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/human-poseshape-estimation.md">Human Pose/Shape Estimation</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/human-poseshape-estimation.md"><img src="https://img.shields.io/badge/47-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/human-poseshape-estimation.md"><img src="https://img.shields.io/badge/38-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/human-poseshape-estimation.md"><img src="https://img.shields.io/badge/35-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/human-poseshape-estimation.md"><img src="https://img.shields.io/badge/19-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/transfer-low-shot-and-continual-learning.md">Transfer, Low-Shot, and Continual Learning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/transfer-low-shot-and-continual-learning.md"><img src="https://img.shields.io/badge/12-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/transfer-low-shot-and-continual-learning.md"><img src="https://img.shields.io/badge/7-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/transfer-low-shot-and-continual-learning.md"><img src="https://img.shields.io/badge/6-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/transfer-low-shot-and-continual-learning.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/self--semi--and-unsupervised-learning.md">Self-, Semi-, and Unsupervised Learning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/self--semi--and-unsupervised-learning.md"><img src="https://img.shields.io/badge/12-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/self--semi--and-unsupervised-learning.md"><img src="https://img.shields.io/badge/11-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/self--semi--and-unsupervised-learning.md"><img src="https://img.shields.io/badge/9-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/self--semi--and-unsupervised-learning.md"><img src="https://img.shields.io/badge/1-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/self--semi--meta--unsupervised-learning.md">Self-, Semi-, Meta-, Unsupervised Learning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/self--semi--meta--unsupervised-learning.md"><img src="https://img.shields.io/badge/67-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/self--semi--meta--unsupervised-learning.md"><img src="https://img.shields.io/badge/50-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/self--semi--meta--unsupervised-learning.md"><img src="https://img.shields.io/badge/33-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/self--semi--meta--unsupervised-learning.md"><img src="https://img.shields.io/badge/2-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/photogrammetry-and-remote-sensing.md">Photogrammetry and Remote Sensing</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/photogrammetry-and-remote-sensing.md"><img src="https://img.shields.io/badge/11-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/photogrammetry-and-remote-sensing.md"><img src="https://img.shields.io/badge/9-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/photogrammetry-and-remote-sensing.md"><img src="https://img.shields.io/badge/5-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/photogrammetry-and-remote-sensing.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/efficient-and-scalable-vision.md">Efficient and Scalable Vision</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/efficient-and-scalable-vision.md"><img src="https://img.shields.io/badge/63-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/efficient-and-scalable-vision.md"><img src="https://img.shields.io/badge/49-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/efficient-and-scalable-vision.md"><img src="https://img.shields.io/badge/43-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/efficient-and-scalable-vision.md"><img src="https://img.shields.io/badge/2-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/machine-learning-other-than-deep-learning.md">Machine Learning (other than Deep Learning)</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/machine-learning-other-than-deep-learning.md"><img src="https://img.shields.io/badge/11-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/machine-learning-other-than-deep-learning.md"><img src="https://img.shields.io/badge/6-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/machine-learning-other-than-deep-learning.md"><img src="https://img.shields.io/badge/6-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/machine-learning-other-than-deep-learning.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/document-analysis-and-understanding.md">Document Analysis and Understanding</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/document-analysis-and-understanding.md"><img src="https://img.shields.io/badge/13-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/document-analysis-and-understanding.md"><img src="https://img.shields.io/badge/12-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/document-analysis-and-understanding.md"><img src="https://img.shields.io/badge/9-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/document-analysis-and-understanding.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/biometrics.md">Biometrics</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/biometrics.md"><img src="https://img.shields.io/badge/9-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/biometrics.md"><img src="https://img.shields.io/badge/6-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/biometrics.md"><img src="https://img.shields.io/badge/5-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/biometrics.md"><img src="https://img.shields.io/badge/1-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/datasets-and-evaluation.md">Datasets and Evaluation</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/datasets-and-evaluation.md"><img src="https://img.shields.io/badge/53-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/datasets-and-evaluation.md"><img src="https://img.shields.io/badge/46-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/datasets-and-evaluation.md"><img src="https://img.shields.io/badge/41-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/datasets-and-evaluation.md"><img src="https://img.shields.io/badge/14-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/faces-and-gestures.md">Faces and Gestures</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/faces-and-gestures.md"><img src="https://img.shields.io/badge/45-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/faces-and-gestures.md"><img src="https://img.shields.io/badge/29-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/faces-and-gestures.md"><img src="https://img.shields.io/badge/22-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/faces-and-gestures.md"><img src="https://img.shields.io/badge/7-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/medical-and-biological-vision-cell-microscopy.md">Medical and Biological Vision; Cell Microscopy</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/medical-and-biological-vision-cell-microscopy.md"><img src="https://img.shields.io/badge/40-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/medical-and-biological-vision-cell-microscopy.md"><img src="https://img.shields.io/badge/25-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/medical-and-biological-vision-cell-microscopy.md"><img src="https://img.shields.io/badge/32-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/medical-and-biological-vision-cell-microscopy.md"><img src="https://img.shields.io/badge/3-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/scene-analysis-and-understanding.md">Scene Analysis and Understanding</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/scene-analysis-and-understanding.md"><img src="https://img.shields.io/badge/40-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/scene-analysis-and-understanding.md"><img src="https://img.shields.io/badge/33-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/scene-analysis-and-understanding.md"><img src="https://img.shields.io/badge/30-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/scene-analysis-and-understanding.md"><img src="https://img.shields.io/badge/5-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/multimodal-learning.md">Multimodal Learning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/multimodal-learning.md"><img src="https://img.shields.io/badge/30-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/multimodal-learning.md"><img src="https://img.shields.io/badge/25-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/multimodal-learning.md"><img src="https://img.shields.io/badge/24-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/multimodal-learning.md"><img src="https://img.shields.io/badge/3-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/human-in-the-loop-computer-vision.md">Human-in-the-Loop Computer Vision</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/human-in-the-loop-computer-vision.md"><img src="https://img.shields.io/badge/6-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/human-in-the-loop-computer-vision.md"><img src="https://img.shields.io/badge/6-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/human-in-the-loop-computer-vision.md"><img src="https://img.shields.io/badge/4-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/human-in-the-loop-computer-vision.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/image-and-video-forensics.md">Image and Video Forensics</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/image-and-video-forensics.md"><img src="https://img.shields.io/badge/11-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/image-and-video-forensics.md"><img src="https://img.shields.io/badge/8-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/image-and-video-forensics.md"><img src="https://img.shields.io/badge/5-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/image-and-video-forensics.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/geometric-deep-learning.md">Geometric Deep Learning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/geometric-deep-learning.md"><img src="https://img.shields.io/badge/8-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/geometric-deep-learning.md"><img src="https://img.shields.io/badge/7-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/geometric-deep-learning.md"><img src="https://img.shields.io/badge/4-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/geometric-deep-learning.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-applications-and-systems.md">Vision Applications and Systems</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-applications-and-systems.md"><img src="https://img.shields.io/badge/36-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-applications-and-systems.md"><img src="https://img.shields.io/badge/26-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-applications-and-systems.md"><img src="https://img.shields.io/badge/21-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/vision-applications-and-systems.md"><img src="https://img.shields.io/badge/4-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/machine-learning-and-dataset.md">Machine Learning and Dataset</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/machine-learning-and-dataset.md"><img src="https://img.shields.io/badge/12-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/machine-learning-and-dataset.md"><img src="https://img.shields.io/badge/10-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/machine-learning-and-dataset.md"><img src="https://img.shields.io/badge/10-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/main/machine-learning-and-dataset.md"><img src="https://img.shields.io/badge/3-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td colspan="5" align="center"><i>Workshops</i></td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-scene-graphs-and-graph-representation-learning.md">1st Workshop on Scene Graphs and Graph Representation Learning</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-scene-graphs-and-graph-representation-learning.md"><img src="https://img.shields.io/badge/10-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-scene-graphs-and-graph-representation-learning.md"><img src="https://img.shields.io/badge/8-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-scene-graphs-and-graph-representation-learning.md"><img src="https://img.shields.io/badge/6-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-scene-graphs-and-graph-representation-learning.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/visual-inductive-priors-for-data-efficient-dl-w.md">4th Visual Inductive Priors for Data-Efficient Deep Learning Workshop</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/visual-inductive-priors-for-data-efficient-dl-w.md"><img src="https://img.shields.io/badge/17-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/visual-inductive-priors-for-data-efficient-dl-w.md"><img src="https://img.shields.io/badge/10-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/visual-inductive-priors-for-data-efficient-dl-w.md"><img src="https://img.shields.io/badge/10-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/visual-inductive-priors-for-data-efficient-dl-w.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-what-is-next-in-multimodal-foundation-models.md">What is Next in Multimodal Foundation Models?</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-what-is-next-in-multimodal-foundation-models.md"><img src="https://img.shields.io/badge/9-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-what-is-next-in-multimodal-foundation-models.md"><img src="https://img.shields.io/badge/5-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-what-is-next-in-multimodal-foundation-models.md"><img src="https://img.shields.io/badge/3-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-what-is-next-in-multimodal-foundation-models.md"><img src="https://img.shields.io/badge/1-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-and-challenge-on-deepfake-analysis-and-detection.md">Workshop and Challenge on DeepFake Analysis and Detection</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-and-challenge-on-deepfake-analysis-and-detection.md"><img src="https://img.shields.io/badge/12-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-and-challenge-on-deepfake-analysis-and-detection.md"><img src="https://img.shields.io/badge/5-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-and-challenge-on-deepfake-analysis-and-detection.md"><img src="https://img.shields.io/badge/5-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-and-challenge-on-deepfake-analysis-and-detection.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-cv-in-plant-phenotyping-and-agriculture.md">8th Workshop on Computer Vision in Plant Phenotyping and Agriculture</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-cv-in-plant-phenotyping-and-agriculture.md"><img src="https://img.shields.io/badge/25-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-cv-in-plant-phenotyping-and-agriculture.md"><img src="https://img.shields.io/badge/2-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-cv-in-plant-phenotyping-and-agriculture.md"><img src="https://img.shields.io/badge/3-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-cv-in-plant-phenotyping-and-agriculture.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-new-ideas-in-vision-transformers.md">Workshop on New Ideas in Vision Transformers</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-new-ideas-in-vision-transformers.md"><img src="https://img.shields.io/badge/18-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-new-ideas-in-vision-transformers.md"><img src="https://img.shields.io/badge/8-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-new-ideas-in-vision-transformers.md"><img src="https://img.shields.io/badge/9-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-new-ideas-in-vision-transformers.md"><img src="https://img.shields.io/badge/10-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-representation-learning-with-very-limited-images.md">Representation Learning with very Limited Images: The Potential of Self-, Synthetic- and Formula-Supervision</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-representation-learning-with-very-limited-images.md"><img src="https://img.shields.io/badge/20-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-representation-learning-with-very-limited-images.md"><img src="https://img.shields.io/badge/12-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-representation-learning-with-very-limited-images.md"><img src="https://img.shields.io/badge/3-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-representation-learning-with-very-limited-images.md"><img src="https://img.shields.io/badge/1-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-to-nerf-or-not-to-nerf.md">To NeRF or not to NeRF: A View Synthesis Challenge for Human Heads</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-to-nerf-or-not-to-nerf.md"><img src="https://img.shields.io/badge/2-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-to-nerf-or-not-to-nerf.md"><img src="https://img.shields.io/badge/1-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-to-nerf-or-not-to-nerf.md"><img src="https://img.shields.io/badge/0-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-to-nerf-or-not-to-nerf.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-resource-efficient-dl-for-cv.md">Workshop on Resource Efficient Deep Learning for Computer Vision</a>
</td>
<td colspan="4" rowspan="25" align="center"><i>Will soon be added</i></td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-cv-aided-architectural-design.md">1st Workshop on Computer Vision Aided Architectural Design</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-electronic-cultural-heritage.md">4th Workshop on Electronic Cultural Heritage</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-solving-cad-history-and-parameters-recovery-from-point-clouds-and-3d-scans.md">Solving CAD History and pArameters Recovery from Point Clouds and 3D Scans</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/visual-object-tracking-and-segmentation-challenge.md">Visual Object Tracking and Segmentation Challenge</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-assistive-cv-and-robotics.md">11th Workshop on Assistive Computer Vision and Robotics</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-open-vocabulary-3d-scene-understanding.md">1st Workshop on Open-Vocabulary 3D Scene Understanding</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-recovering-6d-object-pose.md">Recovering 6D Object Pose</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-visual-perception-for-navigation-in-human-environments.md">Visual Perception for Navigation in Human Environments: The JackRabbot Human Motion Forecasting Dataset and Benchmark</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-women-in-cv.md">Women in Computer Vision</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-cv-for-automated-medical-diagnosis.md">2nd Workshop on Computer Vision for Automated Medical Diagnosis</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-closing-the-loop-between-vision-and-language.md">5th Workshop on Closing the Loop Between Vision and Language</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-ai-for-3d-content-creation.md">AI for 3D Content Creation</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-ai-for-creative-video-editing-and-understanding.md">AI for Creative Video Editing and Understanding</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/artificial-social-intelligence-w-and-challenge.md">Artificial Social Intelligence Workshop and Challenge</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/international-w-on-analysis-and-modeling-of-faces-and-gestures.md">International Workshop on Analysis and Modeling of Faces and Gestures</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/the-second-road-workshop-and-challenge.md">2nd ROAD workshop and Challenge: Event Detection for Situation Awareness in Autonomous Driving</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-visual-continual-learning.md">Visual Continual Learning</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-on-adversarial-robustness-in-the-real-world.md">Workshop on Adversarial Robustness in the Real World</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-ai-for-humanitarian-assistance-and-disaster-response.md">Artificial Intelligence for Humanitarian Assistance and Disaster Response</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/bioimage-computing-w.md">BioImage Computing Workshop</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-rob-and-rel-of-auto-vehicles-in-the-open-world.md">Robustness and Reliability of Autonomous Vehicles in the Open-World</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/latinx-in-ai-research-w.md">LatinX in AI Research Workshop</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/cv-for-metaverse-w.md">2nd Computer Vision for Metaverse Workshop</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-and-challenges-for-out-of-distribution-generalization-in-cv.md">2nd Workshop and Challenges for Out-of-Distribution Generalization in Computer Vision</a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-uncertainty-estimation-for-cv.md">Uncertainty Estimation for Computer Vision</a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-uncertainty-estimation-for-cv.md"><img src="https://img.shields.io/badge/14-42BA16" alt="Papers"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-uncertainty-estimation-for-cv.md"><img src="https://img.shields.io/badge/9-b31b1b" alt="Preprints"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-uncertainty-estimation-for-cv.md"><img src="https://img.shields.io/badge/8-1D7FBF" alt="Open Code"></a>
</td>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/w-uncertainty-estimation-for-cv.md"><img src="https://img.shields.io/badge/0-FF0000" alt="Videos"></a>
</td>
</tr>
<tr>
<td>
<a href="https://github.com/DmitryRyumin/ICCV-2023-Papers/blob/main/sections/2023/workshops/vision-and-language-algorithmic-reasoning-w.md">Vision-and-Language Algorithmic Reasoning Workshop</a>
</td>
<td colspan="4" rowspan="1" align="center"><i>Will soon be added</i></td>
</tr>
</tbody>
</table>
Key Terms
<p align="center">
<img width="500" src="https://cdn.jsdelivr.net/gh/DmitryRyumin/ICCV-2023-Papers@main/images/Keywords.png" alt="Key Terms">
</p>
Star History
<p align="center">
<a href="https://star-history.com/#Dmitryryumin/ICCV-2023-Papers&Date" target="_blank">
<img width="500" src="https://api.star-history.com/svg?repos=Dmitryryumin/ICCV-2023-Papers&type=Date" alt="Star History Chart">
</a>
</p>