<p align="center"> <img src="resources/logo.png" width="100%"> </p>

SAD: Segment Any RGBD

🎉🎉🎉 Welcome to the Segment Any RGBD GitHub repository! 🎉🎉🎉

🚀🚀🚀 New! We have released the technical report! 🚀🚀🚀 [arxiv]


🤗🤗🤗 Segment AnyRGBD is a SAM-based toolbox for segmenting rendered depth images! Don't forget to star this repo if you find it interesting!
Hugging Face Spaces

| Input to SAM (RGB or Rendered Depth Image) | SAM Masks with Class and Semantic Masks | 3D Visualization for SAM Masks with Class and Semantic Masks |
| :---: | :---: | :---: |
| <img src="resources/demos/sailvos_1/000160.bmp" width="100%"> | <img src="resources/demos/sailvos_1/RGB_Semantic_SAM_2D.gif" width="100%"> | <img src="resources/demos/sailvos_1/RGB_3D_All.gif" width="100%"> |
| <img src="resources/demos/sailvos_1/Depth_plasma.png" width="100%"> | <img src="resources/demos/sailvos_1/Depth_Semantic_SAM_2D.gif" width="100%"> | <img src="resources/demos/sailvos_1/Depth_3D_All.gif" width="100%"> |

🥳 Introduction

We find that humans can naturally identify objects from the visualization of a depth map, so we first map the depth map ([H, W]) to RGB space ([H, W, 3]) with a colormap function, and then feed the rendered depth image into SAM. Compared to the RGB image, the rendered depth image ignores texture and focuses on geometry. In other SAM-based projects such as SSA, Anything-3D, and SAM 3D, the inputs to SAM are all RGB images; we are the first to use SAM to extract geometry information directly. The following figure shows that depth maps rendered with different colormap functions yield different SAM results.

<p align="center"> <img src="resources/examples_2.png" width="100%"> </p>
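For reference, here is a minimal sketch of this rendering step, assuming a matplotlib colormap such as plasma (the `render_depth` helper is hypothetical; the repo's actual preprocessing may differ):

```python
import numpy as np
import matplotlib.pyplot as plt

def render_depth(depth: np.ndarray, colormap: str = "plasma") -> np.ndarray:
    """Render a [H, W] depth map as a [H, W, 3] uint8 RGB image.

    Hypothetical helper for illustration; the repo's preprocessing may differ.
    """
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)  # normalize depth to [0, 1]
    rgb = plt.get_cmap(colormap)(d)[..., :3]        # apply colormap, drop alpha channel
    return (rgb * 255).astype(np.uint8)
```

The rendered image can then be passed to SAM exactly like an ordinary RGB photo.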

😎 Method

In this repo, we provide two alternatives: feeding either the RGB image or the rendered depth image to SAM. In each mode, the user obtains both the semantic masks (one color per class) and the SAM masks with class labels. The overall pipeline is shown in the figure below. We use OVSeg for zero-shot semantic segmentation.

<p align="center"> <img src="resources/flowchart_3.png" width="100%"> </p>
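As an illustrative sketch of how the two outputs can be combined, one common choice is a per-mask majority vote: each class-agnostic SAM mask takes the most frequent class among the OVSeg per-pixel predictions it covers. The `label_sam_masks` helper below is hypothetical, not the repo's exact code:

```python
import numpy as np

def label_sam_masks(sam_masks, semantic_map):
    """Attach a class id to each class-agnostic SAM mask by majority vote
    over a per-pixel semantic prediction (e.g. from OVSeg).

    sam_masks:    list of [H, W] boolean arrays produced by SAM
    semantic_map: [H, W] integer array of per-pixel class ids
    Returns a list of (mask, class_id) pairs.
    Hypothetical sketch; the repo's fusion logic may differ.
    """
    labeled = []
    for mask in sam_masks:
        if not mask.any():
            continue  # skip empty masks
        ids, counts = np.unique(semantic_map[mask], return_counts=True)
        labeled.append((mask, int(ids[np.argmax(counts)])))
    return labeled
```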

🤩 Comparison

<p align="center"> <img src="resources/comparison.png" width="80%"> </p>

🔥 Demos

SAIL-VOS 3D Dataset

| Input to SAM (RGB or Rendered Depth Image) | SAM Masks with Class and Semantic Masks | 3D Visualization for SAM Masks with Class and Semantic Masks |
| :---: | :---: | :---: |
| <img src="resources/demos/sailvos_3/rgb_000100.bmp" width="100%"> | <img src="resources/demos/sailvos_3/RGB_Semantic_SAM_2D.gif" width="100%"> | <img src="resources/demos/sailvos_3/RGB_3D_All.gif" width="100%"> |
| <img src="resources/demos/sailvos_3/Depth_rendered.png" width="100%"> | <img src="resources/demos/sailvos_3/Depth_Semantic_SAM_2D.gif" width="100%"> | <img src="resources/demos/sailvos_3/Depth_3D_All.gif" width="100%"> |
| <img src="resources/demos/sailvos_2/000540.bmp" width="100%"> | <img src="resources/demos/sailvos_2/RGB_Semantic_SAM_2D.gif" width="100%"> | <img src="resources/demos/sailvos_2/RGB_3D_All.gif" width="100%"> |
| <img src="resources/demos/sailvos_2/Depth_plasma.png" width="100%"> | <img src="resources/demos/sailvos_2/Depth_Semantic_SAM_2D.gif" width="100%"> | <img src="resources/demos/sailvos_2/Depth_3D_All.gif" width="100%"> |

ScanNet v2 Dataset

| Input to SAM (RGB or Rendered Depth Image) | SAM Masks with Class and Semantic Masks | 3D Visualization for SAM Masks with Class and Semantic Masks |
| :---: | :---: | :---: |
| <img src="resources/demos/scannet_1/5560.jpg" width="100%"> | <img src="resources/demos/scannet_1/RGB_Semantic_SAM_2D.gif" width="100%"> | <img src="resources/demos/scannet_1/RGB_3D_All.gif" width="100%"> |
| <img src="resources/demos/scannet_1/Depth_rendered.png" width="100%"> | <img src="resources/demos/scannet_1/Depth_Semantic_SAM_2D.gif" width="100%"> | <img src="resources/demos/scannet_1/Depth_3D_All.gif" width="100%"> |
| <img src="resources/demos/scannet_2/1660.jpg" width="100%"> | <img src="resources/demos/scannet_2/RGB_Semantic_SAM_2D.gif" width="100%"> | <img src="resources/demos/scannet_2/RGB_3D_All.gif" width="100%"> |
| <img src="resources/demos/scannet_2/Depth_rendered.png" width="100%"> | <img src="resources/demos/scannet_2/Depth_Semantic_SAM_2D.gif" width="100%"> | <img src="resources/demos/scannet_2/Depth_3D_All.gif" width="100%"> |

⚙️ Installation

Please see the installation guide.

💫 Try Demo

🤗 Try Demo on Hugging Face

Hugging Face Spaces

🤗 Try Demo Locally

We provide the UI (ui.py) and example inputs (/UI/) to reproduce the demos above. We use the OVSeg checkpoint ovseg_swinbase_vitL14_ft_mpt.pth for zero-shot semantic segmentation and the SAM checkpoint sam_vit_h_4b8939.pth. Place both under the repo root, then launch the UI on your own machine:

```bash
python ui.py
```

Simply click one of the Examples at the bottom and the inputs will be filled in automatically. Then click 'Send' to generate and visualize the results. Inference takes around 2 minutes for a ScanNet example and around 3 minutes for a SAIL-VOS 3D example.

<p align="center"> <img src="resources/UI_Output.png" width="100%"> </p>

Data Preparation

Please download SAIL-VOS 3D and ScanNet to try more demos.

LICENSE


This repo is developed based on OVSeg, which is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).


However, portions of the project are under separate license terms: CLIP and ZSSEG are licensed under the MIT License; MaskFormer is licensed under CC BY-NC; openclip is licensed under the license in its repo; SAM is licensed under the Apache License.