<div align="center"> <h1>🤖 OMEGA</h1> <h2>Efficient Occlusion-Aware Navigation for Air-Ground Robot in Dynamic Environments via State Space Model</h2> <br> <a href='https://arxiv.org/abs/2408.10618'><img src='https://img.shields.io/badge/arXiv-OMEGA-green' alt='arxiv'></a> <a href='https://jmwang0117.github.io/OMEGA/'><img src='https://img.shields.io/badge/Project_Page-OMEGA-green' alt='Project Page'></a> </div>

🤗 AGR-Family Works

🎉 Chinese Media Reports/Interpretations

📢 News

| Simulation Results | Experiment Log |
|--------------------|----------------|
| OMEGA              | link           |
| AGRNav             | link           |
| TABV               | link           |

| OccMamba Results | Experiment Log |
|------------------|----------------|
| OccMamba on the SemanticKITTI hidden official test dataset | link |
| OccMamba test log | link |
| OccMamba evaluation log | link |

📜 Introduction

OMEGA is the first navigation system tailored for air-ground robots (AGRs) in dynamic environments, with a focus on occlusion-free mapping and pathfinding. Its OccMamba module processes point clouds and continuously updates the local map, identifying obstacles in occluded regions ahead of time. Building on these up-to-date maps, AGR-Planner computes efficient and effective routes through dynamic environments.
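For intuition, the perception-to-planning loop can be pictured roughly as follows. This is an illustrative sketch only: `occmamba`, `planner`, and the robot interface are hypothetical placeholders, not this repository's actual API.

```python
# Hypothetical sketch of OMEGA's perception-to-planning loop (illustrative
# only; the OccMamba/AGR-Planner interfaces here are placeholders).
def navigation_step(robot, occmamba, planner, local_map):
    cloud = robot.lidar_scan()                     # latest point cloud
    local_map = occmamba.update(local_map, cloud)  # predict occupancy in occluded areas
    path = planner.plan(local_map, robot.pose(), robot.goal())  # aerial/ground route
    robot.follow(path)
    return local_map
```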

<p align="center"> <img src="misc/head.png" width = 60% height = 60%/> </p> <br>
If you find OMEGA useful, please cite:

```bibtex
@article{wang2024omega,
  title={Omega: Efficient Occlusion-Aware Navigation for Air-Ground Robots in Dynamic Environments Via State Space Model},
  author={Wang, Junming and Guan, Xiuxian and Sun, Zekai and Shen, Tianxiang and Huang, Dong and Liu, Fangming and Cui, Heming},
  journal={IEEE Robotics and Automation Letters},
  year={2024},
  publisher={IEEE}
}
```
<br>

Please star ⭐️ this project if it helps you. We put great effort into developing and maintaining it 😁.

🔧 Hardware List

| Hardware                            | Link |
|-------------------------------------|------|
| AMOV Lab P600 UAV                   | link |
| AMOV Lab Allapark1-Jetson Xavier NX | link |
| Wheeltec R550 ROS Car               | link |
| Intel RealSense D435i               | link |
| Intel RealSense T265                | link |
| TFmini Plus                         | link |

❗ Since visual positioning is prone to drift along the Z-axis, we added a TFmini Plus for height measurement. GNSS-RTK positioning is also recommended for better localization accuracy.
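For illustration, a simple complementary filter shows how an absolute range reading can correct a drifting visual height estimate. This is not code from this repository, just a sketch of the idea.

```python
# Illustrative only: blend the drifting visual height estimate with an
# absolute TFmini Plus range reading via a complementary filter.
def fused_height(z_visual: float, z_range: float, alpha: float = 0.1) -> float:
    # alpha weights the drift-free range reading against the smooth visual estimate
    return (1.0 - alpha) * z_visual + alpha * z_range
```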

🤑 Our customized aerial-ground robot costs about RMB 70,000 in total.

🛠️ Installation

```bash
conda create -n occmamba python=3.10 -y
conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia -y
pip install spconv-cu120
pip install tensorboardX
pip install dropblock
pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv -f https://data.pyg.org/whl/torch-2.1.0+cu121.html
pip install -U openmim
mim install mmcv-full
pip install mmcls==0.25.0
```

> [!NOTE]
> Please refer to Vision-Mamba for more installation information.
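To quickly verify the environment, a minimal sanity check (a sketch, assuming the packages above installed successfully) is:

```python
# Minimal environment sanity check (a sketch; assumes the packages above).
import torch
import spconv.pytorch as spconv  # from spconv-cu120
import torch_scatter             # from the PyG wheels

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
device = "cuda" if torch.cuda.is_available() else "cpu"
print("test tensor:", torch.randn(2, 3, device=device).shape)
```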

💽 Dataset

Please download the Semantic Scene Completion dataset (v1.1) from the SemanticKITTI website and extract it.

Alternatively, you can use the voxelizer to generate the semantic scene completion ground truth.

The dataset folder should be organized as follows.

```
SemanticKITTI
├── dataset
│   ├── sequences
│   │  ├── 00
│   │  │  ├── labels
│   │  │  ├── velodyne
│   │  │  ├── voxels
│   │  │  ├── [OTHER FILES OR FOLDERS]
│   │  ├── 01
│   │  ├── ... ...
```
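As a quick check that your extraction matches this layout, a small script can count the scans per sequence. The root path here is an assumption; adjust it to your setup.

```python
# Sanity-check the SemanticKITTI layout above (the root path is an assumption).
from pathlib import Path

root = Path("SemanticKITTI/dataset/sequences")
for seq in sorted(p for p in root.iterdir() if p.is_dir()):
    n_scans = len(list((seq / "velodyne").glob("*.bin")))
    has_voxels = (seq / "voxels").is_dir()
    print(f"sequence {seq.name}: {n_scans} scans, voxels={'yes' if has_voxels else 'no'}")
```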

🤗 Getting Started

Clone the repository:

```bash
git clone https://github.com/jmwang0117/Occ-Mamba.git
```

Train OccMamba Net

```bash
cd <root dir of this repo>
bash run_train.sh
```

Validation

```bash
cd <root dir of this repo>
bash run_val.sh
```

Test

Since SemanticKITTI contains a hidden test set, we provide a test routine that saves the predicted outputs in the same format as SemanticKITTI, so they can be compressed and uploaded to the SemanticKITTI Semantic Scene Completion Benchmark. You can specify which checkpoints to use for testing; we used the ones that performed best on the validation set during training. Run the test with the following command.

```bash
cd <root dir of this repo>
bash run_test.sh
```
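After testing, the predictions can be compressed for upload. The sketch below is a hedged example: the `output` directory and the `sequences/<seq>/predictions/*.label` layout are assumptions, so check `run_test.sh` and the benchmark instructions for the exact paths and format.

```python
# A sketch of packaging predictions for the benchmark upload. The "output"
# directory and layout (sequences/<seq>/predictions/*.label) are assumptions.
import zipfile
from pathlib import Path

out = Path("output")  # hypothetical prediction directory
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for label in sorted(out.rglob("predictions/*.label")):
        zf.write(label, label.relative_to(out))  # keep the relative layout
print("wrote submission.zip")
```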

🏆 Acknowledgement

Many thanks to these excellent open-source projects: