<div align="center"> <h1>🤖 OMEGA</h1> <h2>Efficient Occlusion-Aware Navigation for Air-Ground Robot in Dynamic Environments via State Space Model</h2> <br> <a href='https://arxiv.org/abs/2408.10618'><img src='https://img.shields.io/badge/arXiv-OMEGA-green' alt='arxiv'></a> <a href='https://jmwang0117.github.io/OMEGA/'><img src='https://img.shields.io/badge/Project_Page-OMEGA-green' alt='Project Page'></a> </div>🤗 AGR-Family Works
- OMEGA (RA-L 2024.12): The First AGR-Tailored Dynamic Navigation System.
- HE-Nav (RA-L 2024.09): The First AGR-Tailored ESDF-Free Navigation System.
- AGRNav (ICRA 2024.01): The First AGR-Tailored Occlusion-Aware Navigation System.
## 🎉 Chinese Media Reports/Interpretations
- AMOV Lab Research Scholarship -- 2024.11: 5000 RMB
- AMOV Lab Research Scholarship -- 2024.10: 5000 RMB
## 📢 News
- [03/07/2024]: OMEGA's simulation logs are available for download:

| Simulation Results | Experiment Log |
|---|---|
| OMEGA | link |
| AGRNav | link |
| TABV | link |
- [01/07/2024]: OccMamba's test and evaluation logs are available for download:

| OccMamba Results | Experiment Log |
|---|---|
| OccMamba on the SemanticKITTI hidden official test dataset | link |
| OccMamba test log | link |
| OccMamba evaluation log | link |
- [28/06/2024]: The pre-trained model can be downloaded at OneDrive
- [25/06/2024]: We have released the code for OccMamba, a key component of OMEGA!
## 📜 Introduction
OMEGA is the first navigation system tailored for air-ground robots (AGRs) in dynamic environments, with a focus on occlusion-free mapping and pathfinding. Its OccMamba module processes point clouds and continuously updates the local map, predicting obstacles inside occluded regions before the robot reaches them. Building on these up-to-date maps, AGR-Planner plans efficient routes through dynamic environments.
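In pseudocode, the per-cycle data flow looks roughly like the sketch below. This is only a conceptual illustration with hypothetical names, not the released OMEGA API:

```python
# Conceptual sketch of OMEGA's perception-planning loop (hypothetical names, not the released API).
from typing import Callable

def navigation_cycle(
    scan,                         # latest point cloud from the onboard sensors
    local_map,                    # rolling local occupancy map around the robot
    goal,                         # target pose
    occmamba: Callable,           # OccMamba: point cloud -> completed occupancy (occluded regions filled)
    planner: Callable,            # AGR-Planner: (map, goal) -> path
):
    completed_occupancy = occmamba(scan)   # predict occupancy, including occluded areas
    local_map.update(completed_occupancy)  # keep the local map current before planning
    return planner(local_map, goal)        # plan an aerial-ground path on the updated map
```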
<p align="center"> <img src="misc/head.png" width = 60% height = 60%/> </p> <br>@article{wang2024omega,
title={Omega: Efficient Occlusion-Aware Navigation for Air-Ground Robots in Dynamic Environments Via State Space Model},
author={Wang, Junming and Guan, Xiuxian and Sun, Zekai and Shen, Tianxiang and Huang, Dong and Liu, Fangming and Cui, Heming},
journal={IEEE Robotics and Automation Letters},
year={2024},
publisher={IEEE}
}
<br>
Please kindly star ⭐️ this project if it helps you. We put great effort into developing and maintaining it 😁.
## 🔧 Hardware List
<div align="center">Hardware | Link |
---|---|
AMOV Lab P600 UAV | link |
AMOV Lab Allapark1-Jetson Xavier NX | link |
Wheeltec R550 ROS Car | link |
Intel RealSense D435i | link |
Intel RealSense T265 | link |
TFmini Plus | link |
❗ Since visual positioning is prone to drift along the Z-axis, we added a TFmini Plus rangefinder for height measurement (see the fusion sketch below). Additionally, GNSS-RTK positioning is recommended for better localization accuracy.
🤑 Our customized Aerial-Ground Robot cost about RMB 70,000.
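The snippet below is a minimal, illustrative sketch of how a downward rangefinder reading can correct the drifting Z estimate from visual odometry with a simple complementary filter. It is not part of the OMEGA codebase; all names and the blending weight are hypothetical.

```python
# Illustrative complementary filter (hypothetical, not the OMEGA implementation):
# blend the visual-odometry height with the TFmini Plus range reading to limit Z drift.
import math

def fuse_height(vio_z: float, tfmini_range: float, roll: float, pitch: float,
                alpha: float = 0.8) -> float:
    """Return a corrected height estimate in meters.

    vio_z        -- height from visual odometry (drift-prone)
    tfmini_range -- raw distance from the downward-facing TFmini Plus
    roll, pitch  -- body attitude in radians, used to project the range onto the vertical
    alpha        -- weight given to the rangefinder (tuned empirically)
    """
    # Project the slanted range measurement onto the world Z axis.
    range_height = tfmini_range * math.cos(roll) * math.cos(pitch)
    # Complementary blend: trust the rangefinder for absolute height,
    # keep part of the smoother visual-odometry estimate.
    return alpha * range_height + (1.0 - alpha) * vio_z
```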
## 🛠️ Installation
```bash
conda create -n occmamba python=3.10 -y
conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia -y
pip install spconv-cu120
pip install tensorboardX
pip install dropblock
pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv -f https://data.pyg.org/whl/torch-2.1.0+cu121.html
pip install -U openmim
mim install mmcv-full
pip install mmcls==0.25.0
```
> [!NOTE]
> Please refer to Vision-Mamba for more installation information.
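After installation, a quick sanity check like the one below can confirm that PyTorch sees the GPU and that the sparse-convolution and scatter packages import correctly. This is a suggested snippet, not a script shipped with the repository.

```python
# Quick environment sanity check (suggested snippet, not provided by the repo).
import torch
import spconv.pytorch as spconv   # sparse convolution backend (spconv-cu120)
import torch_scatter              # scatter ops installed from the PyG wheel index

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

# A tiny tensor op confirms the toolchain end to end.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(4, 4, device=device)
print("Matmul OK:", (x @ x).shape)
```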
## 💽 Dataset
- SemanticKITTI
Please download the Semantic Scene Completion dataset (v1.1) from the SemanticKITTI website and extract it.
Alternatively, you can use the voxelizer to generate the semantic scene completion ground truth yourself.
The dataset folder should be organized as follows.
```
SemanticKITTI
├── dataset
│   ├── sequences
│   │   ├── 00
│   │   │   ├── labels
│   │   │   ├── velodyne
│   │   │   ├── voxels
│   │   │   ├── [OTHER FILES OR FOLDERS]
│   │   ├── 01
│   │   ├── ... ...
```
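A short check like the one below can verify that each sequence has the expected `velodyne`, `labels`, and `voxels` folders before training. It is a convenience script we suggest, not one shipped with the repo; adjust `DATASET_ROOT` to your own path.

```python
# Suggested layout check (not part of the repository). Point DATASET_ROOT at your copy.
from pathlib import Path

DATASET_ROOT = Path("SemanticKITTI/dataset/sequences")  # hypothetical location
TRAIN_VAL_SEQUENCES = [f"{i:02d}" for i in range(11)]   # 00-10 are the sequences with ground truth

for seq in TRAIN_VAL_SEQUENCES:
    seq_dir = DATASET_ROOT / seq
    missing = [d for d in ("velodyne", "labels", "voxels") if not (seq_dir / d).is_dir()]
    status = "OK" if not missing else f"missing: {', '.join(missing)}"
    print(f"sequence {seq}: {status}")
```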
## 🤗 Getting Started
Clone the repository:
```bash
git clone https://github.com/jmwang0117/Occ-Mamba.git
```
Train OccMamba Net
```bash
$ cd <root dir of this repo>
$ bash run_train.sh
```
Validation
```bash
$ cd <root dir of this repo>
$ bash run_val.sh
```
Test
Since SemanticKITTI contains a hidden test set, we provide a test routine that saves the predicted outputs in the same format as SemanticKITTI, so they can be compressed and uploaded to the SemanticKITTI Semantic Scene Completion Benchmark. You can specify which checkpoint to use for testing; we used the one that performed best on the validation set during training. For testing, use the following command.
```bash
$ cd <root dir of this repo>
$ bash run_test.sh
```
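For the benchmark upload, the sketch below shows one way to zip the predictions. The paths are our assumptions, not taken from this repo's scripts; adapt them to wherever `run_test.sh` writes its output, and consider validating the archive with the official semantic-kitti-api tools before submitting.

```python
# Packaging sketch (hypothetical paths -- adapt to where run_test.sh writes its output).
# Assumed archive layout: sequences/<seq>/predictions/*.label (check the benchmark instructions).
import zipfile
from pathlib import Path

PRED_ROOT = Path("output/sequences")       # assumed prediction folder produced by the test run
ARCHIVE = Path("occmamba_submission.zip")

count = 0
with zipfile.ZipFile(ARCHIVE, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    for label_file in sorted(PRED_ROOT.glob("*/predictions/*.label")):
        # Store files relative to the archive root as sequences/<seq>/predictions/<frame>.label
        zf.write(label_file, Path("sequences") / label_file.relative_to(PRED_ROOT))
        count += 1

print(f"Wrote {count} prediction files to {ARCHIVE}")
```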
## 🏆 Acknowledgement
Many thanks to these excellent open source projects: