<div align="center">

🤖 AGRNav: Efficient and Energy-Saving Autonomous Navigation for Air-Ground Robots in Occlusion-Prone Environments

</div>


If you find this work helpful, please give us a free ⭐️. Your support is truly appreciated.

<p align = "center"> <img src="figs/sim1.gif" width = "400" height = "260" border="1" style="display:inline;"/> </p>

If you find this work useful in your research, please consider citing:

```bibtex
@INPROCEEDINGS{wang2024agrnav,
  author={Wang, Junming and Sun, Zekai and Guan, Xiuxian and Shen, Tianxiang and Zhang, Zongyuan and Duan, Tianyang and Huang, Dong and Zhao, Shixiong and Cui, Heming},
  booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
  title={AGRNav: Efficient and Energy-Saving Autonomous Navigation for Air-Ground Robots in Occlusion-Prone Environments},
  year={2024},
  pages={11133-11139}
}
```

🔧 Hardware List

<div align="center">

| Hardware | Link |
| --- | --- |
| AMOV Lab P600 UAV | link |
| AMOV Lab Allapark1-Jetson Xavier NX | link |
| Wheeltec R550 ROS Car | link |
| Intel RealSense D435i | link |
| Intel RealSense T265 | link |
| TFmini Plus | link |

</div>

❗ Considering that visual positioning is prone to drift along the Z-axis, we added a TFmini Plus for height measurement. Additionally, GNSS-RTK positioning is recommended for better localization accuracy.
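As a rough illustration of how such a height correction can be wired in, below is a minimal ROS node sketch that overrides the Z estimate of the visual odometry with the TFmini Plus range reading. The topic names (`/tfmini/range`, `/vio/odometry`, `/odom_fused`) are assumptions for illustration, not the topics used in this repository.

```python
#!/usr/bin/env python
# Minimal sketch: substitute the drift-prone visual Z estimate with the
# TFmini Plus range reading. All topic names below are assumptions, not
# the ones used in AGRNav.
import rospy
from sensor_msgs.msg import Range
from nav_msgs.msg import Odometry

latest_range = None  # most recent TFmini Plus reading, in meters

def range_cb(msg):
    global latest_range
    latest_range = msg.range

def odom_cb(msg):
    # Pass the visual odometry through, but replace the Z position with
    # the lidar altimeter reading whenever one is available.
    if latest_range is not None:
        msg.pose.pose.position.z = latest_range
    fused_pub.publish(msg)

rospy.init_node('height_override')
rospy.Subscriber('/tfmini/range', Range, range_cb)    # assumed topic
rospy.Subscriber('/vio/odometry', Odometry, odom_cb)  # assumed topic
fused_pub = rospy.Publisher('/odom_fused', Odometry, queue_size=10)
rospy.spin()
```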

🤑 Our customized Aerial-Ground Robot cost about RMB 70,000.

🛠️ Installation

The code was tested with Python 3.6.9, PyTorch 1.10.0+cu111, and torchvision 0.11.2+cu111.

Please follow the instructions here to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.
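For reference, the tested versions can be installed with CUDA 11.1 wheels from the official PyTorch index (adjust for your platform per the official instructions):

```bash
pip install torch==1.10.0+cu111 torchvision==0.11.2+cu111 \
    -f https://download.pytorch.org/whl/torch_stable.html
# Verify that CUDA is visible to PyTorch:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```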

1. Clone the repository locally:
   ```bash
   git clone https://github.com/jmwang0117/AGRNav.git
   ```
2. We recommend running the project in Docker, which reduces the burden of configuring the environment. The Dockerfile is included in the repository; build the image with:
   ```bash
   docker build . -t skywalker_robot -f Dockerfile
   ```
3. After the build completes, use our one-click startup script in the same directory:
   ```bash
   bash create_container.sh
   ```
4. Next, enter the container:
   ```bash
   docker exec -it robot bash
   ```
5. Clone the repository again, this time inside the container:
   ```bash
   git clone https://github.com/jmwang0117/AGRNav.git
   ```
6. Since the point cloud needs to be saved temporarily, please check the save path in the following files (a hedged illustration follows after this list):
   ```
   /root/AGRNav/src/perception/launch/inference.launch
   /root/AGRNav/src/perception/SCONet/network/data/SemanticKITTI.py
   /root/AGRNav/src/perception/script/pointcloud_listener.py
   ```
7. The SCONet pre-trained model is in the folder below:
   ```
   /root/AGRNav/src/perception/SCONet/network/weights
   ```
8. If you want to use our 3D AGR model, please download the AGR model to the folder below:
   ```
   /root/AGRNav/src/uav_simulator/Utils/odom_visualization/meshes
   ```
   Then change the mesh file name on line 503 of the following file to AGR.dae:
   ```
   /root/AGRNav/src/uav_simulator/Utils/odom_visualization/src/odom_visualization.cpp
   ```
9. Build and run:
   ```bash
   catkin_make
   source devel/setup.bash
   sh src/run.sh
   ```
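As an illustration of the kind of path setting that step 6 refers to, here is a minimal sketch of a listener that dumps incoming clouds to a temporary directory. The parameter name `save_path` and the topic `/points` are hypothetical names chosen for illustration; check the three files listed in step 6 for the actual ones.

```python
#!/usr/bin/env python
# Minimal sketch of a point-cloud listener with a configurable temporary
# save path. 'save_path' and '/points' are hypothetical names; the real
# paths and topics live in the files listed in step 6.
import os
import rospy
from sensor_msgs.msg import PointCloud2

rospy.init_node('pointcloud_listener')
save_path = rospy.get_param('~save_path', '/root/AGRNav/tmp/pointclouds')
os.makedirs(save_path, exist_ok=True)

count = [0]

def cloud_cb(msg):
    # Persist the raw serialized cloud so it can be read back later.
    fname = os.path.join(save_path, 'cloud_%06d.bin' % count[0])
    with open(fname, 'wb') as f:
        f.write(msg.data)
    count[0] += 1

rospy.Subscriber('/points', PointCloud2, cloud_cb)
rospy.spin()
```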

With that, the project is up and running. Enjoy!

💽 Dataset

🏆 Acknowledgement

Many thanks to these excellent open source projects: