
<h1 align="center"> Active Vision and Zero-Shot Learning for Enhancing Agricultural Environment Perception </h1>

<p align="center">
Implementation of the research project developed during the Master's thesis of Michele Carlo La Greca at Politecnico di Milano, corresponding to the work related to the <a href="https://arxiv.org/abs/2409.12602">preprint</a> available on <i>arXiv</i>.
</p>

<p align="center">
<img src="images/main.png" alt="Main Image" width="80%">
</p>

<hr>


<!-- ![GitHubWorkflowStatus](https://img.shields.io/github/actions/workflow/status/AIRLab-POLIMI/active-vision/main.yml?logo=github&style=flat-square) [![GitHubcontributors](https://img.shields.io/github/contributors/AIRLab-POLIMI/active-vision?style=flat-square)](CONTRIBUTING.md) [![License](https://img.shields.io/github/license/AIRLab-POLIMI/active-vision?style=flat-square)](LICENSE) --> <hr>

## Table of Contents

- [Introduction](#introduction)
- [Installation](#installation)
- [Visualization](#visualization)
- [OctoMap Creation](#octomap-creation)
- [Active Vision](#active-vision)
  - [Simulation (Without mobile base)](#simulation-without-mobile-base)
  - [Simulation (With mobile base)](#simulation-with-mobile-base)
  - [Real-World (With mobile base)](#real-world-with-mobile-base)

<hr>

## Introduction

Agriculture is essential to society. Robotics can boost agricultural productivity, but to do so robots must accurately perceive the unstructured, dynamic, and possibly occluded environment of plants and crops. This complexity poses significant challenges for traditional management methods. The proposed research introduces an approach that tackles the challenges of fruit perception through Active Vision (AV). Instead of relying on passive observation, AV lets robots actively perceive, explore, and reconstruct their surroundings at run time by planning the optimal camera viewpoint with Next-Best View (NBV) planning, which maximizes the information gained about plants and crops. This ensures that even hidden or occluded parts of the environment are effectively captured.
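As a rough illustration of the NBV idea only (not the planner implemented in this repository), the sketch below greedily selects, among a set of candidate camera positions, the one expected to reveal the most still-unknown voxels. The gain metric, function names, and parameters are simplified placeholders.

```python
import numpy as np

def expected_information_gain(candidate_position, unknown_voxel_centers, max_range=1.0):
    """Toy gain metric: number of still-unknown voxels within sensor range of the
    candidate viewpoint. A real NBV planner would also reason about visibility,
    occlusions, and motion cost."""
    distances = np.linalg.norm(unknown_voxel_centers - candidate_position, axis=1)
    return int(np.sum(distances < max_range))

def next_best_view(candidate_positions, unknown_voxel_centers):
    # Greedy NBV: evaluate every candidate viewpoint and keep the most informative one.
    gains = [expected_information_gain(p, unknown_voxel_centers) for p in candidate_positions]
    best = int(np.argmax(gains))
    return candidate_positions[best], gains[best]
```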

This work applies Zero-Shot Learning (ZSL) to provide semantic segmentation of the scene, enabling the robot to generalize and adapt to different crops or environmental features without requiring training data specific to each scenario. By combining 3D and semantic data, the robot reconstructs a detailed, semantic, and context-aware map of the environment, allowing it to strategically adjust its movements and positioning and to interact with the environment more effectively.
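To make the combination of semantic and 3D data more concrete, the illustrative function below back-projects the pixels selected by a segmentation mask into 3D camera-frame points using the pinhole model; the function name and parameters are placeholders and do not correspond to this repository's API.

```python
import numpy as np

def mask_to_segmented_points(depth_m, mask, fx, fy, cx, cy):
    """Back-project the pixels selected by a (zero-shot) segmentation mask into
    3D camera-frame points. depth_m is an HxW depth image in meters, mask is an
    HxW boolean array, and (fx, fy, cx, cy) are the pinhole camera intrinsics."""
    v, u = np.nonzero(mask & (depth_m > 0))   # pixel rows/cols inside the mask with valid depth
    z = depth_m[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)        # Nx3 labeled points for the semantic map
```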

This work focuses on the following contributions:

  1. Developed a modular architecture in ROS 2, C++, and Python for Active Vision in agricultural robotics, addressing the challenge of detecting occluded fruits.

  2. To the best of the author’s knowledge, this is the first work to integrate Zero-Shot Learning with Active Vision exploration, enabling environment-independent operation in agriculture.

  3. Conducted extensive evaluations both in simulation and real-world scenarios, in contrast to state-of-the-art methods that primarily focus on simulated environments with supervised learning.

  4. Set a benchmark by releasing open-source code, addressing the lack of reproducibility and code availability in the context of Active Vision for agricultural robotics.

## Installation

<details>
<summary> Step 1: Install the ROS 2 Humble distribution for Ubuntu 22.04 and other useful elements. </summary>
</details>
<details>
<summary> Step 2: Set up the real-robot configuration (for the Igus ReBeL and a standalone RealSense 435). </summary>
</details>
<details>
<summary> Step 3: Install the dependencies and requirements of the Igus ReBeL ROS 2 repository. </summary>
</details>
<details>
<summary> Step 4: Clone the active vision branch of the <a href="https://github.com/AIRLab-POLIMI/ros2-igus-rebel">Igus ReBeL ROS 2</a> repository. </summary>
</details>
<details>
<summary> Step 5: Install the dependencies of this repository. </summary>
</details>
<details>
<summary> Step 6: Clone the main branch of this repository. </summary>
</details>

## Visualization


## OctoMap Creation

The first functionality of the architecture is to create and continuously update the occupancy and semantic OctoMap from the input data. A decentralized approach is used, based on topic communication between the nodes responsible for the various functionalities, such as segmentation, point cloud creation, and OctoMap creation. These nodes are started from a ROS 2 launch file and spin indefinitely until a termination command is issued. The methodology is based on four nodes communicating through topics: the sensors node, the segmentation node, the point cloud node, and the OctoMap node.
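A minimal ROS 2 launch sketch of this decentralized layout is shown below; the package and executable names are placeholders, not the actual launch file shipped in this repository.

```python
# Illustrative ROS 2 launch file: the four decentralized nodes are started together
# and communicate over topics. Package and executable names are placeholders.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        # Publishes RGB-D frames and camera info.
        Node(package='active_vision_demo', executable='sensors_node'),
        # Subscribes to images and publishes zero-shot segmentation masks.
        Node(package='active_vision_demo', executable='segmentation_node'),
        # Fuses depth images and masks into (segmented) point clouds.
        Node(package='active_vision_demo', executable='pointcloud_node'),
        # Integrates point clouds into the occupancy and semantic OctoMap.
        Node(package='active_vision_demo', executable='octomap_node'),
    ])
```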


## Active Vision

The main functionality of the architecture is performing Active Vision to create a 3D reconstruction of the agricultural environment using a centralized approach. Unlike the decentralized approach, it combines all the functionalities related to Active Vision into a single node using multi-threading. In the Active Vision Pipeline Block, a MultiThreadedExecutor defined in the main node allows multiple nodes to run in separate threads: MoveIt2APICreator, SegmentationClient, PointcloudCreator, SegmentedPointcloudCreator, ExtendedOctomapCreator, and Pipeline. The Segmentation Block is an independent client-server node, managed through a ROS 2 service. Finally, the Robot Block runs all the entities related to the Igus ReBeL robot.
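The rclpy sketch below only illustrates the single-process, multi-threaded pattern described above; it is not the pipeline node implemented in this repository, and the node classes are trivial placeholders standing in for components such as the segmentation client and the point cloud creators.

```python
# Minimal sketch of the centralized pattern: several node objects spun together
# by one MultiThreadedExecutor in a single process. Placeholder nodes only.
import rclpy
from rclpy.node import Node
from rclpy.executors import MultiThreadedExecutor

class PlaceholderNode(Node):
    def __init__(self, name):
        super().__init__(name)
        # Real components would declare subscriptions, publishers, and service
        # clients here; a timer is enough to show each node staying alive.
        self.create_timer(1.0, lambda: self.get_logger().debug(f'{name} alive'))

def main():
    rclpy.init()
    nodes = [PlaceholderNode(n) for n in
             ('pointcloud_creator', 'segmented_pointcloud_creator',
              'extended_octomap_creator', 'pipeline')]
    executor = MultiThreadedExecutor()
    for node in nodes:
        executor.add_node(node)
    try:
        executor.spin()   # callbacks of all nodes run on a shared thread pool
    finally:
        executor.shutdown()
        rclpy.shutdown()

if __name__ == '__main__':
    main()
```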

### Simulation (Without mobile base)

### Simulation (With mobile base)

### Real-World (With mobile base)