
Simplifying Uncertainty for Smart Robots with Basic Sensors

Nature 620, 282-283 (2023)


Introduction

The authors begin by highlighting the importance of understanding uncertainty when making predictions with neural networks. Uncertainties can reveal valuable contextual information, such as data-collection biases and model limitations. The two main types are “aleatoric” uncertainty (noise inherent in the data) and “epistemic” uncertainty (stemming from the model’s limited knowledge). Aleatoric uncertainty can indicate object boundaries in images, while epistemic uncertainty signals when more data is needed for accurate predictions.
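
For context (this recipe is not from the paper itself), the two types are typically estimated in different ways: aleatoric uncertainty is predicted directly by the network as an extra output, while epistemic uncertainty can be approximated by the disagreement across stochastic forward passes, as in Monte Carlo dropout. A minimal sketch of the latter, assuming a PyTorch model that contains dropout layers:

```python
import torch

def mc_dropout_epistemic(model, x, n_samples=20):
    """Approximate epistemic uncertainty via Monte Carlo dropout.

    A common recipe, not the paper's method: keep dropout active at
    inference and measure disagreement across stochastic forward passes.
    """
    model.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    model.eval()
    # High variance across passes means the model is unsure (epistemic).
    return preds.mean(dim=0), preds.var(dim=0)
```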

The authors propose a way to estimate aleatoric uncertainty efficiently enough for real-time applications. This involves a new loss function that accounts for different assumed probability distributions over the prediction error. They apply the resulting framework, called “Ajna,” to various robotics tasks. By reasoning about uncertainty, robots can navigate through obstacles, detect unknown objects, and perform other tasks more effectively.
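
The paper’s exact loss is not restated here, but a standard way to learn aleatoric uncertainty jointly with a dense prediction is a negative log-likelihood in which the network also predicts a per-pixel scale of an assumed error distribution. The sketch below assumes a Laplace distribution over the flow error; the function and tensor names are illustrative, not Ajna’s published formulation:

```python
import torch

def laplace_nll_loss(flow_pred, log_b, flow_gt):
    """Per-pixel negative log-likelihood under a Laplace error assumption.

    flow_pred, flow_gt: (N, 2, H, W) predicted and ground-truth optical flow.
    log_b:              (N, 1, H, W) predicted log-scale (aleatoric uncertainty).
    """
    b = torch.exp(log_b)  # predicting log(b) keeps the scale positive and stable
    abs_err = (flow_pred - flow_gt).abs().sum(dim=1, keepdim=True)
    # Pixels with large b discount their error, but the +log(b) term
    # penalizes blanket high uncertainty, forcing the network to be selective.
    return (abs_err / b + log_b).mean()
```

Swapping the assumed distribution (say, a Gaussian instead of a Laplace) changes only the error term and the penalty, which is the sense in which such a loss can consider different probability distributions.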

Prior work in robotics and computer vision has shown that fusing uncertainty estimates from multiple sensors enhances performance. Uncertainties have also been used as regularizers (the predicted uncertainties serve as additional information during training) to improve object detection, depth estimation, optical flow, and more. This paper’s goal is to advance the understanding and application of uncertainties in neural networks to benefit a wide range of predictions.


Advanced Quadrotor Abilities Enabled by Ajna’s Perception

The PRGLabrador500 quadrotor achieves a range of impressive feats through its integration with the Ajna system.

The quadrotor platform itself is a custom build: low-level control runs as firmware on a flight controller, while its companion computer handles high-level navigation commands and runs the vision and planning algorithms.

Ajna, a neural network architecture, forms the core perception module. With approximately 2.72 million parameters, Ajna predicts dense optical flow and its associated per-pixel uncertainty. It was trained on the FlyingChairs2 and FlyingThings3D datasets, optimizing a loss function that combines self-supervision with supervised labels.
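
As a concrete illustration of what “flow plus uncertainty” means at the output layer, here is a minimal, hypothetical output head in PyTorch. It is a generic sketch, not Ajna’s published architecture, and the channel counts are assumptions:

```python
import torch
import torch.nn as nn

class FlowWithUncertaintyHead(nn.Module):
    """Generic output head: dense (u, v) flow plus a per-pixel log-uncertainty."""

    def __init__(self, in_channels=64):
        super().__init__()
        self.flow = nn.Conv2d(in_channels, 2, kernel_size=3, padding=1)   # (u, v)
        self.log_b = nn.Conv2d(in_channels, 1, kernel_size=3, padding=1)  # log-scale

    def forward(self, features):
        return self.flow(features), self.log_b(features)

# Usage: features would come from an encoder run over two consecutive frames.
head = FlowWithUncertaintyHead(in_channels=64)
feats = torch.randn(1, 64, 120, 160)
flow, log_b = head(feats)  # flow: (1, 2, 120, 160), log_b: (1, 1, 120, 160)
```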

The quadrotor’s capabilities extend across several tasks:

  1. Dynamic Obstacle Avoidance: Ajna enables the quadrotor to dodge unknown dynamic obstacles. High uncertainty in optical flow, indicative of occlusions, is used to detect approaching obstacles (a thresholding sketch follows this list). Trials achieved an 83.3% success rate.

  2. Unstructured Environment Navigation: Navigating diverse environments such as indoor forests and boxes is achieved by dynamically weighting local and global planning. Ajna assists in identifying safe regions and determining intermediate goal directions, contributing to successful navigation.

  3. Flying Through Unknown Gaps: The quadrotor can detect and navigate through gaps of unknown shapes. Ajna’s ability to discern high uncertainty areas, resulting from parallax effects within gaps, enables gap detection and successful passage.

  4. Object Pile Segmentation: Ajna’s analysis of uncertainties during active movement aids in segmenting objects within a pile. While direct segmentation is challenging, Ajna’s insights could serve as an initialization step for more complex segmentation methods.
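
Several of these tasks reduce to the same primitive: find image regions where flow uncertainty is unusually high (occlusions for obstacles, parallax for gaps). Below is a minimal sketch of such a detector; the z-score threshold is an illustrative choice, not the paper’s exact decision rule:

```python
import numpy as np

def high_uncertainty_mask(log_b, z=2.0):
    """Flag pixels whose predicted flow uncertainty is unusually high.

    log_b: (H, W) per-pixel log-scale map from the network.
    z:     standard deviations above the image mean that count as "high".
    Connected blobs in the returned mask are candidate dynamic obstacles
    or gap boundaries, to be handed off to the planner.
    """
    mu, sigma = log_b.mean(), log_b.std()
    return log_b > mu + z * sigma
```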

Ajna is also computationally efficient, with inference speeds that vary across hardware platforms. Overall, its integration with the quadrotor platform showcases a tight synergy of perception and action, enabling strong performance across a range of tasks.


Discussion

Traditionally, robots rely on 3D scene representations built through complex algorithms that combine sensors such as cameras with mapping systems. These methods fuse noisy data to improve accuracy. Ajna takes a fresh perspective, aiming to harness the often-overlooked uncertainties within sensor data to enhance robot capabilities.

A unified framework was introduced for a resource-constrained aerial robot to perform key tasks: avoiding dynamic obstacles, navigating cluttered spaces, flying through gaps, and segmenting object piles. The common thread among these tasks is the robot’s ability to actively capture images. By analyzing the uncertainty in optical flow, which peaks at occlusions and motion boundaries, the robot was able to identify object boundaries. Optical flow uncertainty also helps detect moving obstacles and assess motion blur, echoing the properties of event cameras.

The authors evaluated the uncertainty estimates across various settings, from changing light conditions to adversarial attacks. Experiments showcased their effectiveness in unknown-gap detection, static-environment navigation, and dynamic-obstacle evasion. Using two input images instead of one proved more advantageous for capturing subtle details like tree branches, which is vital for robot navigation.

In essence, the work leverages uncertainty in optical flow to enhance robot perception, leading to a more unified approach to various tasks. This approach paves the way for more efficient and versatile robotic systems.


Limitations and future directions

A major limitation is that optical flow uncertainty can be influenced by confounding factors such as depth boundaries, color-flat regions, abrupt brightness changes, or blinking lights. Because of these confounds, higher uncertainty does not always translate into the successful resolution of a task.

Although the authors demonstrated the utility of uncertainty (referred to as ϒ in the paper) in addressing typical robotic tasks, challenges remain in practical deployment. Developing strategies to seamlessly switch between tasks and navigate complex real-world scenarios is a critical next step.

In summary, the study represents an initial step towards a novel uncertainty framework that could revolutionize how roboticists approach solving robotic challenges.
