
Revolutionizing Depth Perception: Praying Mantis-inspired Vision Systems






Praying mantises have advanced vision with exceptional depth perception. (Patricia Chumillas/Shutterstock)

In a nutshell

  • Scientists have created artificial compound eyes inspired by praying mantises that process 3D vision 400 times more efficiently than traditional camera systems, tracking objects with millimeter precision
  • Unlike other insects, praying mantises use both stereoscopic vision and motion detection, giving them superior depth perception that researchers successfully replicated
  • The breakthrough could improve safety in self-driving vehicles and robotics by helping machines better judge distances to stationary objects – a current limitation in autonomous systems

CHARLOTTESVILLE, Va. — Autonomous vehicles can navigate complex traffic patterns and recognize road signs, but they sometimes fail at a seemingly simple task: determining how far away a parked car is. This limitation shares a surprising connection with the insect world, where most compound eyes excel at detecting motion but struggle with depth perception. Enter an unlikely hero in solving this technological challenge: the praying mantis.

A team from the University of Virginia’s School of Engineering and Applied Science has developed high-tech artificial eyes that mimic those of insects. These eyes mark a big step forward in visual sensing technology. According to a study published in Science Robotics, the researchers built them using two curved surfaces, each packed with 256 tiny light sensors arranged in a 16-by-16 grid. To recreate the unique shape and structure of mantis eyes, they used flexible semiconductor materials, allowing the artificial eyes to capture images in a way similar to real insect vision.
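To make that geometry concrete, here is a minimal sketch (in Python, not the authors' code) of a 16-by-16 grid of sensors draped over a hemisphere. The grid footprint is an assumption for illustration; the point is that on a curved surface each sensor naturally looks in a slightly different direction, which is where the wide field of view comes from.

```python
import numpy as np

# A minimal sketch, not the authors' fabrication model: place each of the
# 256 photodetectors on a unit dome and ask which way it "looks".
N = 16                          # 16-by-16 grid per eye, as in the study
u = np.linspace(-0.7, 0.7, N)   # grid footprint on the dome (assumed)
x, y = np.meshgrid(u, u)
z = np.sqrt(1.0 - x**2 - y**2)  # hemisphere: x^2 + y^2 + z^2 = 1

# On a unit sphere, the outward normal at a point is the point itself,
# so each sensor's viewing direction is simply its position vector.
directions = np.stack([x, y, z], axis=-1)   # shape (16, 16, 3)

# Field of view: the angle between the two most divergent directions.
corner_a, corner_b = directions[0, 0], directions[-1, -1]
fov = np.degrees(np.arccos(corner_a @ corner_b))
print(f"approximate field of view across the grid: {fov:.0f} degrees")
```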

“Making the sensor in hemispherical geometry while maintaining its functionality is a state-of-the-art achievement, providing a wide field of view and superior depth perception,” explains Byungjoon Bae, a Ph.D. candidate in the Charles L. Brown Department of Electrical and Computer Engineering at UVA, in a statement.

The way this artificial vision system works is remarkably elegant. Each eye contains light sensors similar to those in digital cameras, but with a crucial difference: they can process and store information right where they detect it. Traditional cameras must send all their visual data to a separate computer for processing, which takes time and energy. This new system handles much of the visual processing right at the source, similar to how insects process visual information in their eyes rather than sending raw data to their brains.
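As a hedged illustration of what "processing and storing information right where it is detected" can look like (the decay constant here is an assumption for the demo, not a device parameter from the paper), picture a pixel whose stored value is a fading trace of its own recent input. A brief flash leaves a trace that rises and then decays, so recent motion is encoded in the sensor itself rather than in a downstream computer.

```python
from dataclasses import dataclass

@dataclass
class PixelWithMemory:
    """Sketch of a photodetector paired with its own memory unit."""
    trace: float = 0.0
    decay: float = 0.8  # fraction of the trace kept per step (assumed)

    def sense(self, light: float) -> float:
        # Fold the new reading into a fading memory of past readings.
        self.trace = self.decay * self.trace + (1.0 - self.decay) * light
        return self.trace

pixel = PixelWithMemory()
for light in [0.0, 1.0, 1.0, 0.0, 0.0]:   # a brief flash of light
    print(round(pixel.sense(light), 3))   # trace rises, then fades
```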

The artificial compound eye prototype developed at the University of Virginia School of Engineering and Applied Science. (Credit: Science Robotics)

The artificial eyes are constructed using flexible materials that can be shaped into hemispheres, mimicking the curved surface of natural compound eyes. Each eye contains an array of tiny lenses that focus light onto the sensors beneath them. When an object moves through its field of view, both eyes track it independently and then share their information through a sophisticated computer network that combines their observations to determine the object’s position and movement in three dimensions.
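That fusion step can be pictured as a small triangulation problem: each eye contributes a line of sight, and the object sits where the two lines nearly meet. The sketch below illustrates only the principle; the eye positions, the 5 cm baseline, and the target location are invented for the demo, not taken from the paper.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Closest point to two rays p_i + t_i * d_i (midpoint method)."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for t1, t2 that minimize |(p1 + t1*d1) - (p2 + t2*d2)|^2.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

left_eye  = np.array([-0.025, 0.0, 0.0])   # assumed 5 cm baseline
right_eye = np.array([ 0.025, 0.0, 0.0])
target    = np.array([ 0.10, 0.05, 0.50])  # ground truth for the demo

# Each eye reports only a bearing; the solver recovers the 3D position.
estimate = triangulate(left_eye, target - left_eye,
                       right_eye, target - right_eye)
print(estimate)   # ~[0.10, 0.05, 0.50]
```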

This binocular vision system proves particularly effective because it combines two different ways of perceiving depth. First, it uses stereopsis, which is the same principle that gives humans depth perception by comparing slightly different views from two eyes. Second, it employs motion parallax, where closer objects appear to move faster across our field of view than distant ones. While many animals use one method or the other, praying mantises are unique among insects in using both, making them particularly adept at judging distances.
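Both cues reduce to simple geometry, as the back-of-envelope sketch below shows. All of the numbers (focal length, baseline, speeds) are illustrative assumptions rather than the paper's parameters: stereopsis converts the disparity between the two views into depth, and motion parallax converts an object's angular speed into depth.

```python
# 1) Stereopsis: two eyes a baseline B apart, each with focal length f,
#    see the same point offset by a disparity d. Depth: Z = f * B / d.
f = 0.004     # 4 mm focal length (assumed)
B = 0.05      # 5 cm baseline between the eyes (assumed)
d = 0.0004    # 0.4 mm disparity measured between the two images
print(f"stereo depth: {f * B / d:.2f} m")           # 0.50 m

# 2) Motion parallax: for an observer translating at speed v, a point at
#    depth Z (perpendicular to the motion) sweeps across the view at an
#    angular rate of about v / Z, so Z = v / angular_rate.
v = 0.2             # observer speed in m/s (assumed)
angular_rate = 0.4  # rad/s measured for the tracked point
print(f"parallax depth: {v / angular_rate:.2f} m")  # 0.50 m
```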

The system can track objects with an error margin of just 0.3 centimeters, about the width of a pencil eraser. Even more remarkable is its processing speed: it analyzes visual information in just 1.8 milliseconds, fast enough to track rapidly moving objects in real time. And it does all this while using minimal power, about 400 times less energy than conventional camera systems.

The researchers achieved this efficiency through an approach called “edge computing,” where data is processed as close as possible to where it is collected. The system continuously monitors scenes for changes, creating compact data packages that require minimal processing power. This mirrors how biological vision systems work, focusing on changes in the visual scene rather than constantly processing everything in view.
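A rough sketch of that change-driven scheme (the threshold and the 16-by-16 frame handling are assumptions, not the authors' implementation): each pixel keeps its previous reading next to the sensor and emits an event only when the new reading differs enough, so a mostly static scene produces almost no data.

```python
import numpy as np

THRESHOLD = 0.1                  # minimum change worth reporting (assumed)
last_frame = np.zeros((16, 16))  # per-pixel memory held beside the sensors

def sense(frame):
    """Return a compact event list [(row, col, change), ...] and update memory."""
    global last_frame
    delta = frame - last_frame
    rows, cols = np.where(np.abs(delta) > THRESHOLD)
    last_frame = frame.copy()
    return [(r, c, delta[r, c]) for r, c in zip(rows, cols)]

frame = np.zeros((16, 16))
frame[4, 7] = 0.8          # a small bright object appears
print(sense(frame))        # one event instead of 256 raw pixel values
```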

A photograph of the artificial compound eye prototype developed at the University of Virginia School of Engineering and Applied Science by associate professor Kyusang Lee. (Credit: University of Virginia School of Engineering and Applied Science / Kyusang Lee)

One of the most promising aspects of this technology is its potential to improve the safety and reliability of autonomous vehicles. Current self-driving cars sometimes struggle to accurately judge the distance to stationary or slow-moving objects, leading to safety concerns. By incorporating this mantis-inspired vision system, future vehicles could better perceive their surroundings, potentially reducing accidents and improving navigation in complex environments.

Beyond autonomous vehicles, this technology could revolutionize other fields where accurate visual perception is crucial. Robotic systems could become more precise in assembly lines or more adept at navigating dynamic environments. Surveillance systems could track objects more accurately while using less power. Even smart home devices could benefit from improved depth perception and motion-tracking capabilities.

“The seamless fusion of these advanced materials and algorithms enables real-time, efficient, and accurate 3D spatiotemporal perception,” explains Kyusang Lee, an associate professor from UVA.

This research could mark the beginning of a new era in machine vision, where efficient, accurate depth perception becomes standard. By following nature’s blueprint, researchers have shown that sometimes the best innovations come from understanding and adapting existing biological solutions.

Paper Summary

Methodology

The artificial eyes work through an integrated process combining hardware and software innovations. Each eye contains photodetectors that sense light, paired with memory units that store information about what they see. The system uses microlenses to direct light to each sensor, similar to how insect eyes focus light through individual facets. The system’s edge computing approach processes visual information right where it’s captured, similar to how insects process visual information in their eyes rather than sending everything to their brain. The sensor array continuously monitors scenes for changes, creating efficient data sets that require minimal processing power.

Results

Testing showed three key achievements: First, the system tracked objects with high precision (within 0.3 centimeters). Second, it processed information extremely quickly (1.8 milliseconds per frame). Third, it did so while using very little power (about 4 millijoules per operation), roughly 400 times more efficient than traditional systems. The researchers tested the system with various objects under different lighting conditions to verify consistent performance.

Limitations

Current limitations include relatively low resolution (256 pixels per eye compared to millions in modern cameras), laboratory-only testing conditions, and a fixed mounting system that doesn’t yet allow for eye movement. The system also requires specific lighting conditions for optimal performance.

Takeaways and Discussion

This research demonstrates how mimicking biological systems can solve complex engineering challenges. The system’s ability to process visual information at the sensor level, rather than sending all data to a central processor, represents a new approach to computer vision. The combination of speed, accuracy, and energy efficiency makes it particularly promising for applications where these factors are crucial.

Funding and Disclosures

This work received support from the National Science Foundation (grant no. 1942868) and the U.S. Air Force Office of Scientific Research Young Investigator Program (grant no. FA9550-23-1-0159). The research team included UVA electrical and computer engineering graduate students Doeon Lee, Minseong Park, Yujia Mu, Yongmin Baek, Inbo Sim and Cong Shen. The researchers declared no competing interests.

Publication Information

Published in Science Robotics (Volume 9, eadl3606) on May 15, 2024, by researchers at the University of Virginia’s Department of Electrical and Computer Engineering and Department of Material Science and Engineering.
