Bionic Super 3D cameras have bug eyes and bat sonar


Reconstructed 3D images rendered in different perspectives for the letter “X”. Credit: Smart Optics Laboratory, Liang Gao/UCLA Samueli

Key points:

  • The researchers developed CLIP, a new framework that allows the camera system to “see” with an extended depth range and around objects.
  • CLIP was inspired by the echolocation abilities of bats, as well as the geometrically shaped compound eyes of insects.
  • The technology could be integrated into autonomous vehicles and medical imaging tools.

Inspired by flies and bats, UCLA engineers have developed a new class of bionic 3D camera systems with multidimensional imaging capability and an extended depth range that can scan through blind spots.

Powered by computer image processing, the camera can decipher the size and shape of objects hidden around corners or behind other features. Once perfected, the technology could be applied to autonomous vehicles or medical imaging tools with such sensing capabilities.

The researchers drew inspiration from bats, which use echolocation to visualize their surroundings in the dark, and from insects, whose geometrically shaped compound eyes comprise hundreds to tens of thousands of individual units of sight, making it possible to see the same thing from multiple lines of sight.

“Although the idea itself has been tried, seeing through a range of distances and around occlusions has been a major hurdle,” said study leader Liang Gao, associate professor of bioengineering at the UCLA Samueli School of Engineering. “To address this issue, we have developed a new computational imaging framework, which allows for the first time wide and deep panoramic view acquisition with simple optics and a small sensor array.”

Called “Compact Light-field Photography,” or CLIP, the framework allows the camera system to “see” with an extended depth range and around objects. In experiments, the researchers demonstrated that their system can see hidden objects that are not detected by conventional 3D cameras.

Gao and his team then combined CLIP with a type of LiDAR, or light detection and ranging. Conventional LiDAR, without CLIP, would take a high-resolution snapshot of the scene but miss hidden objects, much as human eyes would. Using seven LiDAR cameras with CLIP, the network takes a lower-resolution image of the scene, processes what the individual cameras see, and then reconstructs the combined scene into high-resolution 3D imagery. The researchers demonstrated that the camera system could image a complex 3D scene with multiple objects, all placed at different distances.
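To give a rough sense of that kind of joint reconstruction, the toy Python sketch below simulates several low-resolution sensors observing the same scene and solves for a single higher-resolution image consistent with all of them. The sensor models, array sizes, and the least-squares solver are illustrative assumptions, not the UCLA team's actual method.

```python
# Minimal sketch of multi-sensor computational imaging: several sensors each
# record a low-resolution, partial view of a scene, and a reconstruction step
# solves for one image consistent with all measurements. Everything here
# (random sensor matrices, sizes, noise level) is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(0)

# "Ground truth" scene: a 16x16 image flattened into a vector.
n = 16
scene = np.zeros((n, n))
scene[4:12, 7:9] = 1.0            # a simple bar-shaped object
x_true = scene.ravel()

# Seven sensors, each modeled as a random low-resolution projection of the
# scene. (In a real system the optics would define these; random matrices are
# used here purely for illustration.)
num_sensors = 7
rows_per_sensor = 48              # each sensor records far fewer values than n*n pixels
A = np.vstack([rng.normal(size=(rows_per_sensor, n * n)) for _ in range(num_sensors)])

# Simulated measurements from all sensors, with a little noise.
y = A @ x_true + 0.01 * rng.normal(size=A.shape[0])

# Joint reconstruction: find the scene most consistent with every sensor at once.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```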

According to Gao, CLIP helps the camera network make sense of what is hidden in much the same way. Combined with LiDAR, the system achieves the echolocation effect of bats: it can detect a hidden object based on the time it takes light to bounce back to the camera.
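The time-of-flight principle behind that echolocation effect can be illustrated with a short snippet: the distance to a reflecting object follows from how long a light pulse takes to return. The pulse time below is an arbitrary example value, not a measurement from the UCLA system.

```python
# Toy time-of-flight calculation: distance = speed of light * round-trip time / 2.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to a reflecting object, given the round-trip time of a light pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Example: a return after 20 nanoseconds corresponds to an object about 3 meters away.
print(f"{distance_from_round_trip(20e-9):.2f} m")
```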

Information provided by the UCLA Samueli School of Engineering.
