Robotic eyes unconstrained by human perception


Forcing robots to see through human eyes is limiting. Intelligent robotic eyes that can think for themselves could be the answer.

In his epic 1968 film 2001: A Space Odyssey, Stanley Kubrick included unforgettable scenes in which the sentient computer, HAL, watches Dave, the scientist aboard the Discovery One spacecraft. That portrayal of how a machine perceives the world is ultimately defined by our own perception of it.

In practice, we have built and taught AI to understand the world as if it were looking through our own eyes. As a result, most machine vision systems rely on cameras that first produce images meant for humans, which then form the basis for training a neural network. Forcing robotic eyes to see through our own cognitive interpretation, however, may undermine their true potential.

But what alternatives could robots use? It is difficult to imagine perceiving the world beyond our own experience. The ability of biological organisms to understand reality has been shaped by the necessities of survival, refined by evolution over hundreds of millions of years. AI, however, is not bound by this construction.

Research published in the journal Advanced Intelligent Systems attempts to answer this problem by proposing a new approach to artificial vision. It aims to implement an intelligent visual perception device that mimics biological retinal cells and their connecting neurons at a fundamental level, and it incorporates a small hardware-based artificial neural network to perform rudimentary tasks similar to elementary functions of the visual cortex.

How “smart” do these robotic eyes have to be?

Robotic eyes with artificial visual perception are essential to several important technologies, such as automotive safety systems, industrial manufacturing, and even advanced medical equipment. These platforms are usually expensive because they rely on a camera to capture images, which are then processed by a complex AI algorithm running on a powerful processor. In addition, the AI must undergo extensive offline training on large, detailed datasets.

But do all smart-machine applications require high-end hardware and high-level cognition in their learning process?

Abstract understanding might suffice in some applications, where the AI only needs to make basic or general assumptions. For example, identifying a ball or a round object may be enough, without requiring the system to tell a basketball from a baseball. Such an AI does not need to rely on a processor and would be considerably cheaper. It could even operate without a camera, for example to simply flag balls that are defective or misshapen.

The answer to that question therefore directly affects the cost and complexity of AI systems. The first case demands a large artificial neural network, while the second can be solved with small, cheap building blocks working in parallel. Returning to the previous example, basketballs and baseballs each have unique, differentiating characteristics.

A high-level understanding network must take these nuances into account and learn to classify them correctly. The amount of information fed into the AI can therefore be quite large. As a result, the size and complexity of the network grow rapidly, and the associated energy expenditure grows even faster.

On the other hand, an AI with abstract understanding can be small and simple, designed only to identify whether the captured image contains a shape with a single axis of symmetry. A group of such units can tell when a ball-shaped object is presented by jointly identifying multiple axes of symmetry in the image. The integrated decision of all the individual units thus produces a relatively sophisticated response. Such a system can be implemented using dedicated, minimalistic hardware, as shown in this study.
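As a toy illustration of that voting scheme (a Python sketch of my own, assuming square grayscale images, not the dedicated hardware from the study), each unit below tests a single mirror reflection, and the ensemble combines the votes:

```python
import numpy as np

def mirror_score(img: np.ndarray, reflected: np.ndarray) -> float:
    """Similarity between an image and one of its reflections (1.0 = identical)."""
    return 1.0 - np.abs(img - reflected).mean() / (np.ptp(img) + 1e-9)

def looks_round(img: np.ndarray, threshold: float = 0.9) -> bool:
    """Ensemble of four minimal 'units', each testing one axis of symmetry.
    The 0.9 threshold is an arbitrary illustrative choice."""
    reflections = [
        np.flipud(img),  # mirror about the horizontal axis
        np.fliplr(img),  # mirror about the vertical axis
        img.T,           # mirror about the main diagonal
        np.flip(img).T,  # mirror about the anti-diagonal
    ]
    votes = [mirror_score(img, r) >= threshold for r in reflections]
    return all(votes)  # several axes of symmetry -> roughly round
```

A centered disk passes all four checks, while an axis-aligned ellipse passes the horizontal and vertical checks but fails the diagonal ones; each unit applies only a single threshold, yet their combined vote yields a more nuanced decision.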

Pervasive intelligent vision systems generally adhere to the high-level approach and must therefore rely on software, with the AI implemented as a learning algorithm. These algorithms must go through a preliminary training process before they can make their own predictions.

Object-oriented programming offers many degrees of freedom and allows a straightforward implementation in which code entities act as artificial neurons. Each neuron is represented by a mathematical function containing a very large number of multiplication and summation operations. Such algorithms require powerful, energy-hungry processors to handle the interactions among thousands of multivariate soft neurons.
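A minimal sketch of that multiply-and-sum structure (generic Python of my own, not code from the study):

```python
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """One software neuron: a weighted sum (many multiplications and
    summations) squashed by a simple nonlinearity."""
    return np.tanh(np.dot(weights, inputs) + bias)

def layer(inputs: np.ndarray, weight_matrix: np.ndarray, biases: np.ndarray) -> np.ndarray:
    """A layer of such neurons: for n inputs and m neurons this already
    costs on the order of n * m multiply-add operations, which is why
    large networks demand powerful, energy-hungry processors."""
    return np.tanh(weight_matrix @ inputs + biases)
```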

For abstract-level intelligence, by contrast, dedicated neural processors can be used. These are composed of hardware neurons orchestrated to cooperatively perform tasks beyond their inherent level of sophistication. With this in mind, the research demonstrated a simple, cheap implementation, used to control a robotic vehicle, with only four hardware neurons that could be trained on the fly. Moreover, bio-inspired image acquisition considerably reduced the size of the input data.

These concepts were demonstrated using a prototype vision platform that maneuvered a small robotic vehicle. The platform was built around a microcontroller and a fully programmable integrated circuit, and it incorporated a neural processor trained on the fly to associate a set of hieroglyphs with motor-control instructions. The hieroglyphic symbols were linked to commands such as “forward then turn right” or “backward then turn left”.
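Here is a hypothetical software sketch of that idea: four perceptron-like neurons trained on the fly to map a feature vector, extracted from a hieroglyph image, to motor-command bits. The feature encoding, bit-to-command mapping, and learning rule are my assumptions for illustration, not the paper's actual design:

```python
import numpy as np

# Hypothetical encoding: each neuron outputs one command bit; only two
# of the sixteen possible bit patterns are wired to commands here.
COMMANDS = {
    (1, 1, 1, 1): "forward then turn right",
    (1, 0, 1, 0): "backward then turn left",
}

class FourNeuronController:
    """Four perceptron-like neurons updated online (on the fly)."""

    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = np.zeros((4, n_features))  # one weight row per neuron
        self.b = np.zeros(4)
        self.lr = lr

    def predict(self, x: np.ndarray) -> np.ndarray:
        """Threshold each neuron's weighted sum into a command bit."""
        return (self.w @ x + self.b > 0).astype(int)

    def train_step(self, x: np.ndarray, target_bits: np.ndarray) -> None:
        """Classic perceptron rule: nudge each neuron toward its target bit."""
        error = target_bits - self.predict(x)
        self.w += self.lr * np.outer(error, x)
        self.b += self.lr * error

# Usage sketch: a few on-the-fly updates, then a prediction.
ctrl = FourNeuronController(n_features=16)
x = np.random.rand(16)              # stand-in for hieroglyph features
target = np.array([1, 1, 1, 1])     # bits for "forward then turn right"
for _ in range(20):
    ctrl.train_step(x, target)
print(COMMANDS.get(tuple(ctrl.predict(x)), "unknown"))
```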

So, do robotic eyes need to “see” through a camera designed to satisfy human perception? Honestly, no. And lifting this limitation could let them move into hitherto unsuspected areas.

Dan Berco, Chih-Hao Chiu, and Diing Shenp Ang, “Bio-inspired visio-neural controller supervised by hieroglyphics,” Advanced Intelligent Systems (2022). DOI: 10.1002/aisy.202200066

Disclaimer: The author of this article participated in the study.

