To build smarter robots, engineers have developed an artificial brain system that combines touch and vision sensing.


For robots, picking up a can of soda is a hard task, even though it may seem simple to humans. The robot must locate the object, assess its shape, decide how much force to apply, and grip it without letting it slip.


Most robots in use today rely on visual processing alone, which limits what they can do. To carry out more complex tasks, robots need an excellent sense of touch and the capacity to process sensory data quickly and intelligently.


The new sensory-integrated artificial brain system can run on a power-efficient neuromorphic processor, such as Intel's Loihi chip, and mimics biological neural networks.


The system also incorporates artificial skin and vision sensors, giving robots the ability to draw accurate conclusions about the objects they are grasping, based on the data the vision and touch sensors capture in real time.

Quick and reliable intuition


According to Benjamin Tee, an associate professor in the Department of Materials Science and Engineering at the National University of Singapore, "robot manipulation has grown exponentially in recent years."


"However, it remains a technical challenge to combine visual and tactile information to deliver highly accurate information in milliseconds. The most recent work we've done to make robots smarter and faster with them." in physical contact with our lightning-fast electronic skin cells." And integrating the system with recent advances in visual perception and AI, he explains.


Adding human-like senses to robots could enhance their current capabilities and open up new applications. For example, on a manufacturing floor, a robotic arm fitted with electronic skin could maneuver easily among unfamiliar objects, using touch to identify them and applying just the right amount of pressure to keep them from slipping.


For the new robotic system, the researchers used an advanced asynchronous coded electronic skin that Tee's team developed in 2019. The sensor detects touch more than 1,000 times faster than the human sensory nervous system and can identify the shape, texture, and hardness of an object 10 times faster than the blink of an eye.
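Asynchronous coding means the skin reports touch as a sparse stream of events rather than as full sensor frames, so quiet sensors produce no traffic. The sketch below is only a minimal illustration of that idea in Python, not the actual encoding used by the skin; the taxel count, threshold, and event format are assumptions chosen for demonstration.

```python
import numpy as np

def encode_events(prev, curr, t, threshold=0.05):
    """Emit (time, taxel_id, polarity) events for taxels whose
    pressure changed by more than `threshold` since the last reading.
    Illustrative only; the real skin's encoding is more sophisticated."""
    diff = curr - prev
    events = []
    for taxel_id in np.flatnonzero(np.abs(diff) > threshold):
        polarity = 1 if diff[taxel_id] > 0 else -1
        events.append((t, int(taxel_id), polarity))
    return events

# Example: a hypothetical 16-taxel patch where only two taxels changed.
prev = np.zeros(16)
curr = np.zeros(16)
curr[3], curr[7] = 0.4, -0.2
print(encode_events(prev, curr, t=0.001))
# [(0.001, 3, 1), (0.001, 7, -1)]
```

Because only changing taxels emit events, latency and bandwidth stay low even as the number of sensors grows, which is what makes millisecond-scale touch feedback feasible.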


"Creating artificial skin recognition technology provides an answer to around half of the questions needed to make robots intelligent. They also need an artificial brain to ultimately develop consciousness and learning as another important piece of the puzzle,” says Tee.

Another puzzle for intelligent robots


To advance robotic perception, the researchers explored neuromorphic technology, a branch of computing that mimics the neural structure and function of the human brain, to interpret sensory input from the artificial skin.


Tee and Harold Soh, an assistant professor in the Department of Computer Science, say using Intel's Loihi neuromorphic research processor for their new robotic system was an obvious choice, since both are members of the Intel Neuromorphic Research Community.


In initial tests, the researchers fitted a robotic hand with the artificial skin and had it read Braille, passing the tactile data to Loihi via the cloud to interpret the meaning of the small bumps the hand felt. Using 20 times less power than a typical CPU, Loihi classified the Braille letters with an accuracy of over 92 percent.


To enhance the robot's perception abilities, Soh's team used a spiking neural network to fuse the visual and touch data. In their experiments, the researchers had a robot equipped with the artificial skin and vision sensors classify various opaque containers holding different volumes of liquid. They also evaluated the system's ability to detect rotational slip, which is crucial for a stable grip.
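As a rough illustration of how a spiking network can fuse two sensory streams, here is a minimal leaky integrate-and-fire sketch in plain Python/NumPy. It is not the team's actual model or Loihi code; the channel counts, time constants, and random weights are placeholders chosen only to make the example run.

```python
import numpy as np

rng = np.random.default_rng(0)

class LIFLayer:
    """Minimal leaky integrate-and-fire layer (discrete time)."""
    def __init__(self, n_in, n_out, tau=20.0, v_thresh=1.0):
        self.w = rng.normal(0, 0.3, size=(n_out, n_in))  # untrained weights
        self.decay = np.exp(-1.0 / tau)  # membrane leak per time step
        self.v_thresh = v_thresh
        self.v = np.zeros(n_out)

    def step(self, spikes_in):
        self.v = self.decay * self.v + self.w @ spikes_in
        spikes_out = (self.v >= self.v_thresh).astype(float)
        self.v[spikes_out > 0] = 0.0  # reset membrane after firing
        return spikes_out

# Fuse tactile and visual spike streams (dimensions are assumptions)
# by feeding their concatenation into one hidden layer, then reading
# out class scores as output spike counts over the trial.
tactile_dim, visual_dim, hidden_dim, n_classes = 80, 128, 64, 4
hidden = LIFLayer(tactile_dim + visual_dim, hidden_dim)
readout = LIFLayer(hidden_dim, n_classes)

counts = np.zeros(n_classes)
for t in range(200):  # 200 simulated time steps of random input spikes
    tactile = (rng.random(tactile_dim) < 0.05).astype(float)
    visual = (rng.random(visual_dim) < 0.05).astype(float)
    h = hidden.step(np.concatenate([tactile, visual]))
    counts += readout.step(h)

print("predicted class:", int(np.argmax(counts)))
```

The design point the sketch captures is that both modalities arrive as spike trains, so fusing them is as simple as concatenating event channels; the network never needs to synchronize dense camera frames with dense tactile frames.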


In both experiments, the spiking neural network that combined touch and visual input successfully classified the objects and detected slippage, with classification accuracy 10 percent higher than that of a vision-only system. Moreover, using a method Soh's team developed, the network could classify the sensory data while it was still being collected, rather than waiting until all the data had been gathered, as conventional approaches do.
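The idea of deciding while data is still streaming can be illustrated with a toy "anytime" classifier: accumulate evidence for each class and commit as soon as the leader is far enough ahead of the runner-up. This is a stand-in sketch with assumed spike-count inputs and an arbitrary margin, not the method from the paper.

```python
import numpy as np

def classify_anytime(spike_counts_stream, margin=5):
    """Return a class as soon as the running spike-count leader
    beats the runner-up by `margin`; fall back to the final tally.
    A toy stand-in for early classification on streaming input."""
    totals = None
    for t, counts in enumerate(spike_counts_stream):
        totals = counts if totals is None else totals + counts
        top2 = np.sort(totals)[-2:]  # runner-up and leader
        if top2[1] - top2[0] >= margin:
            return int(np.argmax(totals)), t  # early decision
    return int(np.argmax(totals)), t          # decided at the end

# Example: class 2 accumulates evidence fastest, so the classifier
# can usually answer long before all 100 steps have arrived.
rng = np.random.default_rng(1)
stream = (rng.poisson([1, 1, 3, 1]) for _ in range(100))
label, step = classify_anytime(stream)
print(f"class {label} decided at step {step}")
```

For a robot adjusting its grip, committing early like this is what turns a classification result into a reaction fast enough to stop an object from slipping.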


The researchers also demonstrated the promise of neuromorphic technology for power efficiency: Loihi processed the sensory data 21 percent faster than a top-performing graphics processing unit while using more than 45 times less power.


"We're quite happy with these results," says Soh. "They show that a neuromorphic system is a viable option for fusing multiple sensors to enhance robot perception. It's a step toward building reliable, power-efficient robots that can respond quickly and appropriately in unforeseen circumstances."


Looking ahead, Tee and Soh plan to further refine their robotic system for use in the logistics and food manufacturing industries, where the need for robotic automation is high, especially in the post-COVID era.


The researchers presented their findings at the Robotics: Science and Systems conference in July 2020.


The project was financed by the National Robotics R&D Programme Office.