A team of scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) has created a robust, soft, thumb-shaped haptic sensor that uses computer vision and a deep neural network to accurately estimate where objects come into contact with the sensor and the magnitude of the applied forces. The research is a significant step toward robots that can feel their environment as accurately as humans and other animals.
Dubbed ‘Insight’, the sensor is made of a soft shell built around a stiff, lightweight skeleton that holds up the structure much like bones stabilise soft finger tissue. The opaque greyish-coloured shell is made from an elastomer mixed with dark but reflective aluminium flakes. Hidden inside this finger-sized cap is a tiny 160-degree fish-eye camera that records colour images illuminated by a ring of LED lights.
When an object touches the sensor’s shell, the colour pattern seen inside the sensor changes. The camera records images many times per second and feeds the data to a deep neural network that can detect even the smallest change of light in each pixel. Within a fraction of a second, the trained machine-learning model can map out exactly where contact is occurring on the sensor, determine how strong the forces are and indicate the force direction. The model is thus inferring what’s known as a force map: a force vector for every point on the three-dimensional fingertip.
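To make the pipeline concrete, the sketch below shows, in PyTorch, how a small convolutional network could turn one internal camera frame into such a force map. It is an illustration only, not the published architecture: the layer sizes, the 1,024-point surface grid and the name ForceMapNet are all assumptions.

```python
import torch
import torch.nn as nn

class ForceMapNet(nn.Module):
    """Toy network mapping an internal camera image to a force map:
    one 3-component force vector (x, y, z) per cell of a coarse grid
    over the fingertip surface. Sizes are illustrative, not the paper's."""
    def __init__(self, grid_points=1024):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(64 * 4 * 4, grid_points * 3)
        self.grid_points = grid_points

    def forward(self, image):
        features = self.encoder(image).flatten(start_dim=1)
        return self.head(features).view(-1, self.grid_points, 3)

# One RGB frame from the internal fish-eye camera (random stand-in data).
frame = torch.rand(1, 3, 480, 640)
force_map = ForceMapNet()(frame)
print(force_map.shape)  # torch.Size([1, 1024, 3]): a force vector per point
```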
While testing the sensor, the researchers realised that it was sensitive enough to feel its own orientation relative to gravity.
‘We achieved this excellent sensing performance through the innovative mechanical design of the shell, the tailored imaging system inside, automatic data collection and cutting-edge deep learning,’ said Georg Martius, who heads the Autonomous Learning Group at MPI-IS.
‘Previous soft haptic sensors had only small sensing areas, were delicate and difficult to make, and often could not feel forces parallel to the skin, which are essential for robotic manipulation such as holding a glass of water or sliding a coin along a table,’ said Katherine J Kuchenbecker, the director of the Haptic Intelligence Department at MPI-IS.
The training data used to build the neural network was generated using a testbed that probed the sensor all over its surface and recorded the contact force vector together with the camera image inside the sensor. It took nearly three weeks to collect 200,000 measurements and another day to train the machine-learning model.
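Purely as an illustration of the kind of supervised training this testbed data enables, the following self-contained sketch fits a small network to (camera image, force vector) pairs. The random tensors stand in for the roughly 200,000 real measurements, and every size and hyperparameter here is an assumption, not a detail from the paper.

```python
import torch
import torch.nn as nn

# Stand-ins for the recorded (camera image, contact force) pairs:
# random images and random force vectors, for shape only.
images = torch.rand(256, 3, 64, 64)   # internal camera frames
forces = torch.randn(256, 3)          # measured (fx, fy, fz) per probe

model = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 3),       # predict one force vector per frame
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    for start in range(0, len(images), 32):
        batch_x = images[start:start + 32]
        batch_y = forces[start:start + 32]
        optimiser.zero_grad()
        loss = loss_fn(model(batch_x), batch_y)  # predicted vs measured force
        loss.backward()
        optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```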
‘The hardware and software design we present in our work can be transferred to a wide variety of robot parts with different shapes and precision requirements,’ said PhD student Huanbo Sun, who designed the testbed. ‘The machine-learning architecture, training and inference process are all general and can be applied to many other sensor designs.’
The research has been published in Nature Machine Intelligence.