Skin deep
Information from sensors is transmitted using neural-style activity spikes.
The nervous system does an astonishing job of tracking sensory information, and does so using signals that would drive many computer scientists insane: a noisy stream of activity spikes that may be transmitted to hundreds of additional neurons, where they are integrated with similar spike trains coming from still other neurons.
Now, researchers have used spiking circuitry to build an artificial robotic skin, adopting some of the principles of how signals from our sensory neurons are transmitted and integrated. While the system relies on a few decidedly not-neural features, it has the advantage that we have chips that can run neural networks using spiking signals, which would allow this system to integrate smoothly with some energy-efficient hardware to run AI-based control software.
Location via spikes
The nervous system in our skin is remarkably complex. It has specialized sensors for different sensations: heat, cold, pressure, pain, and more. In most areas of the body, these feed into the spinal column, where some preliminary processing takes place, allowing reflex reactions to be triggered without even involving the brain. But signals do make their way along specialized neurons into the brain, allowing further processing and (potentially) conscious awareness.
The researchers behind the recent work, based in China, decided to implement something similar for an artificial skin that could be used to cover a robotic hand. They limited sensing to pressure, but implemented other things the nervous system does, including figuring out the location of input and injuries, and using multiple layers of processing.
All of this started with a flexible polymer skin containing embedded pressure sensors, which were linked to the rest of the system via conductive polymers. The next layer of the system converted the inputs from the pressure sensors into a series of activity spikes: short pulses of electrical current.
These trains of spikes can convey information in three ways: through the magnitude of an individual pulse, through its duration, and through the frequency of the spikes. Spike frequency is the most commonly used means of conveying information in biological systems, and the researchers use it to convey the pressure experienced by a sensor. The remaining two properties are used to create something akin to a bar code that helps identify which sensor the reading came from.
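To make that encoding concrete, here is a rough Python sketch of the general idea: pressure sets the spike rate, while pulse amplitude and width serve as a per-sensor code. The specific mappings, numbers, and names are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: encode a pressure reading as a spike train whose
# frequency tracks pressure, while pulse amplitude and width carry a simple
# per-sensor "bar code." Scaling factors and names are assumptions.
from dataclasses import dataclass

@dataclass
class Spike:
    time_ms: float      # when the pulse fires
    amplitude: float    # pulse height, part of the sensor's ID code
    width_ms: float     # pulse duration, the other part of the ID code

def encode_pressure(pressure_kpa: float, sensor_id: int,
                    window_ms: float = 100.0) -> list[Spike]:
    """Map pressure to spike frequency; stamp each spike with the sensor's code."""
    # Assumed linear mapping: 1 kPa -> 10 Hz, capped at 500 Hz.
    rate_hz = min(pressure_kpa * 10.0, 500.0)
    n_spikes = int(rate_hz * window_ms / 1000.0)
    # Hypothetical "bar code": amplitude and width derived from the sensor ID.
    amplitude = 1.0 + 0.1 * (sensor_id % 8)
    width_ms = 0.5 + 0.05 * (sensor_id // 8)
    interval = window_ms / max(n_spikes, 1)
    return [Spike(time_ms=i * interval, amplitude=amplitude, width_ms=width_ms)
            for i in range(n_spikes)]

print(len(encode_pressure(pressure_kpa=20.0, sensor_id=3)))  # 20 spikes in 100 ms
```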
In addition to registering pressure, the researchers had each sensor send an “I’m still here” signal at regular intervals. Failure to receive this signal would indicate that something has gone wrong with that sensor.
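A minimal sketch of that heartbeat check might look like the following; the interval and data structures are assumptions for illustration.

```python
# Watchdog sketch for the "I'm still here" heartbeat idea: if a sensor hasn't
# checked in within its expected interval, flag it as suspect.
def find_silent_sensors(last_heartbeat_ms: dict[int, float],
                        now_ms: float,
                        interval_ms: float = 500.0) -> list[int]:
    """Return IDs of sensors whose last heartbeat is older than the interval."""
    return [sensor_id for sensor_id, t in last_heartbeat_ms.items()
            if now_ms - t > interval_ms]

heartbeats = {0: 900.0, 1: 1400.0, 2: 200.0}   # last check-in time per sensor
print(find_silent_sensors(heartbeats, now_ms=1500.0))  # sensors 0 and 2 look dead
```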
The spiking signals allow the next layer of the system to identify any pressure being experienced by the skin, as well as where it originated. This layer can also do basic evaluation of the sensory input: “Pressure-initiated raw pulses from the pulse generator accumulated in the signal cache center until a predefined pain threshold is surpassed, activating a pain signal.” This can allow the equivalent of basic reflex reactions that don’t involve higher-level control systems. For example, the researchers set up a robotic arm covered with their artificial skin and had it move away whenever it experienced pressure intense enough to cause damage.
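The quoted accumulate-until-threshold behavior can be sketched in a few lines; the threshold value and the reflex callback here are hypothetical stand-ins.

```python
# Sketch: raw pulses accumulate in a cache until a predefined pain threshold
# is crossed, at which point a pain signal and a reflex are triggered.
class SignalCache:
    def __init__(self, pain_threshold: int = 50):
        self.pain_threshold = pain_threshold
        self.pulse_count = 0

    def add_pulses(self, n: int) -> bool:
        """Accumulate pulses; return True once the pain threshold is surpassed."""
        self.pulse_count += n
        return self.pulse_count > self.pain_threshold

def reflex_withdraw():
    print("reflex: move arm away from the stimulus")

cache = SignalCache()
for burst in (10, 15, 30):       # three bursts of spikes from one skin region
    if cache.add_pulses(burst):
        reflex_withdraw()        # fires on the third burst, no "brain" involved
        break
```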
The second layer also combines and filters signals from the skin before sending the information on to the arm’s controller, which is the equivalent of the brain in this setup. In one demonstration, the same system caused a robotic face to change expressions based on how much pressure the arm was sensing.
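As a loose illustration of that layered flow, here is a toy version in which the skin layer filters regional spike counts and the controller maps the total to an expression; every threshold and label is invented for the example.

```python
# Toy sketch of the layered flow: the skin layer aggregates and filters spike
# counts per region, then hands a summary to the controller ("brain").
def summarize_regions(spike_counts: dict[str, int], noise_floor: int = 2) -> dict[str, int]:
    """Drop near-silent regions and pass the rest on to the controller."""
    return {region: n for region, n in spike_counts.items() if n > noise_floor}

def choose_expression(summary: dict[str, int]) -> str:
    """Controller stand-in: pick a facial expression from total spike activity."""
    total = sum(summary.values())
    if total > 80:
        return "grimace"
    if total > 20:
        return "frown"
    return "neutral"

raw = {"palm": 45, "thumb": 50, "wrist": 1}
print(choose_expression(summarize_regions(raw)))  # "grimace" for heavy pressure
```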
Easy fixes
A lot of the details of how the system operates were figured out empirically. For example, the researchers applied the amount of pressure that registers as pain in human skin and measured how frequently their sensors generated spikes in response. That spike frequency was then set as the threshold for sending a pain signal to the higher control system and for triggering any reflex responses to excessive pressure. A lot of the more elaborate responses will ultimately depend on how these higher-level systems are programmed. For example, it’s easy for the system to generate a signal that indicates damage at a specific location in the skin; how the overall system responds to that damage isn’t specified by the skin itself.
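The calibration step described above amounts to measuring spike rates at a pain-level pressure and adopting that rate as the threshold; here is a sketch under those assumptions, with invented numbers.

```python
# Sketch of the empirical calibration: apply the pressure humans report as
# painful, measure how fast the sensors spike, and use that rate as the
# pain threshold for the higher-level system.
def calibrate_pain_threshold(measured_rates_hz: list[float]) -> float:
    """Use the mean spike rate observed at pain-level pressure as the threshold."""
    return sum(measured_rates_hz) / len(measured_rates_hz)

# Hypothetical spike rates recorded while pressing at a pain-level force.
rates_at_pain_pressure = [310.0, 295.0, 322.0]
pain_threshold_hz = calibrate_pain_threshold(rates_at_pain_pressure)
print(f"pain threshold ~ {pain_threshold_hz:.0f} Hz")
```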
But the team did make it easy to repair things if damage occurs. The skin is designed to be assembled from a collection of segments that can snap together using magnetic interlocks. These automatically link up any necessary wiring, and each segment of skin broadcasts a unique identity code. So, if the system identifies damage, it’s relatively easy for an operator to pop out the damaged segment and replace it with fresh hardware, and then update any data that links the new segment’s ID with its location.
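That repair bookkeeping boils down to a table linking segment IDs to physical locations, updated whenever a segment is swapped; here is a toy version with invented IDs and coordinates.

```python
# Sketch: each skin segment broadcasts a unique ID, and the system keeps a
# table linking IDs to physical locations. Swapping a damaged segment just
# means updating that table.
segment_map = {
    "SEG-0041": (0, 0),   # (row, column) position on the robotic arm
    "SEG-0042": (0, 1),
    "SEG-0043": (1, 0),
}

def replace_segment(old_id: str, new_id: str) -> None:
    """Point the old segment's location at the freshly snapped-in segment."""
    location = segment_map.pop(old_id)   # damaged segment removed
    segment_map[new_id] = location       # new hardware inherits its spot

replace_segment("SEG-0042", "SEG-0107")
print(segment_map)  # SEG-0107 now covers position (0, 1)
```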
The researchers call their development a neuromorphic robotic e-skin, or NRE-skin. “Neuromorphic” as a term is a bit vague, with some people using it to mean a technology that directly follows the principles used by the nervous system. That’s definitely not this skin. Instead, it uses “neuromorphic” far more loosely, with the operation of the nervous system acting as an inspiration for the system.
This is clearest in the case of the positional information. The nervous system actually maintains a map of the body and links sensory inputs to locations on that map. Biology uses nothing at all like the NRE-skin’s approach of encoding positional information in the properties of the activity spikes themselves. So, this system is more biology-inspired than it is a model of actual biology.
It also falls a bit short of biology in its current implementation, in that all it senses is pressure. Actual skin can process a variety of different sensory inputs, including things like temperature, irritants, and more. These could all potentially be added to something like NRE-skin, but it would require a parallel processing system to keep the additional signals from getting intermingled with the ones from the pressure-sensitive hardware.
All that said, spiking neuromorphic processors can host neural networks and are far more energy-efficient than conventional hardware when doing so. So, even with its limitations, this seems like an area of research worth exploring.
PNAS, 2025. DOI: 10.1073/pnas.2520922122 (About DOIs).
John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.