
Source: Xinhua via Alamy Stock Photo
A quiet economic subsector is emerging around humanoid robots, and it’s already experiencing a variety of cybersecurity challenges.
In case large language models (LLMs) don’t wipe out enough jobs, organizations in the US and Asia are currently working toward replacing manual laborers too, with machines that look and move like people but won’t demand wages. Fortune tellers at Morgan Stanley, Bank of America, and elsewhere project that humanoid robots will inevitably get cheaper to manufacture over time — though you can already buy a Unitree R1 today for $5,000 — and that, as a result, somewhere between tens of thousands and hundreds of millions of them could be in the world by 2050.
Joseph Rooke, director of risk insights at Recorded Future’s Insikt Group, points out that "nations are clearly watching this space. Just look at China’s latest 15th Five-Year Plan. It specifically calls out ‘embodied AI’ as a sector it wants to lead in."
Further to the point, in the past five years, more than 5,000 patents in China alone have mentioned the term "humanoid." And a recent report from Recorded Future documents several suspected nation-state espionage campaigns against the robotics industry, dating back to the fall of 2024.
More than the risks to manufacturers, though, analysts are worried about the potentially sci-fi consequences of attacks against the humanoids themselves — attacks that have already proven feasible, if not downright easy to pull off.
Cyberespionage Against Robotics Organizations
Though in-depth data is missing, the robotics industry attacks of recent months appear to be similar in nature. Rooke reports that, so far, "most cyber activity touching the robotics sector isn’t driven by some exotic, robot-specific tradecraft. It tends to look like the same state-linked intrusion activity we track across other advanced manufacturing and high-value technology industries."
The malware is the same, too: commodity open source stealers and remote access Trojans (RATs), useful for stealing sensitive intellectual property (IP). Since the fall of 2024, Rooke and his colleagues have picked up on three or four long spells of malicious activity involving Russia’s Dark Crystal RAT (DcRAT), and six quick bursts of AsyncRAT. Other known malware involved in robotics industry cyberattack campaigns includes XWorm, PrivateLoader, and the Havoc framework.
"I wouldn’t be surprised if threat actors are also positioning themselves inside supply chains," Rooke says. "It would be an obvious move, and Recorded Future has seen this with the targeting of the semiconductor and advanced electronics industries."
Vulnerabilities in Robots
That countries and companies are quietly stealing from one another may be of interest to those involved, but the real threats to everyone else are the quiet risks embedded inside the humanoid robots themselves.
Nobody has been more active in demonstrating these risks than Víctor Mayoral-Vilches, founder and chairman of Alias Robotics. He and his colleagues have been sounding alarms over Unitree, a Chinese vendor currently in a league of its own when it comes to selling humanoid robots for remarkably affordable prices. In a variety of experiments, they’ve not only rooted Unitree’s robots, but also figured out how to worm their way into rooting any number of them within Bluetooth range. (They also proved that internet-connected bots send a variety of system data to Unitree’s servers in Asia, without asking for users’ consent.)
From his home laboratory, with a Unitree robot hanging in the background, Mayoral-Vilches warns Dark Reading that "if you were to mention a CVE to the average robotic company, they would say ‘CV what?’ They don’t even know the terms, they don’t know how it works, they don’t know the standards. In most cases, they are still getting started with cybersecurity."
In theory, he says, "people love it. Robot cyber security is super cool. Yet companies are not taking action." And that’s not even the worst part, because even if they were to prioritize it, the process of sufficiently securing a humanoid from cyber threats today is exceptionally difficult.
Why Robots Can’t Really Be Secured Yet
The way Mayoral-Vilches puts it, "a robot is a system of systems, a network of networks."
He explains that "a robot is composed of sensors that perceive the world, actuators that help the robot make a physical change in the world, and then a computation system which grabs the information from these sensors, digests it computationally, and then transfers that back into the actuators for making a physical change. A robot, by definition, needs to have all those three elements."
To state the obvious, then, a top priority for any developer of humanoid robots is going to be the speed of that control loop — typically no more than a millisecond per cycle. "In an IT system, if a package arrives 100 milliseconds late, then you have a delay in your application, but that’s it. Nobody falls, nobody crashes, nobody dies. In a robotic system, all those things can happen," Mayoral-Vilches points out.
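The sense-compute-act loop Mayoral-Vilches describes, and its millisecond budget, can be sketched in a few lines. This is a hypothetical illustration, not any vendor's code; the sensor, controller, and actuator functions are placeholders:

```python
import time

CONTROL_PERIOD_S = 0.001  # the ~1 ms cycle budget cited in the article


def read_sensors():
    # Placeholder: a real robot would poll IMUs, joint encoders, cameras
    return {"joint_angle": 0.0}


def compute_command(state):
    # Placeholder controller: digest sensor data into an actuator command
    return {"torque": -0.5 * state["joint_angle"]}


def drive_actuators(cmd):
    # Placeholder: write the command out to motor drivers
    pass


def control_cycle():
    """Run one sense-compute-act cycle; report whether it met its deadline."""
    start = time.perf_counter()
    state = read_sensors()
    cmd = compute_command(state)
    drive_actuators(cmd)
    elapsed = time.perf_counter() - start
    # In IT, a missed deadline means lag; in robotics it can mean a fall
    return elapsed <= CONTROL_PERIOD_S


print("deadline met:", control_cycle())
```

Anything added to the body of that loop — parsing, logging, or, as discussed below, cryptography — eats directly into the same budget.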
And herein lies the rub. Most recently in artificial intelligence (AI), but arguably in any emerging category of technology, developers inevitably sacrifice security for speed to please casual users and investors. In robotics, if 100 milliseconds can be the difference between normalcy and a fall, crash, or even a safety risk to humans, then speed is itself critical to security.
Yet the essential ingredients for securing data communications — robust authentication and encryption — by their very nature slow down a control loop, especially when they’re stacked on top of the other necessary middleware in a robotics system. So where is the sacrifice going to come from?
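The tradeoff can be made concrete with a rough, illustrative measurement: even message authentication alone (here, Python's standard-library HMAC-SHA256 over a small packet) adds microseconds per message, and real robot security middleware stacks considerably more on top:

```python
import hashlib
import hmac
import os
import time

KEY = os.urandom(32)        # shared secret for this illustration
payload = os.urandom(256)   # stand-in for one sensor or command packet

N = 10_000
start = time.perf_counter()
for _ in range(N):
    # Sender tags the packet, receiver recomputes and verifies the tag
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    ok = hmac.compare_digest(tag, hmac.new(KEY, payload, hashlib.sha256).digest())
per_msg_us = (time.perf_counter() - start) / N * 1e6

print(f"authentication overhead per message: ~{per_msg_us:.1f} microseconds")
```

A few microseconds per message sounds negligible until it is multiplied across every sensor stream, every cycle, inside a one-millisecond loop — which is why vendors are tempted to skip it.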
Most vendors have landed on the same solution: access control and prayer. "Increasingly, robotic systems tend to be more and more careful about how they engage in communications with [entities] outside of them. But the moment that you open the robot’s app and just look around at what’s inside of it, then it’s just transparent and clear," Mayoral-Vilches says.
To help developers move past amateur, bolted-on IT security solutions, Mayoral-Vilches and colleagues are working on a Secure Robot Operating System (SROS) — security extensions and tooling built on top of the Robot Operating System (ROS), a framework widely used in humanoid robotics software today. "And I can tell you that SROS actually builds upon technologies that are, on their own, flawed. So if you know about robotic architectures and cybersecurity overall, it doesn’t take very much effort to break down those layers," he admits.
"We are still very, very immature in the field of cybersecurity and robotics," Mayoral-Vilches concludes. To hit an even minimum acceptable standard, "we need to embrace and adopt core principles of the cybersecurity space, such as zero-trust architectures, making sure that we comply with basic access control within a robotics system, and also outside of the robot, across all of its interactions. We’re not there yet."
About the Author
Nate Nelson is a writer based in New York City. He formerly worked as a reporter at Threatpost, and wrote "Malicious Life," an award-winning Top 20 tech podcast on Apple and Spotify. Outside of Dark Reading, he also co-hosts "The Industrial Security Podcast."