
Artificial intelligence (AI) integration poses security, governance, and data privacy risks — challenges that will only increase in operational technology (OT) environments. A new government advisory seeks to provide guidance, but it omits some key points.
The Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency, and the Australian Signals Directorate’s Australian Cyber Security Centre published a joint government advisory that details four key principles for OT owners and operators of critical infrastructure: understand AI, assess AI use in OT, establish AI governance, and embed safety and security.
The need for this guidance highlights how difficult it is, and will be, to use AI safely and effectively in OT environments. That’s why CISA’s No. 1 principle — to understand AI — is so important, says Floris Dankaart, lead product manager at NCC Group. Best practices for OT environments "heavily depend on the type of AI and what we mean by AI," he says.
"Many LLMs [large language models] and AI agents are nondeterministic, meaning they will use new seeds every time and, through that, produce a different response," Dankaart tells Dark Reading. "This contrasts sharply with OT systems built for stability and predictability."
Trust Issues on the Factory Floor
The mismatch introduces challenges like model drift, lack of explainability, and new attack surfaces, Dankaart adds, echoing CISA’s concerns. Transparency and trust will be major obstacles for AI integration in OT environments. Organizations should be highly conservative when deploying AI in OT environments, especially nondeterministic models like LLMs or agents, because their unpredictability conflicts with OT’s need for stability and safety, he advises.
Although CISA’s new guidance is directionally correct, the challenge is that most OT environments lack the trust foundation required to follow it, says Integrity Security Services CEO David Sequino.
AI depends on trustworthy device data, but many OT systems cannot authenticate firmware, updates, or sensor outputs, which makes AI decisions inherently risky, Sequino warns. He says he has already observed widespread gaps in device identity, code signing, and life cycle control.
Building trust while safely using AI to enhance OT environments requires balance. Establishing cryptographic identity during manufacturing and having signed and attested firmware and updates to ensure AI models receive valid data are a few essential elements, Sequino notes. Automated life cycle governance for credentials and cryptographic assets could help reduce manual errors and ensure long-term integrity, he adds.
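As a rough illustration of the kind of check Sequino describes, the sketch below verifies a firmware image against a vendor signing key before a device's data is treated as trustworthy. It assumes an Ed25519 signing scheme and Python's third-party cryptography package; the names and the self-generated key are hypothetical stand-ins, not details from the advisory.

```python
# Illustrative sketch: sign and verify a firmware image so that only
# attested devices feed data to AI models. Assumes Ed25519 keys and the
# third-party "cryptography" package; all names here are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def firmware_is_trusted(public_key: Ed25519PublicKey,
                        firmware_image: bytes,
                        signature: bytes) -> bool:
    """Return True only if the signature over the firmware image verifies."""
    try:
        public_key.verify(signature, firmware_image)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    # In production the private key stays with the vendor's signing service;
    # it is generated here only to keep the example self-contained.
    vendor_key = Ed25519PrivateKey.generate()
    device_trust_anchor = vendor_key.public_key()

    firmware = b"plc_gateway_firmware_v2.4"   # stand-in for a real image
    signature = vendor_key.sign(firmware)

    print(firmware_is_trusted(device_trust_anchor, firmware, signature))                  # True
    print(firmware_is_trusted(device_trust_anchor, firmware + b"tampered", signature))    # False
```

In a real deployment the public key would be anchored in device hardware at manufacturing rather than handled as an ordinary file, which is the point Sequino makes about establishing identity early in the life cycle.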
To that end, implementing supply chain verification, including cybersecurity bills of materials and software bills of materials, is important to help prevent tampered components from injecting false data into AI systems, Sequino says.
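One simple way to picture that verification is comparing installed components against the hashes an SBOM records for them. The sketch below assumes a CycloneDX-style JSON layout; the component names and contents are placeholders for demonstration, not references to any specific product.

```python
# Illustrative sketch: flag installed components whose bytes no longer match
# the hashes recorded in an SBOM. The field names ("components", "hashes",
# "alg", "content") follow the CycloneDX JSON format; the example data is
# made up for demonstration.
import hashlib

def find_mismatches(sbom: dict, installed: dict[str, bytes]) -> list[str]:
    """Return component names whose installed bytes don't match the SBOM hash."""
    mismatches = []
    for component in sbom.get("components", []):
        recorded = {h["alg"]: h["content"] for h in component.get("hashes", [])}
        blob = installed.get(component["name"])
        if blob is not None and "SHA-256" in recorded:
            if hashlib.sha256(blob).hexdigest() != recorded["SHA-256"]:
                mismatches.append(component["name"])
    return mismatches

if __name__ == "__main__":
    modbus_lib = b"libmodbus build 3.1"
    hmi_plugin = b"hmi plugin build 7"
    sbom = {
        "components": [
            {"name": "libmodbus", "hashes": [
                {"alg": "SHA-256", "content": hashlib.sha256(modbus_lib).hexdigest()}]},
            {"name": "hmi-plugin", "hashes": [
                {"alg": "SHA-256", "content": hashlib.sha256(hmi_plugin).hexdigest()}]},
        ]
    }
    # Simulate a tampered plug-in shipped to the plant.
    installed = {"libmodbus": modbus_lib, "hmi-plugin": hmi_plugin + b" (modified)"}
    print(find_mismatches(sbom, installed))   # ['hmi-plugin']
```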
Operators Left to Pick Up the Pieces
OT operators face enough hurdles without adding AI into the mix, particularly when it comes to vendor and integrity maturity. Systems are not always delivered with the hardening, segmentation, or documentation operators expect, and the burden falls on small security teams already juggling legacy assets and constrained maintenance windows, explains Sam Maesschalck, lead OT cybersecurity engineer at Immersive.
"Embedding AI into that environment adds complexity that must be carefully managed," he says.
AI is meant to alleviate work pressures, but a lack of trust introduces problems for the operators just as much as the machines. The human factor should not be underestimated, Maesschalck says. Introducing AI that can drift, hallucinate, or fail silently only increases the burden on operators, he adds.
Misfired alarms could affect processes such as dosing, pressure management, or machine torque, he adds. And early alerts may be riskier than delayed alerts if they can’t be immediately understood or trusted.
How Are Attackers Using AI?
On top of that, OT personnel learn how to operate in physically risky environments. But AI integration will introduce new skills gap challenges that could affect threat detection and response. Without significant cross-training, OT personnel may struggle to troubleshoot AI-powered systems, says Dankaart — and attackers could take advantage of that scenario.
"Attackers could exploit this by masking malicious activity so that operator screens appear normal, even during an active attack," Dankaart says. "This has already happened pre-AI."
Deep insight into how attackers use AI is one vital element the guidance overlooks. But it is an emerging and serious concern, emphasizes Dankaart, highlighting the first reported case of using LLMs to discover vulnerabilities and the first AI-orchestrated cyber-espionage campaigns.
Attackers’ new uses for AI highlight how defenders increasingly require multiple lines of defense across prevention, detection, response, deception, and recovery.
"This signals a paradigm shift — an arms race where threat actors are leveraging AI at scale to find and exploit zero-day vulnerabilities, while defenders are still catching up," he says. "As OT systems begin to incorporate AI, they risk becoming targets themselves."
Cloud Creates Even More Concerns
Cloud-dependent AI will prove particularly challenging, Maesschalck says. Most OT networks cannot support continuous outbound connectivity or vendor-controlled update paths — and many never will, he adds. That conflicts with how the AI life cycle works, even if AI is deployed locally with "tightly defined data flows."
"AI models need periodic restraining or verification to remain accurate, while OT processes evolve slowly over long asset lifetimes," Maesschalck tells Dark Reading. "A model can become misaligned with reality even when no updates occur."
Deeper life cycle challenges include a lack of vendor support, keeping a model valid and supportable over decades, and maintaining the skills needed to retrain it.
Is CISA’s Guidance Difficult to Implement?
Although CISA’s four key principles come with challenges, the experts agree they are necessary. Ease of implementation will vary across organizations, especially when comparing small-to-midsize businesses with large enterprises. More concerning, following the guidance will require significant investment in skills and processes that are scarce in the OT space, says Dankaart.
"For smaller organizations, this could feel unrealistic and risk either stifling innovation or introducing new vulnerabilities if shortcuts are taken," he says.
Despite all the hardships, one area where AI can add value to OT environments with relatively low risk is anomaly detection. This can be achieved using traditional machine learning in passive monitoring systems, such as those leveraging port monitoring for network detection and response, Dankaart explains.
"These approaches don’t interfere with core OT operations, but they do minimize the risk of introducing new attack vectors and can significantly strengthen cybersecurity posture when combined with a defensible architecture," he says.
AI is just the latest technology shift to potentially hinder OT operations. OT is still recovering from the side effects of rapid IT/OT convergence, Maesschalck notes. Lessons learned may serve organizations well during the AI push.
"AI presents a similar risk: strong upside but the potential to embed long-lived architectural debt if organizations adopt it before fixing existing hygiene and segmentation weaknesses," he says.