
Source: Age Foto Stock via Alamy Stock Photo
A new backdoor uses an OpenAI API for command-and-control (C2) communications to covertly manage malicious activities within a compromised environment, demonstrating a unique way attackers can abuse generative AI services and tooling.
Researchers from Microsoft’s Detection and Response Team (DART) discovered a backdoor dubbed “SesameOp” in July after responding to a security incident in which threat actors lurked undetected in an environment for several months, according to a blog post published Monday by Microsoft Incident Response.
The covert backdoor, designed by threat actors to maintain persistence and allow them to manage compromised devices, is “consistent with the objective of the attack, which was determined to be long-term persistence for espionage-type purposes,” according to the post.
A unique aspect of SesameOp, however, is a component that uses the OpenAI Assistants API as a storage or relay mechanism to fetch commands, which the malware then runs. The API is a tool that allows developers to create custom AI assistants using Azure OpenAI models, enabling features like conversation management and task automation.
“Our investigation uncovered how a threat actor integrated the OpenAI Assistants API within a backdoor implant to establish a covert C2 channel, leveraging the legitimate service rather than building a dedicated infrastructure for issuing and receiving instructions,” according to the post.
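Microsoft has not published the implant's code, but the relay pattern it describes — the operator leaves commands in a hosted service, and the implant polls for them, runs them, and posts the results back — can be sketched against an in-memory stand-in for the hosted message store. All names here (`RelayStore`, `poll_once`) are hypothetical, purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class RelayStore:
    """In-memory stand-in for a hosted message store (e.g., an Assistants
    thread): the operator appends commands, the implant appends results."""
    commands: list = field(default_factory=list)
    results: list = field(default_factory=list)

def poll_once(store: RelayStore, run) -> int:
    """Fetch all pending commands, execute each, post the output back.
    Returns the number of commands processed in this polling pass."""
    pending, store.commands = store.commands, []
    for cmd in pending:
        store.results.append(run(cmd))
    return len(pending)

# Operator side: drop two "commands" into the shared store.
store = RelayStore()
store.commands += ["whoami", "hostname"]

# Implant side: one polling pass, with a harmless echo in place of execution.
processed = poll_once(store, run=lambda cmd: f"ran:{cmd}")
```

The point of the pattern is that all traffic flows to a legitimate, widely used service, so the polling looks like ordinary API usage rather than contact with attacker-owned infrastructure.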
The attacker also employed other “sophisticated techniques” in its use of the API to secure and obfuscate C2 communications. These included compressing payloads to minimize size and using both layered symmetric and asymmetric encryption to protect command data and exfiltrated results.
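Microsoft does not name the specific algorithms, but the described pipeline — compress the payload, apply a symmetric layer, and protect that layer's session key with an asymmetric layer — can be illustrated with standard-library stand-ins. The SHA-256 keystream below is a deliberately insecure toy substitute for a real symmetric cipher such as AES, and the key wrap is likewise symbolic (in the real backdoor that layer would be asymmetric, e.g. RSA, so only the operator could recover the key); nothing here is cryptographically sound.

```python
import hashlib
import os
import zlib

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """XOR with a SHA-256 counter keystream -- a stand-in for a real
    cipher. XOR makes the function its own inverse."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def pack(result: bytes, wrap_key: bytes) -> bytes:
    """Compress, encrypt with a fresh session key, then wrap that key."""
    session_key = os.urandom(32)
    compressed = zlib.compress(result)          # 1. shrink the payload
    body = toy_cipher(compressed, session_key)  # 2. symmetric layer
    wrapped = toy_cipher(session_key, wrap_key) # 3. key-wrap layer (stand-in)
    return wrapped + body

def unpack(blob: bytes, wrap_key: bytes) -> bytes:
    """Reverse the three layers: unwrap the key, decrypt, decompress."""
    session_key = toy_cipher(blob[:32], wrap_key)
    return zlib.decompress(toy_cipher(blob[32:], session_key))
```

Compressing before encrypting matters for both stealth and correctness: smaller blobs blend into normal API traffic, and ciphertext is effectively incompressible, so the order cannot be reversed.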
Deliberate Misuse of OpenAI API
Microsoft researchers provided a deep dive into the attack and misuse of OpenAI in the blog post. After DART responded to the July incident, its investigation found a “complex arrangement of internal web shells,” each of which was responsible for running commands relayed from persistent, strategically placed malicious processes, according to the post.
These processes leveraged multiple Microsoft Visual Studio (VS) utilities that had been compromised with malicious libraries, “a defense evasion method known as .NET AppDomainManager injection,” according to the post.
It was while hunting across other VS utilities loading unusual libraries that investigators discovered additional files that could facilitate external communications with the internal web shell structure, including SesameOp. The overall infection chain of the backdoor comprises a loader (Netapi64.dll) and a .NET-based backdoor (OpenAIAgent.Netapi64) that leverages OpenAI as a C2 channel, according to the post.
“The dynamic link library (DLL) is heavily obfuscated using Eazfuscator.NET and is designed for stealth, persistence, and secure communication using the OpenAI Assistants API,” according to the post. “Netapi64.dll is loaded at runtime into the host executable via .NET AppDomainManager injection, as instructed by a crafted .config file accompanying the host executable.”
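.NET AppDomainManager injection is driven by two documented runtime settings in the host executable's .config file, which point the CLR at an attacker-supplied assembly to load before the host's own code runs. A minimal illustrative example follows; the assembly version, token, and type name are hypothetical, not the values used by SesameOp:

```xml
<configuration>
  <runtime>
    <!-- Directs the CLR to instantiate an AppDomainManager from this
         assembly when the host executable starts. -->
    <appDomainManagerAssembly value="Netapi64, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
    <appDomainManagerType value="ExampleNamespace.ExampleManager" />
  </runtime>
</configuration>
```

Because the host executable is a legitimate, often signed binary, this technique lets the malicious DLL run under a trusted process name, which is why it is classed as defense evasion.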
Disclosure and Mitigation
Microsoft’s DART informed OpenAI of its findings, and the two companies jointly investigated this misuse of the API. They found that it did not represent a vulnerability or misconfiguration of the tool, “but rather a way to misuse built-in capabilities of the OpenAI Assistants API,” which itself will be phased out in August 2026, according to the post.
OpenAI has since identified and disabled an API key and associated account believed to have been used by the threat actor as part of SesameOp. “The review confirmed that the account had not interacted with any OpenAI models or services beyond limited API calls,” according to Microsoft Incident Response.
Microsoft and OpenAI said they will continue to collaborate to better understand, and disrupt, threat actors’ misuse of emerging technologies. Indeed, APIs can hold the keys to the kingdom for generative AI services and applications, and attackers have been quick to misuse and abuse them since the inception of the technology.
In the meantime, Microsoft Incident Response offered several mitigations for defenders and recommended that organizations frequently audit and review firewalls and web server logs, and be aware of all systems exposed directly to the Internet.
They also should use a local firewall, intrusion prevention systems, and a network firewall to block C2 server communications across endpoints whenever feasible. “This approach can help mitigate lateral movement and other malicious activities,” according to the post.
Defenders also should review and configure perimeter firewall and proxy settings to limit unauthorized access to services, including connections through non-standard ports, according to Microsoft.
About the Author
Elizabeth Montalbano is a freelance writer, journalist, and therapeutic writing mentor with more than 25 years of professional experience. Her areas of expertise include technology, business, and culture. Elizabeth previously lived and worked as a full-time journalist in Phoenix, San Francisco, and New York City; she currently resides in a village on the southwest coast of Portugal. In her free time, she enjoys surfing, hiking with her dogs, traveling, playing music, yoga, and cooking.