November 11, 2025

Microsoft discovered a new type of side-channel attack on large language models (LLMs) that could let an attacker draw specific information on sensitive topics from LLMs despite having transport layer security (TLS) encryption.
In a Nov. 7 blog post, Microsoft said this type of attack poses real-world risks particularly to users under oppressive governments, where adversaries may target topics such as protests, banned material, election processes, or journalists' reporting.
“Unlike prior research from October that analyzed token lengths to infer content, Whisper Leak offers a genuinely new approach: it uses machine learning to analyze patterns in encrypted packet sizes and arrival times to categorize the user’s prompt’s general topic,” explained John Carberry, solution sleuth at Xcape, Inc.
Carberry said these attacks are highly effective, achieving 100% precision in identifying sensitive discussions, such as those involving money laundering or political dissent, on certain LLMs even when TLS encryption is used, a result that demands attention from security professionals.
“This makes it a realistic surveillance tool for anyone observing a network, from nation-states to local adversaries on public Wi-Fi,” said Carberry.
Jason Soroko, a senior fellow at Sectigo, added that compared with the mid-October reports, Whisper Leak is new in that it treats streaming traffic as a fingerprint for prompt topics, rather than just leaking token lengths or exploiting cache timing as InputSnatch did. Soroko said it shows that sequences of encrypted packet sizes and inter-arrival times let a passive observer classify whether a conversation matches a chosen topic, even when tokens are grouped and the channel is HTTPS.
“The work validates this across many major providers and reports accuracies above 98% for several services while noting that some batching strategies offer more resistance, but do not eliminate the signal,” said Soroko. “Security teams should care because this is a practical metadata attack that needs only a vantage point on the network and no break of TLS, letting adversaries triage who’s discussing sensitive themes in real-time.”
According to Soroko, Whisper Leak does not give attackers the words from the conversation: It lets a passive observer watch encrypted traffic metadata like packet sizes and the gaps between packets and then use those patterns to decide whether a session is about a particular topic. Both this signal and token length are metadata rather than plaintext, said Soroko, but Whisper Leak turns that metadata into a “yes or no” on sensitive themes instead of raw text.
“Token length leakage mostly tells you how long a reply is or hints at rough token boundaries, which is often weak by itself,” explained Soroko. “Whisper Leak looks at the entire sequence of sizes and timing in a streaming response and treats it like a fingerprint, which tends to correlate with topics even when tokens are grouped. So, the content remains hidden, yet the metadata can still expose that someone is discussing a sensitive category, which is enough to drive targeting, surveillance, or policy enforcement.”
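The fingerprinting idea Soroko describes can be sketched in a few lines. The following is an illustrative toy, not Microsoft's actual classifier: it reduces a trace of encrypted-packet metadata (sizes and inter-arrival gaps) to a small feature vector and assigns a topic by nearest centroid. All traces and centroids here are synthetic; a real attack would train on captured traffic.

```python
# Illustrative sketch (not Microsoft's model): classify an encrypted
# streaming session by its packet-size/timing "fingerprint".
from statistics import mean

def features(trace):
    """Reduce a trace of (packet_size, inter_arrival_gap) pairs to a
    small feature vector: mean size, mean gap, burst count."""
    sizes = [s for s, _ in trace]
    gaps = [g for _, g in trace]
    bursts = sum(1 for g in gaps if g < 0.01)  # near back-to-back packets
    return (mean(sizes), mean(gaps), bursts)

def nearest_topic(trace, centroids):
    """Nearest-centroid classifier over the feature space."""
    f = features(trace)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(f, c))
    return min(centroids, key=lambda topic: dist(centroids[topic]))

# Synthetic "training" centroids for two topic classes.
centroids = {
    "sensitive": (480.0, 0.006, 40),
    "benign":    (220.0, 0.030, 5),
}

# A newly observed trace: large packets, tight gaps.
observed = [(500, 0.005)] * 30 + [(450, 0.008)] * 10
print(nearest_topic(observed, centroids))  # -> sensitive
```

Note that the classifier never sees plaintext: only sizes and timings, which TLS does not hide, drive the decision.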
Microsoft offered the following tips to security teams:
* Avoid discussing highly sensitive topics over AI chatbots when on untrusted networks.
* Use VPN services to add an additional layer of protection.
* Opt for providers that have implemented mitigations.
* Deploy non-streaming models of LLMs from providers.
* Keep up to date on provider security practices.
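One class of provider-side mitigation is to decorrelate on-the-wire chunk sizes from token lengths by appending random filler to each streamed chunk before encryption. A minimal sketch of that idea, assuming a hypothetical chunk format (the `"p"` field is illustrative, not any provider's API):

```python
# Minimal sketch of the padding idea: append a random amount of filler
# to each streamed chunk so its encrypted size no longer tracks the
# token length. The "p" field name is hypothetical; receivers discard it.
import secrets
import string

def pad_chunk(text, max_pad=64):
    """Return the chunk with 0..max_pad random filler characters attached."""
    n = secrets.randbelow(max_pad + 1)
    filler = "".join(secrets.choice(string.ascii_letters) for _ in range(n))
    return {"text": text, "p": filler}

chunk = pad_chunk("hello")
print(len(chunk["p"]))  # varies per call, 0..64
```

As the article notes, batching and padding strategies reduce but do not eliminate the signal, since timing patterns can still leak.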
