I want security researchers to be able to disclose more of their bug bounty findings because greater transparency from companies is essential for advancing our collective security knowledge and practices – and because so many of these individuals have become dear friends over the span of my career. So, I want to address something with the new generation of researchers: some of your communication choices work against the very trust you need to make transparency possible.
The public spotlight of responsible disclosure often falls on companies that mishandle vulnerability reports or treat researchers poorly, and while those stories certainly exist (I’ve written extensively about how companies can communicate more effectively with researchers), another element of this formula deserves attention: the communication challenges that security researchers themselves bring to the table (and sometimes insist on amplifying).
After years of working with bug bounty programs and independent security researchers across web2 and web3, I’ve seen a pattern of communication practices that repeatedly undermine otherwise productive interactions. Even the most talented researchers can sabotage their own efforts through ineffective communication.
Here are three of the most common communication mistakes I see researchers make during the vulnerability disclosure process. I’ll also walk through how addressing these issues can lead to better outcomes for everyone.
1. Misaligned Expectations About Urgency and Impact
A common source of friction in disclosure communications is misaligned expectations about a vulnerability’s severity and the resulting resolution timeline. Some researchers submit reports with the implicit assumption that their discovery should be treated as a “drop-everything” emergency, using alarming or threatening language to push for immediate fixes without considering the other factors that inform a company’s risk decisions. When the company then triages the issue according to its established risk matrix (perhaps assigning it medium priority rather than critical), the researcher often feels dismissed or undervalued.
When this disconnect happens, it’s often because a researcher failed to:
Understand how companies evaluate risk in the context of their specific business and threat model
Recognize that engineering, infra, and product teams often have competing priorities and release cycles
Provide clear and objective evidence of their report’s impact rather than only theoretical worst-case scenarios
I routinely hear from bug bounty teams that the worst reports scream "CRITICAL VULNERABILITY!!!" but fail to show meaningful impact or include reproduction steps. This kind of report creates tension and skepticism rather than collaboration because it leads with alarm rather than specifics, making the security team’s job even harder. Uncertainty breeds resentment, not urgency, so even if you’re right, you’ve just decreased your chances of success.
My advice is to provide clear evidence of what the vulnerability does in their environment, walk through reproduction steps, and show the impact in terms that someone can act on. Your objective should be to help them understand where this fits in their risk landscape so they can prioritize appropriately, which is how you convince engineering teams to schedule fixes.
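To see why a finding you consider critical can legitimately land at medium, here is a minimal sketch in Python of the kind of likelihood-times-impact matrix a program might use internally. The categories, weightings, and thresholds are entirely hypothetical – real programs tune these to their own business and threat model – but the mechanics are representative.

```python
# Hypothetical illustration of how a program's internal risk matrix
# might triage a report. Categories and weightings are invented for
# this sketch; real teams calibrate them to their own threat model.

LIKELIHOOD = {
    "theoretical": 1,                # no demonstrated path to exploitation
    "requires_user_interaction": 2,  # exploitable, but needs a victim's help
    "remote_unauthenticated": 3,     # exploitable by anyone, anywhere
}

IMPACT = {
    "information_leak": 1,
    "account_takeover": 2,
    "funds_or_data_loss": 3,
}

def triage_priority(likelihood: str, impact: str) -> str:
    """Map likelihood x impact onto the priority buckets a security
    team actually schedules engineering work against."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 6:
        return "critical"
    if score >= 4:
        return "high"
    if score >= 2:
        return "medium"
    return "low"

# A scary-sounding bug that needs user interaction and leaks limited
# data lands at "medium" -- not because the team dismissed it, but
# because that is where their matrix puts it.
print(triage_priority("requires_user_interaction", "information_leak"))  # medium
```

The exact numbers don’t matter; what matters is that a matrix like this, not your subject line, decides where your report lands in the queue. Evidence that moves the likelihood or impact inputs is what moves the priority.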
2. Failing to Understand Audience and Process
Many researchers don’t realize that their reports go through multiple hands, each with different technical expertise and responsibilities. The initial triage team often has a broad but not deep understanding of all systems, and they’re primarily focused on two questions: "Is this a legitimate security issue?" and "Where should this report be routed next?"
Common researcher mistakes include:
Assuming everyone reading the report has identical technical context
Being either too technical without a summary or too vague without specifics
Sending lengthy, unedited videos that meander around the point
A bug bounty manager once told me, "I’ve seen researchers send 20-page reports with no clear indication of what the vulnerability actually is, or a two-sentence report with no reproduction steps. Both extremes make it incredibly difficult for us to evaluate the finding properly." Spoiler: Things that are hard to evaluate usually don’t get prioritized.
Structure your reports with a human workflow in mind. Your initial report needs to help the triage team understand where to route it; subsequent communications may go to engineers with deeper knowledge of the affected systems. Write so that frontline staff can relay your findings accurately to the next person in the workflow, recognize the actual level of decision-making authority at each step, and adapt your communication style as the report progresses through the internal team. Spoiler: IC engineers aren’t held responsible for negative headlines in the press, and bad coverage rarely affects their performance reviews. They’re the wrong audience for that overused, often ignorant threat (most organizations can safely weather a single, isolated news cycle as a bump in the road).
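As an illustration, here is a sketch of the layers a report might carry, expressed as a Python dataclass. The field names are my own invention, not any platform’s schema; the point is that the first few fields serve the triage team’s routing decision, while the rest serve the engineers the report gets routed to.

```python
from dataclasses import dataclass, field

@dataclass
class VulnReport:
    """Hypothetical report structure: each layer serves a different
    reader as the report moves through the company's workflow."""
    # For the triage team: enough to answer "is this legitimate?"
    # and "who should own it?" in under a minute.
    title: str
    one_paragraph_summary: str
    affected_component: str
    # For the routed engineering team: everything needed to confirm
    # and fix the issue without a round trip back to you.
    reproduction_steps: list[str] = field(default_factory=list)
    observed_impact: str = ""       # what you actually demonstrated
    theoretical_impact: str = ""    # kept clearly separate from the above
    suggested_remediation: str = "" # optional, but often appreciated
```

Separating observed from theoretical impact is deliberate: it lets triage route honestly and keeps engineers from discounting your whole report when one speculative claim doesn’t hold up.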
3. Letting Emotions Damage Professional Relationships
The bug bounty ecosystem can be emotionally charged. Researchers invest significant time and energy into their findings, and when they perceive a company as dismissive or unresponsive, frustration naturally follows. However, letting those emotions dictate your communication creates significant problems, including:
Using threatening or extortionate language
Publicly venting frustrations before allowing internal processes to finish
Deliberately overstating risk
Adopting a combative tone
These approaches signal to companies that a researcher may not be acting in good faith, which ironically leads to even slower response times and more conservative assessments. Companies talk to each other about problematic researchers just as researchers talk about problematic companies – and when organizations receive an angry or threatening report, it’s not uncommon for teams to loop in the lawyers, which immediately slows everything down. Security teams want to work with researchers who see themselves as partners in security, not adversaries trying to extort or blackmail them.
Responsible disclosure is a professional relationship that might span years, not a one-time transaction. Position yourself on the same side of the table as the company’s security team because you’re both working to solve the same problem. Frame your communications around shared goals such as protecting users and advancing the field’s collective knowledge. Stop telling yourself that your vulnerability alone could destroy their brand, because if they’ve hired me, it won’t.
Try communicating with something like this: "I discovered an issue that we should address together to protect your users. Here’s what I found and how we can work together to resolve it effectively."
Be patient but persistent, maintaining courteous communication even when frustrated. If you believe your report is being mishandled, explain your reasoning clearly and ask for clarification rather than escalating to threats or public disclosure. Remember that bug bounty teams are staffed by security professionals who generally want to do the right thing – treat them accordingly (hell, they’re probably just as annoyed as you are about the process/politics that allowed this issue to happen in the first place). When you demonstrate that you’re collaborating toward a mutual goal rather than extracting value from an adversary, you lay the foundation for the trust that enables companies to be more transparent about vulnerabilities, ultimately benefiting the entire security community. If you say this is your goal but your behavior demonstrates otherwise, it’s really hard to believe you.
The Communications Science Behind These Challenges
The challenges described above aren’t unique to security research – they’re amplified manifestations of well-documented phenomena in communications scholarship. Understanding the academic research behind these issues can help researchers approach disclosure more effectively.
Media Richness Theory and Bug Bounty Communications
Media Richness Theory, pioneered by Richard Daft and Robert Lengel in the 1980s, explains that different communication channels vary in their ability to convey complex information. Text-based bug reports represent a "lean" medium that lacks facial expressions, tone of voice, and immediate feedback – all elements that help prevent misunderstandings.
Research shows that lean media is particularly problematic when:
The subject matter is complex or ambiguous
Parties don’t share established relationships
There are potential conflicts of interest
All three conditions are typically present in bug bounty scenarios. This explains why misunderstandings about severity, impact, and intent are so common – the communication medium itself predisposes these interactions to confusion.
Cross-Cultural and Language Barriers
Bug bounty programs operate globally, connecting researchers and companies across cultural and linguistic boundaries. Communications research by Geert Hofstede and others demonstrates that cultural differences significantly impact how people interpret messages, particularly around:
Power distance (attitudes toward authority and hierarchy)
Directness vs. indirectness in communication
Tolerance for ambiguity and uncertainty
Time orientation (long-term vs. short-term thinking)
When a researcher from a culture that values direct communication interacts with a program managed by people from a culture that prioritizes harmony and indirect communication, misunderstandings are virtually inevitable without conscious adaptation.
Asynchronous vs. Synchronous Communication
Most bug bounty communications happen asynchronously, with significant time delays between messages. Research by McGrath and Hollingshead shows that asynchronous communication creates unique challenges:
It extends the feedback cycle, allowing misunderstandings to compound
It reduces context awareness between parties
It makes relationship-building more difficult
It can exacerbate attribution biases (assuming negative intent)
These factors help explain why frustration often builds over time in bug bounty interactions, particularly when reports remain unresolved for extended periods.
Building Better Security Partnerships for Greater Transparency
My ultimate goal in highlighting these communication issues is to promote more transparency in security. Companies should disclose more vulnerability reports publicly, sharing information that helps the entire security community learn and improve. But this kind of transparency requires trust, and that gets undermined when communications go poorly on either side of the relationship.
To be clear: this post focuses on researcher communication patterns, but companies bear equal responsibility for effective bug bounty communications. I’ve written extensively about how organizations can improve their side of this equation, including setting clear expectations, understanding metacommunication dynamics, and breaking down communication barriers that prevent productive researcher relationships. Both parties need to improve for this ecosystem to reach its full potential.
When researchers approach disclosures in ways that create adversarial relationships, legal concerns, or public relations anxieties, companies naturally become more hesitant to share information openly. Each negative interaction reinforces the corporate instinct to minimize disclosure rather than embrace it.
Successful bug bounty relationships aren’t just about finding vulnerabilities – they’re about establishing productive, long-term collaborations between researchers and companies. The most effective researchers understand this dynamic and communicate in ways that facilitate trust rather than undermine it.
Before submitting your next bug report, consider how your communication approach might be perceived. Are you presenting yourself as a professional security partner, or as someone the company should view with caution? The difference often determines not just the bounty amount, but whether your findings make a meaningful security impact at all – and whether they eventually become part of the shared knowledge that advances our field.
By avoiding these common communication pitfalls, researchers can significantly improve their effectiveness and build the trust necessary for greater transparency across the industry. After all, both parties share the same ultimate goal: making systems more secure for users through both fixes and knowledge sharing.
Want to catch future posts on security communication? We write regularly about the intersection of communication theory and security practice, including more on vulnerability disclosure, incident comms, and building influence in technical organizations. Subscribe to our monthly newsletter or follow along on LinkedIn.