08 Nov, 2025
running one
history
I’ve had the questionable honor of evaluating bug bounty submissions for one or two employers. It can be a fun activity, especially if the company isn’t large enough to warrant large-scale exploitation.
rewards
For such a program to work well, you must offer money. Even if the program is small and the rewards are not super large, there are large parts of the world where people make relatively little money; to them, your rewards are significant.
More than just a humanitarian concern is the fact that people just plain deserve to receive something for their effort. They went out of their way to help you; the least you can do is reciprocate.
If you or your boss are callous enough that you can’t be persuaded to care about people’s wellbeing, consider this: it is simply more effective. By offering money you will receive more reports, with more effort put into them, than you would if you did not. By offering money and actually paying out, you will retain more independent researchers (they will be more likely to submit more reports, over a longer time). This isn’t some feel-good “it just seems logical” argument (even though it is logical); I’ve seen it work in practice. I’ve seen the difference between programs that compensate researchers properly and respectfully, and programs that offer “swag”. A program that does not pay may as well not exist.
report evaluation
without intervention
Without intervention, you will receive a lot of low-quality reports. The text won’t let you replicate the bug, and the severity is probably “misjudged” (in much the same way a cleaning product is always the best ever stain remover).
To get anywhere, you must be strict. Over time, independent researchers who send you reports that get accepted and compensated will learn that you don’t suck. They’re going to invest more effort into you, which means you will receive more reports.
intervention
It is in everyone’s best interest for a report to be evaluated quickly. The independent researcher wants to eat (or buy beer, or do whatever it is they do with the money) and you want to get to fixing whatever it is they found.
What I eventually settled on is pretty simple:
- if you reference open-source code, you do so by directly linking to an online repository
- if you make a claim about an effect, you include a script to reproduce it without significant effort on my part (I run it and it works)
- if you cannot produce a script, you will include a video of you performing the entire exploit
open-source
By requiring a reference to an online repository instead of allowing code embedded in the report, you prevent people from making up code that doesn’t exist. It also tells you immediately what commit the vulnerability exists in.
Honestly most of what this serves to accomplish is weeding out liars, and for this it works well.
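For illustration, a reference I’d consider acceptable looks something like the following; the repository, commit hash, and line numbers are made up, the point is that the link is pinned to a specific commit and to specific lines:

```
https://github.com/example-org/example-project/blob/3f2a9c1e74b0d8c6a1f5e2b9d4c7a8e0f1b2c3d4/src/parser.c#L142-L158
```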
reproducing script
When the programs start out (i.e. when I inherit them), I inevitably spend 90% of my time trying to reproduce exploits from vaguely written reports. That’s one thing if you’ve worked in a place for a decade and know the entire codebase intimately, but entirely another if you are relatively new.
By requiring a script be included to reproduce the vulnerability without problems, you’re eliminating that time. You run it, it either produces the claimed result or it does not. Rejection takes a few minutes, maybe a bit longer if it involves compiling. When you receive 8 reports in one week (or in a day!), you will be happy about that timeline.
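To make that expectation concrete, here is a minimal sketch of the shape such a script could take. Everything specific in it is made up (the endpoint, the payload, the claimed effect); what matters is that it runs on its own and ends in an unambiguous pass or fail:

```python
#!/usr/bin/env python3
"""Hypothetical reproduction script: claims a path traversal in an export endpoint.

The target, payload and effect below are placeholders, not a real vulnerability.
"""
import sys
import urllib.request

TARGET = "http://localhost:8080/api/export"  # hypothetical vulnerable endpoint
PAYLOAD = "../../etc/passwd"                  # hypothetical crafted input


def main() -> int:
    url = f"{TARGET}?file={PAYLOAD}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read().decode(errors="replace")
    except Exception as exc:
        print(f"FAIL: could not reach target ({exc})")
        return 1

    # The claimed effect: the response leaks file contents it should not.
    if "root:" in body:
        print("OK: response contains /etc/passwd contents, claim reproduced")
        return 0
    print("FAIL: claimed effect not observed")
    return 1


if __name__ == "__main__":
    sys.exit(main())
```

A few minutes with a script like this settles the claim either way, which is exactly the point.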
video evidence
In a better time before LLMs, videos demonstrating an exploit from start to finish were a lot of effort to falsify but relatively easy to create for a benign actor. To make the report, they’d have to run the exploit anyway. Recording the screen while doing so is a small extra effort.
efficacy
Every described intervention is relatively easy to check (on the order of a few minutes). The video evidence takes the longest, because you can’t just match expected text output with actual text output. It’s still an order of magnitude less effort than having a vague paragraph describing something which might not exist.
Now that we have fancy autocompleters in the guise of AI producing longer, more nicely formatted, vaguely ominous reports, filtering nonsense is even more challenging.
Most of the described interventions still work, with the notable exception of video evidence. That is now low-effort enough to fake that asking for it has become meaningless.
By no longer accepting what used to be the least demanding form of incontrovertible evidence, you’re going to exclude some more people. This is unfortunate, but better than the alternative.
Asking for an online source code commit and line reference (as a direct link) and/or a script that reproduces the whole vulnerability without any requirements beyond those for setting up the environment (e.g. a C compiler is acceptable, having to install novelty software is not) is still entirely functional.
Any portal that requests those two can verify that the link works and points to the referenced code, and that the script is indeed a shell (or maybe Python) script of some sort that runs in a sandboxed environment with only acceptable dependencies. Anything that does not satisfy at least one requirement (preferably both, for open-source projects) can be rejected automatically, which filters all of the AI slop reports that I’ve seen to date.
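For what it’s worth, that portal-side check is small enough to sketch here. The link pattern, the file-extension check, and the function names below are simplifications of mine, not a description of any existing portal:

```python
#!/usr/bin/env python3
"""Sketch of the automated pre-screening described above (simplified, not a real portal)."""
import re
import urllib.request

# A direct, commit-pinned GitHub-style permalink with line anchors.
CODE_REF = re.compile(
    r"https://github\.com/[\w.-]+/[\w.-]+/blob/[0-9a-f]{7,40}/\S+#L\d+(?:-L\d+)?"
)


def link_is_valid(report_text: str) -> bool:
    """The report must contain a commit-pinned link, and that link must resolve."""
    match = CODE_REF.search(report_text)
    if not match:
        return False
    req = urllib.request.Request(match.group(0), method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except Exception:
        return False


def script_is_acceptable(filename: str, content: bytes) -> bool:
    """The attached reproducer must be a plain shell or Python script, not a binary blob."""
    if not filename.endswith((".sh", ".py")):
        return False
    try:
        content.decode("utf-8")
    except UnicodeDecodeError:
        return False
    return True


def accept_for_review(report_text: str, filename: str, content: bytes) -> bool:
    # Reject automatically unless both requirements are met.
    return link_is_valid(report_text) and script_is_acceptable(filename, content)
```

Running the reproducer itself still happens in a sandbox with only the agreed-upon dependencies; the check above only decides whether a human should look at the report at all.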
concluding
Fraud detection has once again become a little more challenging than it used to be. A lot of people tend to say that we are doomed and that our problems are inevitable. This is nothing new; you see it in fraud detection in any environment. It’s essentially never correct. Loads of corporations will tell you that it’s hard or impossible, but this is because any iota of extra friction is antithetical to their core belief of making more money.
If you are willing and able to take a balanced stance, small interventions in the form of verification and requirements that you probably should have had in place anyway will solve 90% of your problems. It is unfortunate that some amount of fraud or error will remain, but I think we can all agree it is nicer to have less of it.