All software has flaws. Sometimes those flaws lead to security vulnerabilities that put users at risk. Security researchers work to find those vulnerabilities. Responsible researchers report vulnerabilities to developers privately and give them time to release a fix; if the developer fails to act, the researcher releases the findings to the public. This is an established procedure. The question is: how should a developer react?
Companies that are told their products are vulnerable sometimes do not respond well. In some ways, that’s understandable.
- It’s embarrassing to be told that the product you have built — that you’re proud of — has flaws that are putting users at risk.
- It’s possible that the vulnerability becoming widely available will put your users at more risk.
- It’s quite likely that news of the vulnerability will negatively impact users’ trust in your product, which in turn will impact your company’s revenue. For small development companies, this can mean the difference between paying the bills and shutting up shop.
- Security researchers will often leverage the vulnerability in your product to promote themselves and their expertise. They make money based on their reputation.
So, it’s entirely human that developers — both large and small — sometimes meet vulnerability reports with defensiveness, aggression, and occasionally legal threats. Most developers, particularly those in the open source world, won’t react like this, but there have been enough angry reactions in the news recently that the issue is worth addressing. Sometimes these responses are valid: not all security researchers are responsible.
But an angry and defensive response doesn’t help anyone. The first thing to be aware of — and it can be a blow to the pride — is that security researchers don’t care about vendors and developers. It is not their job, or in their interest, to protect developers’ income or reputation.
They are interested in protecting users of vulnerable products — for both self-interested and altruistic reasons. And since developers should have the same concern, combative reactions are counterproductive: they don’t help the user.
The doctrine of responsible disclosure, whatever its flaws, exists to protect users. Once researchers have reported a vulnerability, the ideal response would be to fix it as soon as possible. But that’s not always the route developers and their companies take. Fixes can be complex and expensive, and if users don’t know about the flaw they’ll keep buying.
The economic advantage of implementing an immediate fix is often unclear. Responsible disclosure exists to combat the possibility that a developer will simply decide not to implement a fix, or will delay it for as long as possible.
In that case, releasing the data to the public is the only option security researchers have if they want to protect users.
A non-cooperative reaction from developers — not implementing a fix, a hostile response, a spurious legal threat — will likely result in disclosure, which has the potential to put users at risk as the vulnerability becomes more widely known. But researchers are balancing that possibility with the likelihood that the vulnerability is already known to criminals who have much darker motives than security researchers.
The reporting of vulnerabilities by security researchers is part of the Internet’s immune system. As stressful as it can be for developers, it’s an essential part of keeping users safe.