Are whistleblowers traitors to the company, a danger to corporate brand image, and a form of insider threat? Or are they an early warning safety valve that can be used to strengthen cybersecurity and compliance?
Two high-profile recent whistleblower cases confirm the arrival of whistleblowing in cybersecurity. These are Peiter (Mudge) Zatko and Twitter; and a False Claims Act (FCA) action against Penn State’s Applied Research Laboratory (ARL). We are not concerned with the detail of the cases here, but with the concept and implications of cybersecurity whistleblowing. Nevertheless, a brief look at these two cases is important.
Zatko was technically a member of the Twitter executive team, but generally considered the head of security. He was employed to improve security following a breach that affected the accounts of Biden, Obama, Gates, Bezos, Musk, and several celebrities. He found numerous issues, focusing on a lack of security visibility into the platform, and open access to private accounts for Twitter employees (explaining the earlier breach – employees were hacked, granting the hackers access to the celebrity accounts).
Zatko attempted to improve matters but was effectively ignored. Eventually he went public (became a whistleblower) and – in his own words – was “summarily dismissed”. At a subsequent Senate hearing, he was asked if better visibility would improve Twitter’s security. He replied, “Yeah, absolutely… But that hasn’t been prioritized over other projects such as increasing revenue or users.”
The Penn State case follows revelations by then-CIO Matthew Decker (now chief data and information officer at the NASA Jet Propulsion Laboratory) that Penn State’s ARL failed to abide by government requirements. He filed a lawsuit under the False Claims Act in October 2022. The lawsuit was unsealed on August 28, 2023.
B. Stephanie Siegmann posted on LinkedIn: “[the lawsuit alleges that] Penn State University committed cybersecurity violations, falsified records provided to the government, and mishandled Controlled Unclassified Information (CUI).”
The final outcome of these two whistleblowing incidents is yet to play out – but they are by no means the only examples of cybersecurity-related whistleblowing. In the summer of 2022, Aerojet Rocketdyne agreed to pay $9 million to settle another whistleblower-instigated FCA case. And in the summer of 2021, a former Facebook product manager, Frances Haugen, leaked internal Facebook documents to the Wall Street Journal – leading to the WSJ series known as The Facebook Files. The full collection of Haugen’s whistle-blown Facebook files is now available at https://fbarchive.org/.
What is clear is that whistleblowing has come to cybersecurity. It will not go away. What remains to be seen is how organizations will or should respond.
Whistleblowers have always existed. In the past, the primary motivations have been an ethical stance against wrongdoing, or a grievance or grudge against a person within the company or the company itself. Typically, it has pitted one person with limited knowledge against the corporation – and the power of the corporation has usually prevailed. Whistleblowers have been stigmatized as complainers, and their future employment prospects have been damaged. Shooting the messenger became a standard method of damage limitation.
This is no longer realistic within cybersecurity and the digitized business. The primary reason is that a single person, especially a member of IT or security, has deep and widespread insight into the operation of the company – and is often protected by law from retaliation by the enterprise.
The ethical motivation has been expanded by a legal requirement for compliance. Compliance is primarily designed to protect the customer, including end users. Compliance failures are sometimes caused by concentration on profit and disregard for customers; and ethical employees can object to this. IT and security staff are likely to have deep insight into such non-compliance.
Finally, the reward motivation has increased dramatically. Personal satisfaction has been complemented by financial reward. Apart from the FCA, we also have the SEC’s whistleblower program run by its Office of the Whistleblower. Together with providing the fullest possible confidentiality to the whistleblower, it also prohibits retaliation by the employer (potentially enforced by fines and injunctions). Most notably, however, it offers monetary awards to whistleblowers who provide information that results in SEC sanctions in excess of $1 million.
On August 4, 2023, the SEC announced an award of $104 million to seven individuals. On August 25, 2023, it announced an award of $18 million to a single whistleblower. By the end of 2022, the SEC had already paid out more than $1.3 billion in 328 awards to whistleblowers.
While boards may wish to continue shooting the messenger, this is no longer an effective approach.
Friend or foe; boon or bane?
The old view of the whistleblower is no longer sustainable. “It is an injustice to think of whistleblowers with valid complaints and concerns about cybersecurity issues as a threat,” suggests Claude Mandy, chief evangelist for data security at Symmetry Systems. “Ensuring that potential whistleblowers have a means for raising anonymous cybersecurity concerns to an independent body, without fear of repercussion, plays an essential role in corporate governance of an organization and reducing the surprise factor.”
In short, don’t fight whistleblowers – make use of them. This echoes the sentiment often credited to the fifteenth century Emperor Sigismund: “Do I not destroy my enemies by making them my friends?” Rather than castigating potential whistleblowers, the valid worries of concerned employees should be encouraged, heard, and acknowledged internally.
“A whistleblower,” says Igor Volovich, VP of compliance strategy at Qmulos, “is a canary in the mine. They are critical. They are essential for the continued safe operation of the mine.”
Retaliation is illegal, while ignoring the problem is dangerous. Dismissing concerns as mere over-sensitivity or personal ethics won’t work. It is no longer a question of ethics, but ultimately one of law. Exposing sensitive personal information will be considered unethical by many, but it is likely to be prohibited outright by numerous national and international laws.
The problem is complex. The board may not even be aware that the company is contravening legally required compliance regulations. Its focus is on profit, and it employs cybersecurity, IT, and legal teams to handle such cost centers while it concentrates on revenue. Pressured by a lack of resources from the board, compliance is often reduced to a checkbox exercise that can be skimped – the appearance may be compliance while the reality is not.
There are two fundamental dangers here: a lack of transparency and poor reporting (both in delivery and reception). Anderson Lunsford, CEO and founder at BreachRx, gives a hypothetical scenario that encompasses both issues: “Imagine a security analyst who alerts the CISO to a potential event that seems problematic, but the CISO decides to ignore it as it doesn’t appear particularly impactful. Fast forward a few months when it turns out that alert was related to a much bigger widespread data breach.”
The original warning was likely unrecorded so there is no transparency. The reporting was ignored (given a poor reception) and not delivered to senior management (poor delivery).
“Meanwhile,” continues Lunsford, “the SEC pursues action against the company and anyone personally they believe has violated its rules. Now the CISO, GC, and any other leader on the IR team that is perceived to be involved may end up in the SEC’s crosshairs. And the original security analyst could be enticed with a potential multi-million dollar payday to say the company knew about the problem months beforehand but ignored it.”
In the digital world, this danger is magnified many times by all the engineers who now have visibility into what is really happening within the company infrastructure. It is multiplied further by the potential for massive monetary reward for successful whistleblowing. The traditional ethical motivation is joined by greed while legal protections could shield political activism. These considerations raise a new question: should the whistleblower now be considered a new type of insider threat?
“There isn’t a simple answer. For many reasons, over time, organizations sometimes get loose with their interpretation of the rules. Whistleblowers keep things honest. That said, people can abuse that role as a disguise for an alternative agenda and provide false information,” comments Alex Janas, Field CTO, Security at Commvault.
“The US government has the Whistleblower Protection Act (WPA), Office of Special Counsel, and Office of Inspector General. This law and these orgs allow federal employees to disclose information,” he adds. “The trick is to not allow [activists] to use this term ‘whistleblower’ when they are clearly exploiting the system for some political statement.”
The entanglement of legal protection, monetary reward, compliance requirements, ethical and/or political motivation, and the increasing complexity of IT and security infrastructures means that the likelihood of whistleblowing is growing in pace with the potential cost of its effect.
Maximize the friend, minimize the foe
Clearly, whistleblowing must be given greater consideration than it has perhaps received in the past. One of the themes we have seen so far is that whistleblowing has the potential for good – to act as the early warning canary in the mine, to help keep the company honest, to surface unintentional non-compliance. The question, then, is how companies should maximize the friend and minimize the foe among whistleblowers.
“Can you eliminate whistleblowers? Probably not,” says Volovich. “But you can use them. If we consider that whistleblowing is a symptom of malfeasance or negligence, then I don’t think we’ll ever eliminate them. But we can consider whistleblowing as a transparency mechanism.” Rather than simply thinking of whistleblowers as a symptom of bad behavior, they should be treated as a vehicle for transparency, becoming a check on bad behavior and allowing corporate bad behavior to be corrected.
“The more transparency you have in your systems, the more credibility and integrity that you have in your reporting – both internally and externally – the less likely you are to incur the wrath of a whistleblower. Sunlight is the best disinfectant. That’s how you get rid of that internal malfeasance bacteria – you shine a light on it.” The whistleblower provides that light.
Key to this is an effective internal reporting mechanism. Nearly all companies have one, but not many companies make good use of it. “Companies may say we’re interested in internal reporting, but when you actually do report, you face retaliation,” he added. “When this happens, the only way to really report a serious issue is to use the mechanisms that exist beyond the company; that is, to take your concerns outside the organization.”
You should treat internal reporting as a welcome safety valve. “Encourage it,” continued Volovich, “but encourage it as early as possible and make sure these issues don’t fester to the point where people are forced to go outside. The primary cause of whistleblowing is internal malfeasance witnessed by somebody whose personal ethics say, ‘I can’t take it anymore.’”
Organizations must accept that whistleblowers exist and are here to stay. But they are symptoms of problems within the organization. Ignore them and the problems will grow until they explode. But listening to issues and – more importantly – responding quickly, positively, and sympathetically will limit the blast radius and ensure you have the most secure and compliant infrastructure possible.
“Bottom line,” reminds Lunsford, “the potential impact from whistleblowers has increased dramatically with the SEC rules, and companies are not yet treating whistleblowers with the same level of attention they give other threats. That’s a big mistake.”