RSA, the Security Division of EMC, has been in the news lately, and not in a good way. The first shoe dropped in March, when the company disclosed via press release that an unknown attacker, likely a state-sponsored actor, stole certain unidentified assets related to its SecurID product. Although RSA did not reveal exactly what the attackers stole, many observers speculated that these assets were the master token seed database. If true, this is about as serious a problem as RSA could have, because it allows attackers to potentially clone tokens.
My company uses RSA tokens, so the hack got my attention. In addition, as a SaaS-based security services provider, we supply many of our customers — banks, credit unions and insurance companies — with SecurID tokens for their own use. The compromise therefore potentially affected not just my company’s security, but that of our customers as well. Right after the first stories hit, our customers had lots of questions we could not answer: When did you know about this? What was the true extent of the compromise? What does it mean for us? Can we continue to rely on RSA tokens for their security? And the most important one: What are you doing about it?
Our impromptu answers were:
(a) We found out when you did, by reading the news; (b) We don’t know, but we will ask and maybe they will tell us; (c) We don't know; (d) We’re not sure; and (e) We’ll get back to you real soon with an answer.
As quickly as we could, we issued a security advisory to our customers indicating that we were investigating the issue and would update them soon when we knew more. In the meantime, a small team of stakeholders assembled: me, our CISO, and key members of our operations, customer support and finance teams. The team escalated to RSA, and we had a frank conversation. It didn’t yield much more than a generic RSA best practices document and a muffled refusal to tell us much more than we’d read already. We reviewed our security procedures, log files, and server protections. We also called several alternative suppliers, including Symantec’s Verisign division, to understand what it would take to switch if we needed to.
In the end, though, we determined that we had been doing most of the right things operationally, including (a) keeping the token records containing the token seed values (secret keys) offline; (b) hardening the SecurID server's operating system; (c) strictly limiting administrative access to it per RSA's guidance; and (d) monitoring the SecurID server for signs of fraud and abuse. We felt that the residual risk to our customers as a result of these measures was fairly low.
Then the other shoe dropped. Late last month, Lockheed Martin revealed that attackers had tried to break into the defense contractor using the materials stolen from RSA in March. That made the risks that RSA had characterized as “hypothetical” much more real.
To RSA’s credit, in response to the story the company offered to replace every customer’s SecurID tokens with new ones. But as with the previous disclosure, RSA did it via a press release. Once again, calls and tickets from our customers started rolling in. And once again, Perimeter was caught by surprise and without good answers. This time, though, we had already decided to replace all of our tokens with new ones, probably — but not necessarily — from RSA. We are working through that plan now, and have been keeping our customers informed.
I mention the preceding narrative not to complain (at least, not too much), and not to put a positive spin on our own actions (at least, not too much). I relate it because it provides the context for understanding what we can learn from the experience. I’ve observed three things about this incident: (1) even the most trusted technologies fail; (2) the incident illustrates what “risk management” is all about; and (3) customers should always come first. Let’s review each of these.
1. Even the most trusted technologies fail
Many of the world’s most valuable and sensitive secrets are protected, in part, by the strong authentication SecurID has long provided. RSA has been a trusted security brand for years. On a personal note, my own ties to the company are long-standing: I’ve been an RSA Conference co-chair for most of the last eight years, and I have many friends who work there.
For many of us, the SecurID token was a thing to be trusted, because it just worked, constantly. Certain things in life are constants: the sun rises and sets every day; the government collects taxes; everyone dies eventually; and the SecurID token changes its numbers every 60 seconds. That’s just the way it is. It is a tribute to RSA’s success that its tokens were so reliable and so trusted for so many years that they, in effect, faded into the background like the setting sun.
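That clockwork behavior — token and server independently deriving the same code from a shared secret seed and the time — is also why a stolen seed database is so serious. SecurID’s actual algorithm is proprietary and AES-based; the sketch below is not it, just a generic public analogue in the spirit of RFC 6238 (TOTP), with an invented seed value, showing that anyone holding the seed can compute the same codes the token displays:

```python
# A generic time-based one-time-password sketch (TOTP-style, per RFC 4226/6238).
# This is NOT SecurID's proprietary algorithm -- only a public analogue.
import hashlib
import hmac
import struct
import time


def totp(seed, t=None, step=60, digits=6):
    """Derive the current code from a shared secret seed and the clock."""
    if t is None:
        t = time.time()
    counter = int(t) // step                       # same value on token and server
    msg = struct.pack(">Q", counter)               # counter as 8-byte big-endian
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


seed = b"example-seed"                             # hypothetical seed value
# Token and "attacker with a cloned seed" agree, by construction:
assert totp(seed, t=1_000_000) == totp(seed, t=1_000_000)
```

The code changes every `step` seconds because the truncated counter changes; there is no secret in the algorithm itself, only in the seed — which is why the speculation that the seed database was taken rattled so many customers.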
So, hearing that RSA — RSA! — was seriously hacked was profoundly unsettling. It reminded us that nothing is one hundred percent secure. As security professionals, we are trained to remember that any single control is fallible. This particular incident reminded us of that fact like a bucketful of ice water to the face. Sometimes, as PJ O’Rourke once put it, “the shock of recognition is just a shock.”
2. The RSA token incident illustrates what “risk management” is all about.
Most security vendors conflate risk management with risk elimination: use our product to identify risks so that you can get rid of them. That works fine in clear-cut cases like vulnerability scanning, for example, where you can calculate risk scores and decide whether to patch a workstation based on your scoring thresholds. That strategy works less well in cases where you need to make a call in uncertain circumstances, where you do not possess the facts.
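The scan-and-score approach works precisely because every input is a number you possess. A minimal sketch of that kind of triage rule, with severities, asset weights, and a threshold that are entirely made up for illustration (they come from no real scoring system):

```python
# Toy patch-triage rule: score each finding, patch anything over a threshold.
# All numbers here (severities, weights, threshold) are invented for illustration.

PATCH_THRESHOLD = 7.0


def risk_score(severity, asset_weight):
    """Combine a vulnerability's severity (0-10) with asset criticality (0-1)."""
    return severity * asset_weight


def triage(findings):
    """Return only the findings whose score crosses the patch threshold."""
    return [f for f in findings
            if risk_score(f["severity"], f["asset_weight"]) >= PATCH_THRESHOLD]


findings = [
    {"host": "workstation-12", "severity": 9.8, "asset_weight": 0.9},  # critical asset
    {"host": "test-vm-3",      "severity": 9.8, "asset_weight": 0.2},  # low-value asset
]
to_patch = triage(findings)   # only workstation-12 crosses the threshold
```

The contrast with the RSA situation is the point: here the decision is mechanical because the inputs are known; there, the key inputs were unknowns.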
That’s what happened here. When news of the attacks broke, thousands of RSA customers had to engage in a Rumsfeldian exercise in assessing the risks: sizing up our knowns and unknowns. Understanding what was hypothetical, and what was real. In this case, we saw risk management in its purest form for what it is: the art of balancing customer interests, our own reputation, known risks, countermeasures, probability of attack, and uncertainty.
3. Customers should always come first
If I had to grade RSA’s responses, both to the initial hack and to the more recent revelations, I could not give them better than a D. RSA made three serious mistakes.
Rather than communicate with its customers privately first, RSA issued press releases, leaving resellers like Perimeter surprised and hanging. Rather than coming clean about the extent of the compromise, RSA was stunningly evasive about the secrets that were stolen, leaving customers uncertain about their true risk posture. And finally, rather than biting the bullet and replacing all of its customers’ tokens in March, RSA downplayed the severity of the compromise, forcing the company to backtrack when the attempted hacks on Lockheed came to light.
Making any one of these mistakes in isolation would be understandable. Making all three of them is much harder to understand. It gives the strong impression of a company trying to talk its way out of a big problem — on the cheap. To put it another way, RSA seemed to care more about protecting its profits than protecting its customers.
In the security industry, “trust” is a somewhat slippery term defined in terms ranging from the cryptographic to the contractual. Bob Blakley, a Gartner analyst and former chief scientist of Tivoli, once infamously wrote that “Trust is for Suckers.” What he meant by that is that trust is an emotional thing, a fragile bond whose value transcends prime number multiplication, tokens, drug tests or signatures — and that it is foolish to rely too much on it.
In the case of RSA, it’s not a stretch to say that the amount of trust customers are willing to extend to them has dropped precipitously. For us, regrettably, they are just another vendor now.