Whether it’s driven by the business units or the IT organization, every company wants to pull off new tech initiatives that create business impact. So we see new functionality. We think it’s cool. We introduce it.
…but then a user slips up because of some unforeseen gap in the system.
When that happens, suddenly we’re in reactive mode, because we didn’t work to figure out the risk until after the fact. The user’s mistake becomes the justification for new security technologies and a raft of new controls.
And then guess what?
We end up with a horrible user experience — witness the old-school PIN tokens that were really popular in the early 2000s, the often demanding two-factor authentication systems of today, along with finicky VPN connections and other day-to-day hassles.
As much as they all make sense to security pros, they spark inevitable friction with business users. You’d be hard-pressed to find someone who hasn’t opted for a riskier workaround just to get things done.
By default, this system sets us up for unsafe workarounds and introduces even more risk to the organization. It’s a vicious cycle, and a broken way to implement new technology.
Instead of feeling that little vein on your forehead pulsate when you find users sharing files through an unsecured external web service, maybe it’s time to take a deep breath and ask yourself a tough question: Did you really provide them with an efficient way to securely share that information?
User experience and security don’t need to be at such odds. In fact, security should help optimize user experience, and vice versa. It’s all about finding the right balance — and acknowledging that the onus is on the entire organization, not just the user.
Every new technology comes with a user experience, and for enterprises, part of creating a positive user experience is building in security with no more impact on the business process than the risk involved warrants. Does a user really need to log in 10 times a day using two-factor authentication to access her email? Every time the organization introduces a new tool or technology, do users immediately need to be aware of all its security implications?
Or should we first figure out a way to mitigate user-related risk before we roll out the new technology?
This question is more relevant than ever in today’s cloud-based world, in which we consume dozens of different technologies in pieces, from enterprise apps to web services to small productivity plug-ins. Yes, users still have an obligation to be aware of security risks and protocols, but it’s simply not sound practice to expect them to understand every potential security scenario at play.
The smarter we can be in striking the right balance up front, the smaller the user-awareness footprint will be, and the less we’ll have to teach the user.
Every business can benefit from staying ahead of the curve by using new tech strategically and by creating easier, more convenient ways for users to leverage the technology. But as security professionals, it’s on us to understand the business implications of those conveniences and the impact they may have.
In reality, if the organization as a whole approaches this problem the right way, aligning security posture with user experience before the tech is released, the company can implement its new technology initiative, mitigate risk and do it all while providing an awesome user experience.