SecurityWeek

Artificial Intelligence

AI in the Enterprise: Cutting Through the Hype and Assessing Real Risks

Introduced with security best practices in mind, AI can bring benefits to the enterprise without introducing risk beyond acceptable levels.

Most security professionals are acutely aware of the hype and buzz around AI. I can’t think of a day in recent memory that has gone by without the topic coming up. I’ve written several pieces in the past offering suggestions for how enterprises can see through the hype in order to have a more meaningful, substantive conversation around AI.

Lately, as I’ve spoken with customers and partners globally, I’ve noticed another area where some suggestions might be helpful. While some organizations have certainly implemented AI in various areas, the majority of those I’ve spoken with aren’t yet doing very much with it. This is by no means a criticism. Rather, many organizations around the world are, rightly, cautiously and carefully weighing the benefits AI may bring them against the risk it may introduce.

I believe this to be a healthy and wise approach: I don’t see a need for most organizations to dive into AI at an accelerated pace without fully understanding its impact. Beyond that, many people I’ve spoken with still see AI as a bit of a mystery, and for good reason. Before any meaningful assessment of whether, when, and how to incorporate AI can take place, enterprises need help seeing past that mystery.

Given this, how can enterprises move from a place of mystery to one where they can objectively evaluate where, and to what extent, it makes sense to invest in AI? There are many possible approaches. Based on the conversations I’ve had on the topic over the last year or two, here are seven points I think enterprises would find helpful to consider:

  1. Application is key: AI is a tool that needs to be applied to specific problems. In other words, general banter around AI, doom and gloom, scare tactics, and fear of missing out (FOMO) aren’t constructive and shouldn’t influence decisions around AI. If you find yourself in one of these discussions, push for substantive talk about which specific problems AI could be applied to. If you can’t get satisfactory answers, that should tell you quite a bit. If, on the other hand, the discussion centers on challenges you are actually looking to address, then applying AI to those challenges could potentially be of interest to you.
  2. Understand the implications: It may be tempting to dive in and experiment with new technologies, AI included. It is important to remember, however, that doing so may have security, privacy, legal, and regulatory consequences, among others. When looking to incorporate AI, work with teams across the organization to ensure that the impact and implications are well understood and mitigated at every level.
  3. Develop the appropriate policies: It would be unwise for an organization to allow the use of AI in a completely unsupervised and unregulated manner. This would obviously introduce an untenable amount of risk beyond the appetite of most enterprises. On the other hand, policies that are overbearing and overly restrictive will inhibit innovation, productivity, and effectiveness. Clearly there is a balance where a tolerable amount of risk can be accepted without adversely affecting creativity and problem solving. This is the target zone when developing policies around AI.
  4. Choose specific challenges: It is important to know what problems you want to solve with AI and to ensure that those problems are a good fit for AI. Aimlessly firing AI at a bunch of problems with the hope that it will fix something is not likely to yield good results. On the contrary, it is likely to chew through significant resources and may also introduce a fair amount of risk into the organization.
  5. Understand exposure: As with any technology, when leveraging AI, it is important to understand what data and resources may be exposed. With AI, however, additional care should be taken: its capabilities and risks are not as well understood as those of technologies the security community has worked with for longer. The potential for “unknown unknowns” to expose data and resources is higher for AI than for more traditional technologies.
  6. Understand the additional risk: While AI may have the attention and focus of much of the world these days, it is no different from other technologies in its potential to introduce additional risk to the enterprise. Some of that risk will come from the security, privacy, legal, and regulatory consequences discussed above; some will come from the exposure of data and resources also discussed above. Risk can take other forms as well: for example, the AI capabilities themselves may be abused, misused, and exploited.
  7. Measure continuously: AI, like many tools, is not a “set it and forget it” endeavor. It requires continuous iteration and improvement. Models need to be updated, trained, and honed. True positive and false positive rates need to be measured and kept within expected bounds. Telemetry needs to be collected for security monitoring, audit, and efficacy purposes, among others. Access to and use of AI needs to be controlled and monitored for abuse and misuse. Risk needs to be continually assessed to ensure that it remains within accepted levels. The list goes on, but one thing is clear: when an enterprise decides to leverage AI, it is a long-term commitment that involves embracing the whole product concept.
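To make point 7 concrete, here is a minimal sketch of what continuous measurement might look like in practice: tracking true and false positives for an AI-assisted detection and checking whether precision stays within an accepted threshold. The class, names, and threshold are illustrative assumptions, not any particular product’s API.

```python
# Sketch of "measure continuously": track true/false positives for a
# hypothetical AI-assisted detection and flag when precision drifts
# below an accepted level. All names and thresholds are illustrative.
from dataclasses import dataclass


@dataclass
class AiDetectionMonitor:
    min_precision: float = 0.80  # accepted risk appetite (hypothetical)
    true_positives: int = 0
    false_positives: int = 0

    def record(self, verdict_correct: bool) -> None:
        """Record one analyst-reviewed AI verdict."""
        if verdict_correct:
            self.true_positives += 1
        else:
            self.false_positives += 1

    @property
    def precision(self) -> float:
        total = self.true_positives + self.false_positives
        return self.true_positives / total if total else 1.0

    def within_risk_appetite(self) -> bool:
        return self.precision >= self.min_precision


monitor = AiDetectionMonitor()
for correct in [True, True, False, True, True]:
    monitor.record(correct)

print(f"precision={monitor.precision:.2f}, ok={monitor.within_risk_appetite()}")
# → precision=0.80, ok=True
```

In a real deployment, such a check would run on a schedule against analyst-labeled verdicts, feeding the telemetry and risk-assessment loops described above rather than a one-off script.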

Although AI may seem new, buzzy, and intimidating, it needn’t be. As I hope I’ve articulated, introducing AI should follow the same tried-and-true security best practices as introducing any new technology: choose specific challenges to tackle, understand the risk and exposure well, and govern the introduction and continued operation of AI with appropriate policies. By remaining focused on security best practices, the introduction of AI can bring benefits to the enterprise without introducing risk beyond acceptable levels.

Written By

Joshua Goldfarb (Twitter: @ananalytical) is currently Field CISO at F5. Previously, Josh served as VP, CTO - Emerging Technologies at FireEye and as Chief Security Officer for nPulse Technologies until its acquisition by FireEye. Prior to joining nPulse, Josh worked as an independent consultant, applying his analytical methodology to help enterprises build and enhance their network traffic analysis, security operations, and incident response capabilities to improve their information security postures. He has consulted and advised numerous clients in both the public and private sectors at strategic and tactical levels. Earlier in his career, Josh served as the Chief of Analysis for the United States Computer Emergency Readiness Team (US-CERT) where he built from the ground up and subsequently ran the network, endpoint, and malware analysis/forensics capabilities for US-CERT.
