
Why We Need to Get a Handle on AI

It will be interesting to see how AI continues to evolve and how it is used by defenders as they attempt to leapfrog attackers and protect the organization against new forms of AI attacks.


There has been a lot of talk about AI recently, weighing its opportunities against its potential risks. Today AI can be trained on images and videos of real customers or executives to produce audio and video clips impersonating them. These have the potential to fool security systems, and according to a report by identity verification platform Sumsub (PDF), the number of “deepfake” incidents in the financial technology sector alone increased by 700% in 2023, year on year.

This issue suddenly hit much closer to home for me when Pikesville High School, my alma mater, recently made the headlines for all the wrong reasons. The school’s athletic director allegedly used AI to create and publish a fake recording of the principal’s voice making, let’s just say, inappropriate comments. The recording prompted immediate public outrage against the principal before the allegations of fabrication came to light, and his reputation is stained regardless of the outcome. This situation serves as a salutary warning that, as a society, we need to get a handle on AI before such incidents become a daily occurrence.

Supercharging disinformation campaigns

We only have to consider the impact this type of activity could have on elections, for example. The fear is that disinformation campaigns may be supercharged by deepfakes in 2024, just as countries with a collective population of some 4 billion, including the USA, Britain, India, Indonesia, Mexico, and Taiwan, prepare to vote. With the recent emergence of multiple AI programs that can produce realistic images, videos, and voices in a matter of seconds, election campaigns and state policymakers are scrambling to adjust. People have long attempted to alter or misrepresent media to influence elections, but advances in AI are causing particular concern because of the speed and scale at which deepfakes can be created and distributed. As Jonathan Swift put it, in a quote often attributed to Mark Twain, ‘a lie can travel halfway around the world while the truth is putting on its shoes.’ Deepfake AI gives lies an even greater head start.

For those less familiar, deepfake technology enables threat actors to use digital tools and AI to create realistic-looking video or audio of somebody. And while the technology has been around for a while, it gets better every year. As AI and machine learning tools advance, IT decision makers are expressing concerns about deepfake technology and its impact on global businesses; they believe that business leaders fail to grasp the potentially devastating impact it could have on their organizations if used unethically.

Concerns that AI is benefiting attackers more than defenders

To this point, a survey of Chief Information Security Officers (CISOs) by Splunk (PDF) found that 70% believe generative AI could give cyber adversaries more opportunities to commit attacks. Certainly, the prevailing opinion is that AI is benefiting attackers more than defenders.

Deepfakes are inevitably becoming more advanced, which makes it harder to spot and stop those created with malicious intent. As access to synthetic media technology increases, deepfakes can be used to damage reputations, fabricate evidence, and undermine trust. With deepfake technology increasingly being used maliciously, businesses would do well to ensure that their workforce is fully trained and aware of the risks associated with AI-generated content.


Widening cyber inequity

A recent World Economic Forum report also found a widening cyber inequity, which is accelerating the profound impact of emerging technologies. The path forward therefore demands strategic thinking, concerted action, and a steadfast commitment to cyber resilience.

Again, this isn’t new. Organizations of all sizes and maturity levels have often struggled to maintain the central tenets of organizational cyber resilience. At the end of the day, it is much easier to use technology to create malicious attacks than it is to use technology to detect such a wide spectrum of potential attack vectors and vulnerabilities. The modern attack surface is vast and can overwhelm an organization as it determines how to secure it. With this increased complexity and the proliferation of new devices and attack vectors, people and organizations have become a bigger vulnerability than ever before. It is often said that humans are the biggest risk when it comes to security, and deepfakes make it even easier to trick people into taking actions that benefit attackers.

Therefore, what questions should security teams be asking to protect their organization?

Adopting a Zero Trust Approach (trust nothing and no one)

This is where it is important to get a holistic view of what the organization is doing from a security perspective before making important decisions regarding who, and what, can access your mission-critical networks and assets. It is one of the reasons why a zero-trust approach has become so important and, with deepfakes growing at an alarming rate, it is another reason to not implicitly trust people or assets.

Who and what you should trust are the key questions every security team should be asking before they allow any access to systems and devices. The best answer to those questions might be, “nothing and no one.”

Zero Trust is a security model that assumes everything is a risk and cannot be trusted; in other words, never trust and always verify. Organizations should default to the belief that every device, every user, and every asset is compromised. This is not a one-time risk analysis but an ongoing assessment of every asset and user inside and outside of your organization to maintain cyber hygiene. By granting users least privilege and removing unnecessary services and applications from devices, security teams can reduce the attack surface.
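
To make the “never trust, always verify” idea concrete, here is a minimal, purely illustrative Python sketch of a deny-by-default access check. The request fields, posture signals, and scope model are hypothetical and not drawn from any specific product; a real zero trust deployment would pull these signals from identity providers, MFA, and endpoint telemetry on every request.

```python
# Illustrative sketch only: a toy, deny-by-default access check in the spirit of
# zero trust ("never trust, always verify"). Field names and policy rules are
# hypothetical, not a reference to any particular product or standard.
from dataclasses import dataclass, field


@dataclass
class AccessRequest:
    user: str
    device_id: str
    resource: str
    action: str


@dataclass
class Context:
    # Signals that would come from identity, MFA, and endpoint telemetry.
    mfa_verified: bool
    device_compliant: bool
    # Least-privilege scopes explicitly granted, e.g. {"finance-db": {"read"}}.
    scopes: dict = field(default_factory=dict)


def evaluate(request: AccessRequest, ctx: Context) -> bool:
    """Deny by default; allow only when every check passes, on every request."""
    if not ctx.mfa_verified:          # verify the user, not just the session
        return False
    if not ctx.device_compliant:      # verify the device posture as well
        return False
    allowed_actions = ctx.scopes.get(request.resource, set())
    return request.action in allowed_actions   # least privilege: explicit grants only


if __name__ == "__main__":
    req = AccessRequest(user="alice", device_id="laptop-42",
                        resource="finance-db", action="write")
    ctx = Context(mfa_verified=True, device_compliant=True,
                  scopes={"finance-db": {"read"}})
    print(evaluate(req, ctx))  # False: "write" was never explicitly granted
```

The point of the sketch is the default: access is refused unless identity, device posture, and an explicitly granted scope all check out, and the check is repeated for every request rather than decided once at login.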

The benefits of AI labeling

As for AI and the risk of deepfakes, which is where I started this article, labeling is a commonly proposed strategy for reducing the risks of generative AI. This approach involves applying visible content warnings to alert users to the presence of AI-generated media online, such as on social media, news sites, or search engines. Although the jury is still out on the benefits of labeling AI content, many academics believe it is an effective way to inform the public and employees about AI-generated media and to warn them that it might not be authentic or trustworthy. Of course, malicious actors are not going to label their deepfake efforts, but by labeling benign AI creations and educating users about the level of sophistication that can be achieved, we can reinforce the idea that not everything they see should be believed.
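
As a rough illustration of the labeling idea, the sketch below shows how a platform might surface a visible warning when uploaded media declares an AI generator in its metadata. The field names here are hypothetical assumptions for the example; real provenance standards (such as C2PA content credentials) are far richer, and, as noted above, malicious actors will simply omit any such declaration.

```python
# Illustrative sketch only: surfacing a visible warning label for media that
# self-declares AI generation in its metadata. The metadata keys are hypothetical
# and absence of a label is never proof of authenticity.
from typing import Optional


def label_for(media_metadata: dict) -> Optional[str]:
    """Return a user-facing warning if the item declares an AI-generation tool."""
    tool = media_metadata.get("generator")           # e.g. "synthetic-voice-model"
    declared_synthetic = media_metadata.get("ai_generated", False)
    if declared_synthetic or tool:
        return "Label: AI-generated media. May not depict real events or people."
    return None  # no declaration: the item still is not necessarily authentic


if __name__ == "__main__":
    clip = {"title": "statement.wav", "ai_generated": True,
            "generator": "synthetic-voice-model"}
    print(label_for(clip))
```

Used consistently on benign AI creations, even a simple label like this reinforces for users that convincing synthetic media is easy to produce and that what they see and hear should not automatically be believed.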

It will certainly be interesting to see how AI continues to evolve and how it is used by defenders as they attempt to leapfrog attackers and protect the organization against new forms of AI attacks.

Written By

Marc Solomon is Chief Marketing Officer at ThreatQuotient. He has a strong track record driving growth and building teams for fast growing security companies, resulting in several successful liquidity events. Prior to ThreatQuotient he served as VP of Security Marketing for Cisco following its $2.7 billion acquisition of Sourcefire. While at Sourcefire, Marc served as CMO and SVP of Products. He has also held leadership positions at Fiberlink MaaS360 (acquired by IBM), McAfee (acquired by Intel), Everdream (acquired by Dell), Deloitte Consulting and HP. Marc also serves as an Advisor to a number of technology companies.
