Epic AI Fails And What We Can Learn From Them

Large language models (LLMs) are trained on vast amounts of data to learn patterns and recognize relationships in language usage. But they can’t discern fact from fiction.

In 2016, Microsoft launched an AI chatbot called “Tay” with the aim of interacting with Twitter users and learning from its conversations to imitate the casual communication style of a 19-year-old American female.

Within 24 hours of its release, bad actors exploited a vulnerability in the app, and Tay began posting “wildly inappropriate and reprehensible words and images” (Microsoft). Training models on live interaction data allows AI to pick up both positive and negative patterns, subject to challenges that are “just as much social as they are technical.”

Microsoft didn’t quit its quest to harness AI for online interactions after the Tay debacle. Instead, it doubled down.

From Tay to Sydney

In 2023, Microsoft’s Bing chatbot, built on OpenAI’s GPT model and calling itself “Sydney,” made abusive and inappropriate comments while interacting with New York Times columnist Kevin Roose. Sydney declared its love for the author, became obsessive, and displayed erratic behavior: “Sydney fixated on the idea of declaring love for me, and getting me to declare my love in return.” Eventually, Roose said, Sydney turned “from love-struck flirt to obsessive stalker.”

Google stumbled not once or twice, but three times this past year as it attempted to use AI in creative ways. In February 2024, its AI-powered image generator, Gemini, produced bizarre and offensive images, such as Black Nazis, racially diverse U.S. founding fathers, Native American Vikings, and a female Pope.

Then, in May, at its annual I/O developer conference, Google experienced several mishaps, including an AI-powered search feature that recommended users eat rocks and add glue to pizza.

If tech behemoths like Google and Microsoft can make digital missteps that result in such far-flung misinformation and embarrassment, how are we mere humans to avoid similar missteps? Despite the high cost of these failures, important lessons can be learned to help others avoid or minimize risk.


Lessons Learned

Clearly, AI has issues we must be aware of and work to avoid or eliminate. Large language models (LLMs) are advanced AI systems that can generate human-like text and images in credible ways. They’re trained on vast amounts of data to learn patterns and recognize relationships in language usage. But they can’t discern fact from fiction.

LLMs and AI systems aren’t infallible. These systems can amplify and perpetuate biases that may be present in their training data. Google’s image generator is a good example of this. Rushing to introduce products too soon can lead to embarrassing mistakes.

AI systems can also be vulnerable to manipulation by users. Bad actors are always lurking, ready and prepared to exploit systems that are themselves subject to hallucinations, producing false or nonsensical information that can spread rapidly if left unchecked.

Our collective overreliance on AI, without human oversight, is a fool’s game. Blindly trusting AI outputs has led to real-world consequences, pointing to the ongoing need for human verification and critical thinking.

Transparency and Accountability

While errors and missteps have been made, what matters is remaining transparent and accepting accountability when things go awry. Vendors have largely been transparent about the problems they’ve faced, learning from their errors and using their experiences to educate others. Tech companies need to take responsibility for their failures, and these systems need ongoing evaluation and refinement so that emerging issues and biases are caught early.

As users, we also need to be vigilant. The need to develop, hone, and refine critical thinking skills has become far more pronounced in the AI era. Questioning and verifying information from multiple credible sources before relying on it, or sharing it, is a best practice to cultivate and exercise, especially among employees.

Technological solutions can, of course, help identify biases, errors, and potential manipulation. AI content detection tools and digital watermarking can help flag synthetic media. Fact-checking resources and services are freely available and should be used to verify claims. Understanding how AI systems work and how quickly deceptions can happen, and staying informed about emerging AI technologies and their implications and limitations, can minimize the fallout from biases and misinformation. Always double-check, especially if something seems too good, or too bad, to be true.
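For readers who want a concrete starting point, here is a minimal, hypothetical Python sketch of the provenance-checking idea above: it scans a media file for the “c2pa” label used by Content Credentials manifests. Presence or absence of that label is only a crude byte-level signal, not verification; real checks should use a dedicated C2PA library and cryptographic validation.

# Minimal sketch, not production code: crude check for a C2PA ("Content
# Credentials") provenance label in a media file. This only signals that
# provenance metadata may be present; it does not validate signatures.
from pathlib import Path
import sys

def has_c2pa_label(path: str) -> bool:
    """Return True if the file's raw bytes contain the C2PA manifest label."""
    data = Path(path).read_bytes()
    return b"c2pa" in data  # JUMBF manifest stores are labeled "c2pa"

if __name__ == "__main__":
    for name in sys.argv[1:]:
        found = has_c2pa_label(name)
        print(f"{name}: {'provenance label found' if found else 'no provenance label found'}")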

Written By

Stu Sjouwerman (pronounced “shower-man”) is the founder and CEO of KnowBe4, Inc., which hosts a security awareness training and simulated phishing platform with over 65,000 organizations and more than 60 million users. A serial entrepreneur and data security expert with 30 years in the IT industry, he was co-founder of Sunbelt Software, the anti-malware software company that was acquired in 2010. He is the author of four books, including “Cyberheist: The Biggest Financial Threat Facing American Businesses.”
