Social Distortion: The Threat of Fear, Uncertainty and Deception in Creating Security Risk

While Red Teams can expose and root out organization-specific weaknesses, there is another growing class of vulnerability at the industry level.

In offensive security, a range of organization-specific vulnerabilities create risk, from software and hardware flaws to processes and people. Attackers target and prey on any weakness they can identify. While Red Teams can expose and root out organization-specific weaknesses, there is another growing class of vulnerability at the industry level. It’s not a single actor, vulnerability, or intentionally malicious campaign. It manifests in everything from governmental requirements and policy interference, to overblown and sometimes false alarms about technology safety, to active efforts to undermine research or authoritative industry voices. It’s a culture of disinformation, misinformation, and misrepresentation that erodes trust, confuses employees, and overloads security teams chasing ghosts. Let’s examine the traditional pillars of security community culture, how they are being weakened and compromised, and where this all could go in a world of deepfakes and AI-fueled bias and hallucination.

Open Season

The security industry at its core is built around open information sharing and collaboration to make things better and people safer. While research and researchers “compete” for attention, there are many examples of companies – even competitors – collaborating to find and fix weaknesses and expose malicious activity. That heritage is at its best when informal and between peers, where the goal is community advancement. However, as the problem has grown, so have external scrutiny, liability, and, of course, the profits within the industry.

This maturation has brought a broader population of stakeholders, gatekeepers, and other indirect participants behind the curtain. The result is a business and political ecosystem in which motivations get murky and the integrity of both process and product become subject to the influence of agendas, commercial concerns, and adversarial motives. Specifically:

  • Government agencies that operate with the often-conflicting goals of public safety and strategic geopolitical advantage
  • Industry organizations whose imbalanced power structures favor larger companies
  • Business models whose profits hinge on rapid disclosure, reduced transparency, and territorialism
  • A cult of personality and “rock star” status that drowns out equality of perspective and voice, limits diversity of discussion, and even reduces peer review

Let’s pull apart these inauspicious “influencers” and examine their impact on security teams.

Fear and Loathing…

The most institutionalized ulterior motives lie in the federal government. In an increasingly connected and aggressive geopolitical environment, the government is like the mythical Fates, trying in multiple ways to exert control over aspects of our digital wellbeing.

The most problematic form of governmental influence was also one of its most closely held “secrets.” The NSA became the poster child, having long held onto privately known vulnerabilities to ensure a strategic advantage in the cyber realm, with the Shadow Brokers leak laying bare the extent of its capabilities.

A second, more recent head presents itself as serving the public good. While government has become more collaborative and communicative with the technology industry regarding security risks, there is also an aggressive push, across multiple policy initiatives, to require mechanisms like encryption backdoors. While these could debatably aid in some investigations, the presence of backdoors would irrefutably aid cybercriminals more.


While the first two challenges are of the government’s own creation, the third is one of capitalism that has developed so quickly it has largely outpaced the ability to implement controls, or in some cases even to comprehend the scope of the problem. The reality is that government has completely lost control of the narrative on everything from disinformation to rampant privacy violations via social media platforms.

For security teams, this is one of the most chaotic and hardest-to-control fronts in their battle to keep people safe. Government secrecy creates an environment where security professionals are blindsided by attacks on addressable – and in some cases, long-standing – vulnerabilities. The public policy debate around weakening technology controls creates contentious relationships with law enforcement and policymakers, and even turns public perception against the security industry. And finally, the public proclamations and posturing around privacy and information usage create massive end-user confusion about the limits and appropriate use of certain platforms, and severely muddy the overlap between work and personal spheres.

From an Uncertain Point of View

Stepping back from the societal challenges facing security professionals, there is misinformed, even if sometimes well-intentioned, “guidance” around technology problems and pitfalls that complicates a security team’s relationships within its own organization.

The first case is industry standards for technology usage and implementation. One of the most confusing is the white noise around password construction, from length to complexity to renewal. Even the National Institute of Standards and Technology (NIST) changed its guidance within the last few years, recognizing that the onerous requirements were proving counterproductive.
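To see why the revised guidance is simpler rather than weaker: NIST’s current direction (SP 800-63B) favors minimum length and screening against known-breached passwords over composition rules and forced rotation. The sketch below is illustrative Python, not any official reference; the compromised_passwords.txt blocklist file is a hypothetical stand-in for a real breach corpus.

```python
# Minimal password check aligned with the direction of NIST SP 800-63B:
# favor length and breach screening over composition rules and rotation.
# NOTE: "compromised_passwords.txt" is a hypothetical local blocklist
# (one known-breached password per line), assumed for illustration.

MIN_LENGTH = 8  # 800-63B's floor for user-chosen memorized secrets


def load_blocklist(path: str = "compromised_passwords.txt") -> set:
    """Load a newline-delimited blocklist of known-breached passwords."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}


def check_password(candidate: str, blocklist: set) -> tuple:
    """Return (ok, reason) for a candidate password."""
    if len(candidate) < MIN_LENGTH:
        return False, f"must be at least {MIN_LENGTH} characters"
    if candidate.lower() in blocklist:
        return False, "appears in a known-breach corpus"
    # Deliberately absent: symbol/case composition rules and scheduled
    # expiration -- both now discouraged as counterproductive.
    return True, "ok"


if __name__ == "__main__":
    blocklist = {"password", "letmein", "qwerty123"}  # stand-in for the file
    print(check_password("correct horse battery staple", blocklist))
```

The point of the sketch is what it leaves out: the complexity classes and 90-day resets that users have been trained to resent add little against modern attacks, while a breach blocklist addresses the passwords attackers actually try first.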

Social media unsurprisingly flows through every category of this article – in this case, false posts about how to “reclaim” everything from data privacy to data sovereignty, in many cases with vaguely legal-sounding, but wholly false, proclamations.

And where would we be without government agencies in the Chicken Little business, with FBI alarms around juice jacking or the latest iPhone feature?

This is where security professionals and teams really get hit in their day job. From complex and constantly shifting recommendations to hyperbolic headlines, this range of behavior results in executive and organizational confusion. It creates an overload of contentious questions for security teams regarding the rationale and efficacy of policy changes, directives, and safe behavior.

Better to Give than Deceive

This final category hits security professionals where they live – the community itself. The most insidious efforts to sow confusion or deceive come mostly from actors with monetary motives, from the outright illegal and malicious to the downright unethical and self-serving.

On the malicious side, there have been multiple attempts by attackers to dupe security professionals and even poison vulnerability research.

More recently, on the unethical side, some within the community have been accused of, and exposed for, using fake profiles to deceive and control industry voices while attempting to project – and profit from – community support.

While the other two categories result in distraction and wasted resources for security teams, the impact here can be more damaging. It ranges from the “soft” impact of weakened community collaboration and important perspectives being silenced, to the hard impact of research processes, and even researchers themselves, being compromised or, even worse, weaponized.

The Ghost in the Machine

If these examples weren’t troubling enough, wait, there’s more. In one of my earlier columns, I touched on AI usage by cybercriminals. That emerging reality is concerning in its ability to increase the believability and efficacy of some of these deceptive activities. It will also, in turn, increase the frequency and tenor of regulatory voices calling for weakened controls and backdoors – and ultimately the public’s willingness to accept compromise.

Another harbinger of confusion and compromise lies in the AI models and tools themselves, from poorly designed or trained models to direct exploitation of model behaviors and guardrails. And like many technologies, the AI deployment and experimentation horses are already out of the barn – but that’s a topic for another article.

Written By

Tom Eston is the VP of Consulting and Cosmos at Bishop Fox. Tom's work over his 15 years in cybersecurity has focused on application, network, and red team penetration testing as well as security and privacy advocacy. He has led multiple projects in the cybersecurity community, improved industry standard testing methodologies and is an experienced manager and leader. He is also the founder and co-host of the podcast The Shared Security Show; and a frequent speaker at user groups and international cybersecurity conferences.
