Antiquated Policy Complicates Threat Intelligence Collection

Threat Intelligence Gathering

Antiquated Policy Complicates Efforts of Intelligence Teams Tasked to Leverage Deep Web Data

Before the world began sending more than 500 million tweets and posting more than four million Facebook messages each day, the practice of Open Source Intelligence (OSINT) gathering, conducted by law enforcement and government agencies to evaluate threats to national security, largely involved subscribing to and analyzing newspapers delivered from all over the world. Today, the internet drives much of the world's intelligence gathering, but its rapid evolution and a lack of flexible policy-making are affecting how analysts do their jobs.

Specifically, the rise of the deep web, online material that is invisible to Google and other search engines but accessible to any member of the public with an account, such as Facebook posts, has inadvertently created a complicated set of restrictions that prevent the U.S. intelligence community from gathering information that could be of value in ensuring the safety of Americans. Many of these problems could easily be resolved; they stem from problematic policies and faulty interpretations of federal laws intended to protect privacy, laws that were never meant to bar analysts' access to social media postings explicitly marked as public.

The internet is in a state of exponential growth, producing high volumes of content that can be captured and investigated both by the public and by seasoned OSINT analysts in the intelligence community. However, most of this data is concentrated on a small number of giant social media and deep web platforms, such as LinkedIn, Instagram, Snapchat and WhatsApp. It is this shift in how we share data that has raised questions about what legal authority the different agencies have to collect information from the deep web.

Some military and intelligence analysts only have legal authority to collect Publicly Available Information (PAI). PAI is defined in 17 CFR 160.3 as any information that you reasonably believe is lawfully made available to the general public, such as through widely distributed media, under federal, state, or local law. Unpacking that further, we find “widely distributed media” includes information from “a web site that is available to the general public on an unrestricted basis. A web site is not restricted merely because an Internet service provider or a site operator requires a fee or password, so long as access is available to the general public.”

Based on this directive, public posts on generally available social media platforms should qualify as PAI. For example, if you have a Facebook account and publish something as "public," you intend for anyone with a Facebook account to see it. But this is where it gets complicated for analysts and intelligence gathering. Typically, an analyst is allowed to collect information online if they are considered "non-attributed," meaning they provide no identifying information. However, if they use some level of false identity, such as a fake social media account, they are considered "misattributed," which falls under a different, less widely available authority.

This is part of a larger problem, often called "going dark," which law enforcement broadly defines as having the legal authority to intercept and access communications or information pursuant to court orders, but lacking the technical ability to carry out those orders because of the rapid shift in communications services and technologies. The situation is a catch-22: for an analyst to collect critical information today, they must register for access to sites on the deep web. However, if they do not use misattributed accounts, they put themselves and their organization at risk of being uncovered or attacked online while simply trying to do their job.

In my opinion, the idea that any activity can be completely non-attributed is absurd and based on realities that predate today's internet; there is always some level of attribution in everything a person does online. The analyst's public IP address will always be visible, their browser fingerprint will be seen by websites, and many forms of tracking can be used to identify and follow them across the internet. It is a matter of masking and misattribution, not non-attribution: you can shape your apparent identity, but you do not have the option of no identity at all.
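The point about unavoidable attribution can be illustrated with a minimal sketch of how a site might combine passively observable visitor attributes into a stable identifier. This is a hypothetical, simplified example (the attribute names, values, and hashing scheme are illustrative); real fingerprinting systems draw on many more signals, such as canvas rendering, installed fonts, and audio stack behavior.

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Hash a sorted, canonical string of observable attributes.

    Hypothetical sketch: real trackers combine far more signals and
    use more robust techniques, but the principle is the same.
    """
    canonical = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Attributes any website can observe without the visitor registering,
# logging in, or interacting at all. The IP is a documentation-range
# example address (TEST-NET-3).
visitor = {
    "ip": "203.0.113.7",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "accept_language": "en-US,en;q=0.9",
    "screen": "1920x1080",
    "timezone": "America/New_York",
}

print(fingerprint(visitor))
```

Because the same inputs always yield the same identifier, a visitor who never supplies a name or account can still be recognized on a return visit, which is why masking (changing the apparent attributes) is the only practical alternative to attribution.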

Rather than drawing the line at website registration or account creation, policies regulating OSINT analysts should be based on the level of engagement. Simply creating an account on Instagram does not make something a covert activity. I would argue that what should be restricted is when an analyst tries to engage with another online user by “friending” them, exchanging emails, or direct messaging. Passively observing intentionally public communications on these platforms is more appropriately part of collecting PAI. This should be consistent with 17 CFR 160.3 as written but would require an update to policy.

I believe that the real "going dark" problem is the self-imposed exile of our military and intelligence analysts from most of the internet. Restricting these analysts from accessing publicly available information puts their investigations at a disadvantage by effectively cutting off communications for which there is not even a hint of an expectation of privacy on the user's part. By avoiding these sites, analysts run the risk of missing critical information about terrorists, criminals, and other hostile actors. All government organizations should have access to this information in order to keep our nation safe. Only by evolving our policies on when different authorities apply and what they allow can we ensure the government's ability to collect all the public information it needs while still providing robust privacy protection for users who choose to communicate only with "friends" or closed groups.
