Antiquated Policy Complicates Threat Intelligence Collection

Threat Intelligence Gathering

Antiquated Policy Complicates Efforts of Intelligence Teams Tasked to Leverage Deep Web Data

Before the world began sending over 500 million tweets and posting more than four million Facebook messages each day, the practice of Open Source Intelligence (OSINT) gathering, conducted by law enforcement and government agencies for the purpose of evaluating threats to national security, largely involved subscribing to and analyzing newspapers delivered from all over the world. Today, the internet drives much of the world's intelligence gathering, but its rapid evolution and a lack of flexible policy-making are affecting how analysts do their jobs.

Specifically, the rise of the deep web — online material that is invisible to Google and other search engines but accessible to anyone in the public with an account, such as Facebook posts — has inadvertently created a complicated set of restrictions that prevent the U.S. intelligence community from gathering information that could be of value in ensuring the safety of Americans. Many of these problems could easily be resolved; they stem from problematic policies and faulty interpretations of federal laws intended to protect privacy, laws that were never intended to bar analysts' access to social-media postings explicitly marked as public.

The internet is in a state of exponential growth, producing high volumes of content that is available to be captured and investigated by both the public and seasoned OSINT analysts from the intelligence community. However, most of this data is concentrated in a small number of giant social media and deep web platforms, such as LinkedIn, Instagram, Snapchat and WhatsApp. It is this shift in how we share data that has led to questions about what legal authority the different agencies have to collect this information from the deep web. 

Some military and intelligence analysts only have legal authority to collect Publicly Available Information (PAI). PAI is defined in 17 CFR 160.3 as any information that you reasonably believe is lawfully made available to the general public, such as through widely distributed media, under federal, state, or local law. Unpacking that further, we find “widely distributed media” includes information from “a web site that is available to the general public on an unrestricted basis. A web site is not restricted merely because an Internet service provider or a site operator requires a fee or password, so long as access is available to the general public.”

Based on this directive, public posts on generally available social media platforms should qualify as PAI. For example, if you have a Facebook account and you publish something as "public," you are intending for anyone with a Facebook account to see it. But this is where it gets complicated for analysts and intelligence gathering. Typically, an analyst is allowed to collect information online if they are considered "non-attributed," meaning that they provide no identifying information. However, if they use some level of false identity, such as a fake social media account, they are considered "misattributed," which falls under a different, less widely granted authority.

This is part of a larger problem, often called "going dark," which law enforcement broadly defines as having the legal authority to intercept and access communications or information pursuant to court orders, but lacking the technical ability to carry out those orders because of the rapid shift in communications services and technologies. The situation is a catch-22: for an analyst to collect critical information today, they must register for access to sites on the deep web. However, if they do not use misattributed accounts, they put themselves and their organization at risk of being uncovered or attacked online while trying to do their job.

In my opinion, the idea that any activity can be completely non-attributed is absurd and based on realities that predate today's internet — there is always some level of attribution in everything a person does online. The analyst's public IP address will always be visible. Their browser fingerprint will be seen by websites, and many forms of tracking can be used to identify and follow them across the internet. It's a matter of masking and misattribution, not non-attribution: you can shape your apparent identity, but you do not have the option of no identity at all.
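To illustrate why "no identity at all" is not an option, the sketch below shows how a website can passively derive a stable identifier from signals every visitor sends automatically. The function name and the choice of signals are illustrative assumptions, not any specific tracker's implementation; real fingerprinting services also draw on canvas rendering, installed fonts, screen dimensions, and timing data.

```python
import hashlib

def fingerprint(headers: dict, ip: str) -> str:
    """Combine passively observable signals into a stable identifier.

    Hypothetical example: real trackers use many more signals, but even
    this handful can distinguish most visitors without any registration.
    """
    signals = [
        ip,                                   # public IP, always visible to the server
        headers.get("User-Agent", ""),        # browser and OS version
        headers.get("Accept-Language", ""),   # locale preferences
        headers.get("Accept-Encoding", ""),   # supported compression schemes
    ]
    digest = hashlib.sha256("|".join(signals).encode()).hexdigest()
    return digest[:16]

# Two visits with identical signals yield the same identifier,
# even if the visitor never creates an account or logs in.
visit = {"User-Agent": "Mozilla/5.0", "Accept-Language": "en-US"}
print(fingerprint(visit, "203.0.113.7"))
```

The point of the sketch is that the identifier is deterministic: an analyst who returns to a site with the same network address and browser configuration is recognizable to its operator, account or no account.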

Rather than drawing the line at website registration or account creation, policies regulating OSINT analysts should be based on the level of engagement. Simply creating an account on Instagram does not make something a covert activity. I would argue that what should be restricted is when an analyst tries to engage with another online user by “friending” them, exchanging emails, or direct messaging. Passively observing intentionally public communications on these platforms is more appropriately part of collecting PAI. This should be consistent with 17 CFR 160.3 as written but would require an update to policy.

I believe that the real "going dark" problem is the self-imposed exile of our military and intelligence analysts from most of the internet. Restricting these analysts from accessing publicly available information puts their investigations at a disadvantage by effectively limiting access to communications for which there is not even a hint of an expectation of privacy from the user. By avoiding these sites, analysts run the risk of missing critical information about terrorists, criminals, and other hostile actors. All government organizations should have access to this information in order to keep our nation safe. Only by evolving our policies on when different authorities apply and what they allow can we ensure the government's ability to collect all the public information it needs while still providing robust privacy protection for users who choose to communicate only with "friends" or closed groups.
