
Antiquated Policy Complicates Efforts of Intelligence Teams Tasked to Leverage Deep Web Data

Before the world began sending over 500 million tweets and posting more than four million Facebook messages each day, the practice of Open Source Intelligence (OSINT) gathering, conducted by law enforcement and government agencies to evaluate threats to national security, largely involved subscribing to and analyzing newspapers delivered from all over the world. Today, the internet drives much of the world’s intelligence gathering, but its rapid evolution, combined with inflexible policy-making, is changing how analysts do their jobs.

Specifically, the rise of the deep web (online material that is invisible to Google and other search engines but accessible to any member of the public with an account, such as Facebook posts) has inadvertently created a complicated set of restrictions that prevent the U.S. intelligence community from gathering information that could be valuable in ensuring the safety of Americans. Many of these problems could easily be resolved: they stem from problematic policies and faulty interpretations of federal privacy laws, laws that were never intended to bar analysts’ access to social-media postings explicitly marked as public.

The internet is growing exponentially, producing enormous volumes of content available to be captured and investigated by both the public and seasoned OSINT analysts in the intelligence community. Most of this data, however, is concentrated on a small number of giant social media and deep web platforms, such as LinkedIn, Instagram, Snapchat and WhatsApp. It is this shift in how we share data that has raised questions about what legal authority different agencies have to collect information from the deep web.

Some military and intelligence analysts have legal authority to collect only Publicly Available Information (PAI). PAI is defined in 17 CFR 160.3 as any information that one reasonably believes is lawfully made available to the general public, such as through widely distributed media, under federal, state, or local law. Unpacking that further, we find “widely distributed media” includes information from “a web site that is available to the general public on an unrestricted basis. A web site is not restricted merely because an Internet service provider or a site operator requires a fee or password, so long as access is available to the general public.”

Based on this directive, public posts on generally available social media platforms should qualify as PAI. For example, if you have a Facebook account and you publish something as “public,” you are intending for anyone with a Facebook account to see it. But this is where it gets complicated for analysts and intelligence gathering. Typically, an analyst is allowed to collect information online if they are considered “non-attributed,” meaning they provide no identifying information. If they use any form of false identity, however, such as a fake social media account, they are considered “misattributed,” which falls under a different and less widely granted authority.

This is part of a larger problem, often called “going dark,” which law enforcement defines broadly as having the legal authority to intercept and access communications or information pursuant to court orders, but lacking the technical ability to carry out those orders because of the rapid shift in communications services and technologies. The situation is a catch-22: to collect critical information today, an analyst must register for access to sites on the deep web. But if they do not use misattributed accounts, they put themselves and their organization at risk of being uncovered or attacked online while simply trying to do their job.

In my opinion, the idea that any activity can be completely non-attributed is absurd and rooted in realities that predate today’s internet: there is always some level of attribution in everything a person does online. The analyst’s public IP address is always visible, their browser fingerprint is seen by every website they visit, and many forms of tracking can identify and follow them across the internet. It is a matter of masking and misattribution, not non-attribution; you can shape your apparent identity, but you do not have the option of no identity at all.
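To make the point concrete, here is a minimal sketch in Python of how a website could derive a stable identifier from attributes every visitor exposes automatically. The header names are standard HTTP, but the visitor values and the `browser_fingerprint` helper are hypothetical; real fingerprinting also draws on canvas rendering, installed fonts, screen size, and more.

```python
import hashlib

def browser_fingerprint(ip: str, headers: dict) -> str:
    """Combine signals every visitor exposes automatically into a stable ID.

    This sketch uses only the HTTP-level signals mentioned in the text;
    production fingerprinting systems combine many more attributes.
    """
    signals = [
        ip,                                  # public IP address
        headers.get("User-Agent", ""),       # browser and OS build string
        headers.get("Accept-Language", ""),  # locale preferences
        headers.get("Accept-Encoding", ""),  # supported compression schemes
    ]
    return hashlib.sha256("|".join(signals).encode()).hexdigest()[:16]

# Hypothetical visitor: the same browser returning later produces the same
# ID, so a site can recognize the visitor even without any account login.
visit = browser_fingerprint(
    "203.0.113.7",
    {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "Accept-Language": "en-US,en;q=0.9",
        "Accept-Encoding": "gzip, deflate, br",
    },
)
```

Because the identifier is deterministic, two visits with identical signals collapse to one identity, which is exactly why “no attribution at all” is not an available option.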

Rather than drawing the line at website registration or account creation, policies regulating OSINT analysts should be based on the level of engagement. Simply creating an account on Instagram does not make something a covert activity. I would argue that what should be restricted is when an analyst tries to engage with another online user by “friending” them, exchanging emails, or direct messaging. Passively observing intentionally public communications on these platforms is more appropriately part of collecting PAI. This should be consistent with 17 CFR 160.3 as written but would require an update to policy.

I believe that the real “going dark” problem is the self-imposed exile of our military and intelligence analysts from most of the internet. Restricting these analysts from accessing publicly available information puts their investigations at a disadvantage by effectively cutting off communications for which the user has not even a hint of an expectation of privacy. By avoiding these sites, analysts run the risk of missing critical information about terrorists, criminals, and other hostile actors. All government organizations should have access to this information in order to keep our nation safe. Only by evolving our policies on when different authorities apply and what they allow can we ensure the government’s ability to collect all the public information it needs while still providing robust privacy protection for users who choose to communicate only with “friends” or closed groups.

Lance Cottrell founded Anonymizer in 1995, which was acquired by Ntrepid (then Abraxas) in 2008. As Chief Scientist, Lance continues to push the envelope with the new technologies and capabilities required to stay ahead of rapidly evolving threats. Lance is a well-known expert on security, privacy, anonymity, misattribution and cryptography. He speaks frequently at conferences and in interviews. Lance is the principal author on multiple Internet anonymity and security technology patents. He holds an M.S. in physics from the University of California, San Diego and a B.S. in physics from the University of California, Santa Cruz. In his spare time Lance grows high-end pinot noir grapes in the Russian River Valley AVA.