Like Their Adversaries, Threat Hunters Need Anonymity

The pivot to remote work forced by the Covid-19 outbreak was sudden, but security teams stepped up to the challenge. According to (ISC)², the association of certified cybersecurity professionals, three out of ten practitioners surveyed said they had a day or less to secure their employers’ remote workers. 

Now that operations are returning to normal—or the New Normal, which is bound to include a big segment of remote work—security needs to adjust to the many workers who will continue to work from home. That includes a fair number of security analysts and other network security staff. 

That opens up a new front in the war against cybercrime, often right in the white hats’ homes. How can we be sure that threat hunters stay safe, and don’t themselves become a threat to the systems they protect? 

Remote threat hunting 

Security researchers who work remotely were essential to IT organizations even before the pandemic made working from home the norm: organizations had long been coping with the shortage of threat analysts and other infosec talent by contracting out and using remote workers. According to a 2020 SANS survey, almost two-thirds of security operations centers use an external service provider to handle some aspect of cybersecurity, with penetration testing and threat intelligence among the most popular. 

Besides covering staff shortages, remote work also adds a layer of safety to an organization's systems. Even before the pandemic, security work was often done outside the corporate environment, because no one wants the malware under investigation to sit on the company network for any length of time. Conducting cybersecurity work from remote locations protects business networks from accidental infection by malware that escapes a sandbox and moves laterally.


But conducting threat intelligence and incident response from unsecured locations can expose threat hunters to discovery by the very hackers they are chasing, and it opens up technical, legal and governance challenges. Most cybersecurity practitioners are very good, but can they withstand a nation-state attack?

Even when threat hunters work outside the company network, adversaries are becoming more sophisticated, especially in their use of social engineering. For threat researchers who work for Fortune 100 organizations, critical infrastructure operators and the like, it's hard to hide. Bad guys want to know who they are, and will use social engineering to track them down and find a way into their company's network, especially if that network holds sensitive assets or intelligence. 

Cybersecurity tradecraft and governance

But that quest for safety means threat intelligence artifacts are being analyzed in environments not controlled by the enterprise, and this creates compliance and legal risks. These go beyond the threat of bad guys doxing good guys: the organization can face legal consequences if malware investigations are not conducted in compliance with regulations. For example, when a company is breached, management must disclose its findings to law enforcement and possibly insurers, who will want a forensic trail of who knew what, and when. If that trail leads to an analyst working from a basement and the company either doesn't know about it or fails to disclose it, legal risk follows.

Meanwhile, as a matter of tradecraft, threat hunting needs to be housed in a secure, shared-services environment that allows for malware research, detonation, and tool integration. If an adversary knows their malware is being analyzed, threat hunters lose their ability to trace its source and activity, and to defend against it. To maintain a competitive advantage, security researchers can't let bad guys know what they have found and what they are working on.

Organizations cannot afford the risk of having staff set up their own environments to do this work. It needs to be done somewhere safe, where threat hunters can share information in an environment that is obfuscated so it’s not tied back to the parent company. 

A secure sandbox 

Threat hunting, therefore, must be non-attributable while maintaining a clear audit trail to satisfy legal and governance requirements. At the same time, organizations must maintain control over the environments where malware research occurs, to satisfy compliance requirements in the face of growing oversight of cybersecurity. A safe, obfuscated sandbox allows threat hunters to continue their work without posing legal and security risks to the organization. At the end of the day, it's all about protecting the company.

Gordon Lawson is CEO of NetAbstraction, a company that specializes in network privacy, non-attribution and obfuscation for enterprises worldwide. Previously, he served as president at RangeForce Inc. Gordon has nearly two decades of experience in the security sector with a focus on SaaS optimization and global enterprise business development from global companies including Reversing Labs, Cofense (formerly PhishMe) and Pictometry. As a naval officer, Gordon conducted operational deployments to the Arabian Gulf and Horn of Africa, as well as assignments with the Defense Intelligence Agency, US Marine Corps, and Special Operations Command. He is a graduate of the US Naval Academy and holds an MBA from George Washington University.