Integrating Actionable Intelligence

As I discussed in my previous column entitled “Understanding The Challenges In Information Sharing”, information sharing is an integral part of a mature security operations program. The combined knowledge of many organizations is greater than the individual knowledge of just one. Information sharing often involves the exchange of intelligence in the form of Indicators of Compromise (IOCs), and in this piece, I would like to discuss the integration of intelligence into security operations.

Wikipedia defines an Indicator of Compromise as “an artifact observed on a network or in an operating system that with high confidence indicates a computer intrusion.” In order for data, or information, to qualify as an Indicator of Compromise, it must include important contextual information that helps organizations to understand how best to leverage that information. Only information plus context can be considered intelligence.
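The distinction between raw data and intelligence can be made concrete. Below is a minimal sketch of an indicator record that carries its context with it; the field names and the example values are illustrative assumptions, not a standard schema.

```python
# Illustrative indicator-plus-context record. Field names and values
# are assumptions for the sake of example, not a standard format.
from dataclasses import dataclass

@dataclass
class Indicator:
    value: str           # the observed artifact, e.g. a domain or file hash
    indicator_type: str  # e.g. "domain", "md5", "url", "email_address"
    attack_stage: str    # e.g. "payload delivery", "command and control"
    source: str          # where the indicator came from
    confidence: str      # how reliable the source judges it to be

# Without the context fields, the bare string is just data;
# with them, it becomes intelligence an analyst can act on.
ioc = Indicator(
    value="evil.example.com",
    indicator_type="domain",
    attack_stage="command and control",
    source="peer sharing group",
    confidence="high",
)
```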

Actionable Security Intelligence

There are many potential sources of intelligence. These can include publicly available open sources, vendor or paid services, government and/or non-profit sources, peer organizations, formal or informal information sharing exchanges, and others. Different sources of intelligence will provide different points of view, angles, and perspectives, along with differing qualities of intelligence. It’s important to build and nurture relationships with several reliable sources of intelligence, as they can provide an organization’s security operations program with a tremendous amount of added value.

If you’ve done a good job building bridges and nurturing relationships for your security operations program, you will find yourself with several reliable sources of intelligence. From these sources, you will receive data, either regularly through a feed-like mechanism, or in an ad hoc manner as relevant data becomes available. Getting the data is a great first step, but what you do with that data is even more important. Most organizations will begin by manually searching through their logs going back a few weeks or months to see if they’ve got any hits.  That’s a great start, of course. But what if an attack hits tomorrow, after you’ve already run your search and (likely) “discarded” that intelligence?

That’s where a robust intelligence analysis function and process can help. Joining information sharing groups, purchasing intelligence from vendors, working collaboratively with peer organizations, and building bridges and trusted relationships can all net you decent intelligence. As that intelligence streams in, it should be immediately vetted.  In other words, given the context (which is extremely important) of a particular piece of intelligence (e.g., is it a payload delivery site, command and control site, malicious email sender, etc.), is it reliable as an Indicator of Compromise?  Does it produce a large number of false positives, or is the noise relatively tame (making it more useful/reliable as an Indicator of Compromise)?
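One simple way to formalize that vetting question is to track each indicator's historical hit quality and accept it only if its false-positive rate stays below some tolerance. The function and threshold below are illustrative assumptions, not a prescribed method.

```python
# Hypothetical vetting check: accept an indicator only if its historical
# false-positive rate is at or below a threshold. The 20% threshold is
# an illustrative assumption; tune it to your own tolerance for noise.
def is_reliable(true_positives: int, false_positives: int,
                max_fp_ratio: float = 0.2) -> bool:
    total = true_positives + false_positives
    if total == 0:
        return False  # no history yet; cannot vet
    return false_positives / total <= max_fp_ratio

print(is_reliable(50, 5))   # 5/55 is about 9% noise -> reliable
print(is_reliable(3, 40))   # 40/43 is about 93% noise -> reject
```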

Vetting serves an important purpose. In my experience, quality of intelligence is more important than quantity of intelligence. In other words, properly vetting intelligence before it is added to the organization’s intelligence repository has a number of benefits.  Aside from the improved signal-to-noise ratio (ratio of true positives to false positives) resulting from improved intelligence, it also helps with the constant tug-of-war between the need to retain data and the finite resources available to retain that data. When there is less “garbage” consuming retention resources, it allows us to retain our intelligence longer using the same amount of retention resources.

Once an indicator has been vetted and deemed reliable, it should be retained. I’ve seen a number of organizations use some variety of an intelligence repository to retain the vetted, reliable, high-fidelity, actionable intelligence they have. It’s important to retain detailed, granular records of both intelligence and its sources in the intelligence repository. These records should include a variety of information, but some important fields that I will list explicitly here include date received, source information, attack stage (e.g., payload delivery, command and control, email, etc.), indicator type (e.g., MD5 hash, URL pattern, domain name, email address, email subject, etc.), and references to any supporting research and/or background information.
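The repository itself need not be exotic; even a simple relational table covering the fields above will do. Here is a minimal sketch using Python's built-in sqlite3 module, with table and column names chosen purely for illustration.

```python
# Minimal sketch of an intelligence repository using sqlite3 (stdlib).
# The table and column names mirror the fields discussed above and are
# illustrative, not a recommended schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE indicators (
        date_received TEXT,
        source        TEXT,
        attack_stage  TEXT,  -- e.g. payload delivery, command and control
        ind_type      TEXT,  -- e.g. md5, url_pattern, domain, email_subject
        value         TEXT,
        reference     TEXT   -- pointer to supporting research/background
    )
""")
conn.execute(
    "INSERT INTO indicators VALUES (?, ?, ?, ?, ?, ?)",
    ("2015-03-01", "vendor feed", "command and control",
     "domain", "evil.example.com", "internal analysis report"),
)
rows = conn.execute("SELECT value FROM indicators").fetchall()
print(rows)  # [('evil.example.com',)]
```

Keeping date received and source on every row is what later makes per-source metrics possible.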


Detailed records in our intelligence repository offer us a number of attractive benefits. First, they allow us to show, through metrics, how important our intelligence program is. For example, when asking for additional resources, or when budgeting season comes around, wouldn’t it be nice to show that a sizeable percentage of all timely detection arises from intelligence? Additionally, detailed records in the intelligence repository allow us to continually evaluate the value we’re getting from each of our sources. Say I am paying $X for source A, which yields me 50 true positives and 10 false positives each month, while I am paying $5X for source B, which yields me 10 true positives and 50 false positives each month. In this situation, it’s fairly easy to see that source A delivers much greater value for much less money. If there is a need to re-allocate funding for whatever reason, I can base my decisions on fact, rather than on conjecture.
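The source-A-versus-source-B comparison above reduces to simple arithmetic once the repository records are in place. A quick sketch, using an arbitrary stand-in value for $X:

```python
# Cost per monthly true positive for the two hypothetical sources above.
# X is a stand-in dollar figure; only the ratios matter.
def cost_per_true_positive(monthly_cost: float, true_positives: int) -> float:
    return monthly_cost / true_positives

X = 1000.0
print(cost_per_true_positive(X, 50))      # source A: 20.0 per true positive
print(cost_per_true_positive(5 * X, 10))  # source B: 500.0 per true positive
```

Source A costs 25 times less per true positive, which turns a funding re-allocation debate into a statement of fact.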

Once intelligence is retained, of course, it should be fully leveraged. Properly leveraging intelligence requires many things, but the most important principle is to integrate vetted and retained intelligence seamlessly into the operational workflow. This includes writing automated queries that regularly run data in the intelligence repository against log data, as well as real-time alerting when appropriate. Doing this requires not only that data go into the intelligence repository, but also that it be easy to get back out. Only then can using that intelligence be truly integrated into the operational workflow. Why is this important? Because, as we’ve seen with other subjects, if something isn’t easy and integrated, it won’t get done, and it won’t be used. In the case of intelligence, that would result in leaving a lot of very important knowledge out of the workflow.
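An automated sweep of repository indicators against log data can be as simple as the sketch below. The log format, field layout, and indicator set are assumptions for illustration; a production version would run on a schedule against your actual log store.

```python
# Sketch of sweeping vetted indicators from the repository against logs.
# Log lines and the indicator set are illustrative assumptions.
repository = {"evil.example.com", "bad.example.net"}  # vetted domains

log_lines = [
    "2015-03-02 10:01 host1 DNS query evil.example.com",
    "2015-03-02 10:05 host2 DNS query www.example.org",
]

def sweep(logs, indicators):
    """Return log lines that contain any vetted indicator."""
    return [line for line in logs
            if any(ioc in line for ioc in indicators)]

hits = sweep(log_lines, repository)
print(hits)  # only the host1 line matches, on evil.example.com
```

Because this runs continuously rather than once, intelligence received today still catches the attack that arrives tomorrow.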

Conceptually, integrating actionable intelligence is a logical endeavor, though it does contain details requiring specialized skills and technical knowledge. It’s amazing to me how many organizations don’t properly retain and leverage the intelligence they receive, despite its importance. Four simple steps (collecting, vetting, retaining, and leveraging) can make a world of difference to an organization’s intelligence program and overall security posture. Take a look within your own organization: if you can better collect, vet, retain, and leverage intelligence, it will serve you well in the long run!

Related Reading: 5 “Actionable Intelligence” Questions Enterprises Should Ask

Written By

Joshua Goldfarb (Twitter: @ananalytical) is currently a Fraud Solutions Architect - EMEA and APCJ at F5. Previously, Josh served as VP, CTO - Emerging Technologies at FireEye and as Chief Security Officer for nPulse Technologies until its acquisition by FireEye. Prior to joining nPulse, Josh worked as an independent consultant, applying his analytical methodology to help enterprises build and enhance their network traffic analysis, security operations, and incident response capabilities to improve their information security postures. He has consulted and advised numerous clients in both the public and private sectors at strategic and tactical levels. Earlier in his career, Josh served as the Chief of Analysis for the United States Computer Emergency Readiness Team (US-CERT) where he built from the ground up and subsequently ran the network, endpoint, and malware analysis/forensics capabilities for US-CERT.
