Having spent a significant amount of time on process design recently, the importance of balancing process inputs and outputs is clear to me. In even the most basic process flows, a mismatch between what one step produces and what the next step expects can cause significant issues. As I work with organizations to first understand, then document and improve their threat intelligence programs, this issue becomes more evident.
Consider this multi-step threat intelligence process, derived from Optiv’s 11-step model:
Acquisition → Triage → Enrichment → Distribution → Execution
Threat data comes into the system, is analyzed, and is then combined with additional information to create an actionable piece of intelligence. This piece is distributed in either a machine-readable or human-readable fashion, and some action is taken. The model described above is common and in wide use among companies adopting threat intelligence programs. It is also a model that is commonly mismatched and, thus, out of balance.
Consider the following characteristics of each of these process (functional) elements to understand how a mismatch in maturity can cause chaos.
An organization has subscribed to threat intelligence feeds from a number of providers. The feeds are primarily tactical in nature, meaning they describe IP addresses, domain names, and other information about malware. That information comes into the organization as a PDF document emailed several times a day to an analyst, who reads it and determines whether the information is valid. The analyst then performs various searches to find out more about the threat described in the report, and looks through the organization's logging platforms to determine whether any of the indicators in the PDF appear. If a match is found, the analyst sends an email to the security operations team, who create rules to block the offending internal IP addresses as well as the known command and control (C2) servers on the Internet. The operations team then executes that request by pushing rule changes at the next change window.
Let’s break this down:
Threat data arriving as PDF documents, over email, and addressed to a specific person demonstrates low maturity, primarily because the intelligence acquisition work rests on an individual. Tactical information such as IP addresses, domain names, and artifacts like MD5 hashes of malware is meant to be consumed at a rapid pace, primarily through automated means. When human analysts are involved directly, they must either parse the files manually or create scripts to consume the information quickly and automatically. The resulting automation can turn this into a high-maturity element.
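The kind of script mentioned above can be sketched in a few lines. The regular expressions and the sample feed text here are illustrative assumptions, not any particular vendor's format:

```python
import re

# Hypothetical sketch: pull atomic indicators out of raw feed text.
# The patterns are deliberately simple illustrations, not production-grade
# (the IP pattern, for instance, does not validate octet ranges).
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
MD5_RE = re.compile(r"\b[a-fA-F0-9]{32}\b")

def extract_indicators(feed_text):
    """Return the unique IP addresses and MD5 hashes found in feed text."""
    return {
        "ips": sorted(set(IP_RE.findall(feed_text))),
        "md5s": sorted(set(MD5_RE.findall(feed_text))),
    }

sample = "C2 observed at 203.0.113.7, payload d41d8cd98f00b204e9800998ecf8427e"
print(extract_indicators(sample))
# → {'ips': ['203.0.113.7'], 'md5s': ['d41d8cd98f00b204e9800998ecf8427e']}
```

Even a crude extractor like this removes the copy-and-paste step and lets the same feed volume flow through without an analyst retyping indicators.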
An analyst adding external context to the supplied intelligence data is a higher maturity function and requires skills and capabilities that are beyond basic. Being able to successfully contextualize and then “enrich” the basic atomic indicators is not something common to low-maturity organizations.
The rest of the functional elements are essentially performed by hand, by individual people in the organization. This approach is difficult to scale and can be very inefficient. With the volume and velocity of malicious activity today, it is highly unlikely that a human being can keep up and process the information in a timely fashion.
The mismatch appears at volume and velocity. Atomic indicators such as malware hashes typically arrive in large updates, with potentially hundreds of thousands of bad files in each version of the list. Taking that higher-maturity input and processing it largely by hand, in a disconnected fashion, is an example of a higher-maturity process feeding into a lower-maturity one. The downstream work of distributing the information to the operations team, and then to automated systems, will suffer because of the volume of inbound information and the velocity at which it is received. Thousands of IP addresses or MD5 hashes take time to parse and process, so an analyst can very quickly acquire a significant backlog and never catch up.
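To make the backlog problem concrete, here is a rough back-of-the-envelope model. The daily feed volume and per-indicator handling time are assumptions chosen purely for illustration:

```python
# Hypothetical numbers: a feed delivering 10,000 indicators per day,
# an analyst spending 30 seconds per indicator, 8 working hours per day.
indicators_per_day = 10_000
seconds_per_indicator = 30
analyst_seconds_per_day = 8 * 60 * 60  # 28,800 seconds

processed_per_day = analyst_seconds_per_day // seconds_per_indicator
backlog_growth_per_day = indicators_per_day - processed_per_day

print(f"Processed per day: {processed_per_day}")        # → 960
print(f"Backlog growth per day: {backlog_growth_per_day}")  # → 9040
print(f"Backlog after one 5-day week: {backlog_growth_per_day * 5}")  # → 45200
```

Under these assumed rates the analyst clears fewer than 10 percent of the inbound indicators, and the gap compounds every day; no amount of individual effort closes it.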
In this example it is important to match the inputs and outputs. It would be far more useful to develop a workflow that feeds the inbound information into some sort of automation platform, so that the data does not need to be processed by hand and can be handled in a timely, efficient manner.
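One minimal shape such a workflow might take, using entirely hypothetical indicator and log data, is a set-intersection match in place of the manual log search:

```python
# Hypothetical sketch: match inbound feed indicators against log data
# automatically, so only actual hits reach a human for review.
def match_indicators(feed_indicators, log_entries):
    """Return the feed indicators that actually appear in the logs."""
    seen = {entry["dest_ip"] for entry in log_entries}
    return sorted(set(feed_indicators) & seen)

# Illustrative data; field names are assumptions, not a real log schema.
feed = ["203.0.113.7", "198.51.100.9", "192.0.2.44"]
logs = [
    {"src_ip": "10.0.0.5", "dest_ip": "203.0.113.7"},
    {"src_ip": "10.0.0.8", "dest_ip": "93.184.216.34"},
]

hits = match_indicators(feed, logs)
print(hits)  # → ['203.0.113.7']
```

The design point is the funnel: automation consumes the full high-volume feed, and the analyst's limited time is spent only on confirmed matches, which is where human judgment actually adds value.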
Why worry about matching input/output maturity in workflows like this one? The answer is simple for anyone who has been in this situation. Mismatched functional elements create frustration and inefficiency, and ultimately lead to analyst burnout and process ineffectiveness.
For an organization to maximize its investments in people and technologies, processes must be matched as closely in maturity as possible. Failure to do so can be costly: expensive resources are wasted without delivering their benefits.
Sometimes, counter-intuitively, the best answer is to raise the maturity of all of the program elements equally before adding advanced capabilities. As an example, hiring an analyst who can effectively reverse engineer malware is only useful if the organization can transform that person's work into findings that can be directly applied to protect the organization. Otherwise, it's just something cool that has no effect on security.
Remember, security is the ultimate exercise in process efficiency. Only by processing information quickly and acting on it can defenders be effective. When process maturity is out of sync, the organization is likely wasting money, time, and precious human talent without increasing security.