Incident Response

Using Relative Metrics to Measure Security Program Success

In my previous column, I discussed the “So What Factor” (http://www.securityweek.com/so-what-factor-information-security), which reminds us that we must know our audience. Many of the people we interact with professionally will not be as enamored by the beauty or elegance of a technical solution as we are. Instead, they will be more concerned with consequences, effects, and results. As such, it’s important to remember to communicate appropriately towards those ends.

If we are successful in our communication efforts, we may obtain the budgetary resources we are after. If we obtain those resources, we can implement the programs necessary to address the risks we communicated. At that point, our executives will soon want to measure the effectiveness of their investment. This should come as no surprise, of course. But how can we effectively measure our success and progress? This is an extremely important question, and one that presents a great challenge to most organizations. It is a topic I would like to address in this column, as previously promised.

When looking to measure the success and progress of a security program, it is important to think about what success and progress actually mean. Or, to put it another way, before we decide how to measure our achievement, it helps to be clear about what we are trying to achieve. What are the risks we are trying to mitigate? What are the goals and priorities we have enumerated?

When we think about metrics in these terms, it’s clear that absolute metrics don’t help us very much. In other words, they don’t help us understand our success and progress against our goals and priorities because they are not tied or mapped to those goals and priorities in any way. We need more meaningful metrics in order to properly evaluate ourselves – metrics that are relevant to the risks and threats faced by the business.

To see this illustrated, let’s examine a metric that many organizations use on a daily or weekly basis: the number of compromised endpoints over a given time period. At first glance, this may seem like a decent metric, but allow me to ask a simple question. If we see one compromised endpoint in a given day, does that mean we’re doing better than if we see 100 compromised endpoints in a given day? Find it difficult to answer that question? You’re not alone.

There are a number of issues with looking at metrics like these. First off, our example metric implicitly assumes timely and accurate detection. If that assumption held, perhaps the metric would be easier to work with. What is the issue with this assumption, you ask? Our example metric assumes that an organization’s detection works, and that it works well. In practice, however, this turns out to be a fairly unreasonable assumption. Detection is difficult. Timely and accurate detection is extremely difficult (though not impossible). Because of this, we are always effectively measuring a biased sample. Even if we assume that our analysis and forensics bring us to the correct conclusion 100% of the time, we can still never know whether what we’ve detected is anywhere near ground truth.

Further, this metric actually penalizes organizations with better detection relative to those with worse detection. How so? Say Company A has good detection. Given the volume of malicious redirects and phishing emails out there, sooner or later, no matter how good Company A’s defenses are, its users will fall victim to some of these campaigns. When that happens, Company A will detect the compromised endpoints and contain and remediate them in a timely manner. Say that Company B does not have good detection. Because of this, it will not detect these compromised endpoints, and the endpoints will remain infected for days, weeks, or months. If we look only at absolute metrics, such as our example metric, Company B seems to have “better security” than Company A. Preposterous! Yet, unfortunately, this is how many organizations measure the effectiveness of their security programs.

In addition to the sample bias I’ve just discussed, absolute metrics have another critical flaw. Absolute metrics do not actually help us understand our success and progress relative to our goals and priorities.  Ultimately, what we measure ought to tell us what we are doing well, and where we need to improve.  That is one key purpose of metrics, at least as I understand them.

Enter relative metrics. What are relative metrics, you ask? Relative metrics are metrics tied to the risks, goals, and priorities that the organization has strategically enumerated. The goal of any security program is to mitigate the risk posed by the threat landscape through goals and priorities strategically focused on achieving that end. This concept has been discussed elsewhere, including in one of my previous SecurityWeek pieces entitled “Is Security An Unsolvable Problem?”

Let’s walk through an example of a relative metric to help convey the concept. Say that one of the risks we want to mitigate is the theft of payment card data. This example is likely on the minds of many people these days. Let’s use a simple metric to measure how we are doing at mitigating that risk: the number of payment cards stolen from our organization over a given time period. Let’s revisit Company A and Company B and see how they perform. Say Company A detects 100 infected point-of-sale (POS) systems but, thanks to its timely and accurate detection capabilities, is able to contain and remediate those endpoints before any data has been taken. On the other hand, Company B does not detect any compromised POS systems. The POS systems remain infected for months, and millions of payment cards are stolen. Recall that with the absolute metric, Company B appeared to have a better security program than Company A. When we use a relative metric, however, it is clear that Company A’s security program is much better – at least when it comes to mitigating this particular risk that we are measuring. The small sketch below makes this contrast concrete.
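
To make the contrast concrete, here is a minimal sketch in Python. It is not part of the original example; all figures, including the three million cards assumed stolen from Company B, are illustrative assumptions. It simply ranks the two companies under each metric, showing that the absolute metric rewards Company B for failing to detect its infections, while the relative metric correctly favors Company A.

```python
# Illustrative sketch only: hypothetical numbers for Company A and Company B.
companies = {
    "Company A": {
        "detected_compromised_pos": 100,  # good detection: infections are found quickly
        "cards_stolen": 0,                # contained and remediated before data is taken
    },
    "Company B": {
        "detected_compromised_pos": 0,    # poor detection: infections go unnoticed
        "cards_stolen": 3_000_000,        # POS systems stay infected for months
    },
}

# Absolute metric: compromised endpoints detected in the period.
# Lower looks "better", which rewards Company B for simply not seeing its infections.
by_absolute = sorted(companies, key=lambda c: companies[c]["detected_compromised_pos"])

# Relative metric: tied to the risk we actually care about (payment card theft).
by_relative = sorted(companies, key=lambda c: companies[c]["cards_stolen"])

print("Ranked by absolute metric (fewest detected compromises first):", by_absolute)
print("Ranked by relative metric (fewest cards stolen first):        ", by_relative)
```

Running the sketch, Company B ranks first on the absolute metric and last on the relative metric, which matches the intuition that Company A’s program is the one actually mitigating the risk.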

Organizations have historically used absolute metrics to measure the success and progress of their security programs. While there is much conventional wisdom around this approach, there is little logical basis for it. Instead, organizations should seek to develop relative metrics to measure their success and progress against the risks, goals, and priorities they have enumerated. These more meaningful metrics allow organizations to home in on what they do well and where they can improve. These more accurate measurements allow organizations to make changes in a more scientific manner, ultimately leading to an improved security posture.
