
Intelligence is Not a Numbers Game

If you’ve ever dabbled in data analytics, product design, or digital marketing, you’re likely familiar with vanity metrics. Just as their name implies, vanity metrics are numbers that look good on paper and appear to indicate performance or value. In reality, however, these numbers are deceptive, trivial, and far less useful than meets the eye.


For example, let’s say that a blog sees a 500 percent increase in page views within a 24-hour period. Such a figure sounds impressive. But it turns out that the blog had launched just the day before, so this figure, though still an accurate percentage, reflects an increase from a mere 10 page views to 60. This scenario reinforces why page views are a vanity metric: they can be easily manipulated to sound impressive but reveal little about how a website is actually performing.

Although the term itself is lesser-known outside the types of business functions I mentioned above, the underlying concept of vanity metrics persists across the enterprise—including among intelligence programs. 

Indeed, many of the key performance indicators (KPIs) intelligence programs often use to evaluate their operations aren’t really KPIs at all because they don’t accurately indicate performance—they’re simply vanity metrics masquerading as KPIs. Common examples include:

– Number of intelligence reports written

– Number of indicators of compromise (IoCs) processed

– Number of data points collected

– Number of keyword alerts received


The problem with relying too heavily on vanity metrics like these is that they can distort our perception of how an intelligence operation is performing. For example, let’s say that the objective of our operation is to reduce fraud losses. We might be tempted to measure its progress by tracking, among other metrics, the number of data points collected in support of the operation. But while a large or larger-than-average number of data points collected would look good on paper and might even suggest increased collections efficiency, it would not definitively indicate that the operation was successful in achieving its objective of reducing fraud losses.

In order to accurately evaluate such an operation, we’d need to choose KPIs that 1) map to the operation’s requirements and objectives; 2) can feasibly be measured at scale; and 3) help us identify any blind spots or areas for improvement that arise. Rather than the number of data points collected, the right KPIs for this operation might include the following (a rough sketch of how a few of them could be computed appears after the list):

– Number of emerging fraud schemes identified

– Number of new anti-fraud measures informed

– Number of fraud attempts that were prevented versus the number that were successful

– Relative reduction in fraud losses
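To make the contrast concrete, here’s a minimal Python sketch of how a few of these outcome-oriented KPIs could be computed, as opposed to simply tallying data points collected. The FraudEvent record, field names, and figures are hypothetical and invented for this example; nothing here comes from a real program.

```python
from dataclasses import dataclass

@dataclass
class FraudEvent:
    scheme: str        # e.g., "account takeover", "refund abuse" (hypothetical labels)
    prevented: bool    # True if the attempt was blocked before any loss occurred
    loss_usd: float    # realized loss; 0.0 when the attempt was prevented

def fraud_kpis(events: list[FraudEvent], prior_period_losses: float) -> dict:
    """Outcome-focused KPIs, rather than raw counts of data points collected."""
    prevented = sum(1 for e in events if e.prevented)
    successful = sum(1 for e in events if not e.prevented)
    current_losses = sum(e.loss_usd for e in events)
    return {
        # crude proxy: distinct schemes observed this period
        "schemes_identified": len({e.scheme for e in events}),
        "prevented_vs_successful": (prevented, successful),
        "relative_loss_reduction": (
            (prior_period_losses - current_losses) / prior_period_losses
            if prior_period_losses else 0.0
        ),
    }

# Example: two attempts blocked, one successful; losses down from $50,000 last period
events = [
    FraudEvent("account takeover", True, 0.0),
    FraudEvent("refund abuse", True, 0.0),
    FraudEvent("account takeover", False, 12_000.0),
]
print(fraud_kpis(events, prior_period_losses=50_000.0))
```

None of these figures is harder to game than a collection count in principle, but each one ties directly back to the stated objective of reducing fraud losses.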

Aside from distorting our perception of operational performance, however, vanity metrics can also distort our perception of value—particularly with respect to vendor offerings. Many intelligence programs obtain the data, intelligence, and tools on which their operations rely from third-party vendors, but choosing the right vendor can be difficult given the abundance of misleading claims in the market—many of which are fueled by vanity metrics.

As I mentioned in one of my previous articles, collection strategies are perhaps the biggest differentiator among intelligence vendors. But because many intelligence consumers and decision-makers aren’t always aware of exactly which types of data and sources are best suited for their operations, they often—and understandably—assume that more is better. This assumption is largely why many vendors choose to highlight the following vanity metrics in order to further differentiate their offerings:

– Number of data sources 

– Number of total data points

– Number of new data points collected per day

– Number of customers

But similar to how web page views aren’t indicative of website performance, these types of metrics aren’t indicative of the extent to which an intelligence offering is suited to a program’s needs and objectives. It’s crucial to remember that regardless of how they are marketed or perceived, the quantitative aspects of a vendor’s collection strategy do not reflect the qualitative aspects. The same principle applies to your operational KPIs. In other words, there’s no point in having billions of data points if those data points aren’t timely, accurate, and actionable, and don’t adequately map to your intelligence objectives and requirements.
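As a rough illustration of that principle, the sketch below uses a hypothetical scoring rubric (the criteria, 0–5 ratings, and weights are invented for this example, not drawn from any vendor or framework) that rates an offering on timeliness, accuracy, actionability, and fit with requirements, while ignoring raw volume entirely.

```python
# Hypothetical rubric: the criteria, 0-5 ratings, and weights below are illustrative only.
QUALITATIVE_WEIGHTS = {
    "timeliness": 0.25,       # how quickly intelligence reaches you after collection
    "accuracy": 0.30,         # validated true-positive rate of what's delivered
    "actionability": 0.25,    # can teams act on it without heavy extra enrichment?
    "requirement_fit": 0.20,  # coverage of your stated objectives and requirements
}

def score_vendor(ratings: dict[str, float]) -> float:
    """Weighted qualitative score on a 0-5 scale; raw data-point counts never enter it."""
    return sum(weight * ratings.get(criterion, 0.0)
               for criterion, weight in QUALITATIVE_WEIGHTS.items())

# "Billions of data points" is deliberately absent from the calculation.
vendor_a = {"timeliness": 4.5, "accuracy": 3.0, "actionability": 4.0, "requirement_fit": 2.5}
vendor_b = {"timeliness": 3.5, "accuracy": 4.5, "actionability": 4.0, "requirement_fit": 4.5}
print(score_vendor(vendor_a), score_vendor(vendor_b))
```

The specific criteria and weights will vary by program; the point is simply that volume never appears in the equation.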
