Paul Krugman’s “Productivity isn’t everything, but, in the long run, it is almost everything” applies well to companies. Companies seek efficiency and productivity. No organisation wants to spend more money or effort than it absolutely has to. That of course applies to a company’s security function. Security must be provably efficient.
In Corporateland, KPIs reign supreme. From boardrooms to SOC floors, there’s a constant demand for metrics that are easy to quantify, digest, and act upon. While metrics are useful for demonstrating value, monitoring progress, and thus justifying budgets, they introduce a particular problem for security professionals.
The appeal of easily measurable metrics
Organizations gravitate toward metrics that are easy to understand and somewhat easy to measure.
Number of SIEM alerts processed per day, mean time to detect/respond to incidents, patch compliance rate, employee security awareness training compliance rates; these metrics are attractive because they’re straightforward to track and report. They fit neatly into dashboards, quarterly reviews and executive summaries. They offer a sense of control over a complex and often chaotic security landscape.
Why “chaotic”? Because security is highly contextual (you can’t approach securing a system running on Azure the same way you’d tackle a system hosted on bare-metal machines in a colocated DC), and today’s interconnected systems are mind-bogglingly complex.
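To make one of those metrics concrete: mean time to detect (MTTD) and mean time to respond (MTTR) are just arithmetic over incident timestamps, which is part of why they’re so popular. Here’s a minimal sketch; the incident records and field names are made up for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical incident records: when the intrusion began, when it was
# detected, and when it was resolved. Field names are illustrative.
incidents = [
    {"start": datetime(2024, 5, 1, 9, 0),
     "detected": datetime(2024, 5, 1, 9, 45),
     "resolved": datetime(2024, 5, 1, 13, 0)},
    {"start": datetime(2024, 5, 3, 22, 0),
     "detected": datetime(2024, 5, 4, 8, 30),
     "resolved": datetime(2024, 5, 4, 11, 0)},
]

def mean_minutes(deltas):
    """Average a list of timedeltas, expressed in minutes."""
    total = sum(deltas, timedelta())
    return total.total_seconds() / 60 / len(deltas)

# MTTD: average gap between intrusion start and detection.
mttd = mean_minutes([i["detected"] - i["start"] for i in incidents])
# MTTR: average gap between detection and resolution.
mttr = mean_minutes([i["resolved"] - i["detected"] for i in incidents])
print(f"MTTD: {mttd:.1f} min, MTTR: {mttr:.1f} min")
```

Note how easy this is to compute and chart, and how little it says about whether the detections were meaningful or the resolutions thorough.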
So metrics come in really handy for auditors. Auditors make a living by scrutinising wildly different organisations, with varying contexts and threat models. They cannot afford the time necessary to understand an organisation’s business context, let alone its technical context. So the industry develops abstractions, aka compliance standards, to make audits easier to run, and audit reports easier to read and understand. SOC 2, for example, provides a common frame of reference so that auditors can look at an organisation’s metrics and definitively state whether that org complies with the standard.
However, there are two problems with this: audit standards don’t paint the whole picture, and auditors can be misled.
Let me give you an example. Imagine two organisations that enforce 2FA for all users. The first one allows time-based OTP, and requires an OTP challenge once every 30 days. On average, employees receive an OTP challenge once every 25 days. The second one allows time-based OTP and Yubikeys, and requires an MFA challenge every 7 days. On average, employees receive an MFA challenge once a day. Both companies comply with SOC 2 requirements, but they aren’t on the same level in terms of security assurance.
Here’s another example. A company has 100 vulnerabilities affecting the container images in its registry. That company’s engineers create a ticket for each vulnerability; some get fixed, most get ignored. Another company has 10,000 vulnerabilities affecting the container images in its registry. This company does not bother tracking vulnerabilities with tickets, but rebuilds all images that run in production daily, using the latest upstream base images and system packages. Which company do you think is more secure? And which is SOC 2 compliant? I think you get my point.
Optimizing for the metric
Security teams under pressure to hit specific metrics often end up prioritizing what is being measured over what actually matters. For example, a focus on the sheer number of alerts processed by the SOC can lead to rushing through investigations or prematurely closing cases without thorough review. You might also pride yourself on the low percentage of coworkers who fell victim to a phishing simulation, but a paper published by Grant Ho et al. from the University of Chicago unambiguously states: “anti-phishing training programs, in their current and commonly deployed forms, are unlikely to offer significant practical value in reducing phishing risks”.
Collecting, reporting, and justifying metrics, even easily measurable ones, generates significant administrative overhead that can detract from higher-value work, specifically because of modern systems’ high complexity. Moreover, easily measurable metrics rarely capture the nuance and context of cybersecurity efforts, leaving professionals to defend their work against shallow KPIs. Advanced persistent threats (APTs), insider threats, and supply chain attacks can’t be boiled down to a single number. Security is an ongoing, iterative process.
What’s the Alternative?
To make metrics work for security rather than against it, we all need to rethink our approach.
Balance Quantitative and Qualitative Metrics
Complement hard numbers with narrative insights. For example, combine “percentage of incidents resolved” with case studies of high-impact incidents that demonstrate the value of in-depth investigations.
Prioritize Context Over Raw Data
Instead of tracking the total number of vulnerabilities in a Docker image, measure how many were introduced by components added on top of the base layers. This shifts focus from volume to engineering excellence.
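This distinction is easy to compute once you have two scan results: one for the base image alone and one for the final image. A minimal sketch, assuming the scanner output (from a tool such as Trivy or Grype) has already been reduced to sets of CVE IDs; the CVE IDs below are made up:

```python
# Hypothetical scan results reduced to sets of CVE IDs. In practice these
# would come from a vulnerability scanner; the data here is illustrative.
base_image_vulns = {"CVE-2024-0001", "CVE-2024-0002", "CVE-2024-0003"}
final_image_vulns = {"CVE-2024-0001", "CVE-2024-0002", "CVE-2024-0003",
                     "CVE-2024-0100", "CVE-2024-0101"}

# Inherited from the base layers: typically fixed by rebasing/rebuilding.
inherited = final_image_vulns & base_image_vulns

# Introduced by components the team itself added on top of the base image:
# these reflect the team's own engineering choices, and are the more
# interesting number to track.
introduced = final_image_vulns - base_image_vulns

print(f"{len(introduced)} introduced on top of {len(inherited)} inherited")
```

The raw total (5) looks worse than the number that actually reflects the team’s work (2).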
Apply risk coefficients
Tie metrics to the organization’s unique risk landscape. For instance, tracking vulnerabilities in high-value systems is more meaningful than measuring vulnerabilities in internal applications.
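One simple way to do this is to weight each finding by the value of the asset it affects, instead of counting findings flat. A sketch with made-up weights and findings; neither the tier names nor the coefficients are a standard, they’re assumptions for illustration:

```python
# Hypothetical risk coefficients per asset tier. The weights are
# illustrative; each organisation would calibrate its own.
risk_weights = {
    "internet-facing": 3.0,  # externally reachable, high-value systems
    "internal": 1.0,         # internal applications
    "sandbox": 0.2,          # throwaway environments
}

# Hypothetical vulnerability findings tagged with the affected asset's tier.
findings = [
    {"cve": "CVE-2024-0001", "asset_tier": "internet-facing"},
    {"cve": "CVE-2024-0002", "asset_tier": "internal"},
    {"cve": "CVE-2024-0003", "asset_tier": "sandbox"},
    {"cve": "CVE-2024-0004", "asset_tier": "internet-facing"},
]

raw_count = len(findings)
# Risk-weighted score: each finding counts proportionally to its tier.
weighted = sum(risk_weights[f["asset_tier"]] for f in findings)
print(f"raw count: {raw_count}, risk-weighted score: {weighted}")
```

Two dashboards showing “4 vulnerabilities” can hide wildly different risk; the weighted score at least encodes where those vulnerabilities live.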
Communicate Security as a Journey, Not a Destination
Help leadership understand that cybersecurity metrics are a snapshot, not the full picture. Talk about risk scenarios and how you’re addressing them instead.
Necessary evil
Metrics are vital for managing cybersecurity programs and demonstrating their value. Overreliance on shortcut metrics without contextual nuance is conducive to a poor security program. Next time a security dashboard flashes an impressive-looking statistic, think about the story that number is telling, and ask yourself: is that a story I care about?