Security programs are difficult to evaluate because the absence of a breach proves nothing. An organization that had no security incidents last year might have been secure, or might have been breached without knowing it, or might have been lucky. Incident rates are a lagging indicator with high noise.
The metrics that indicate whether a security program is actually working are leading indicators: measures of security state that correlate with breach risk. For container security programs, these leading indicators are measurable, improvable, and directly connected to the decisions the program makes.
KPI 1: Attack Surface Reduction Percentage
What it measures: The reduction in total installed packages across the container fleet, from baseline (pre-hardening) to current state. A 75% reduction means the average container in the fleet is running with 25% of the package set it started with.
Why it correlates with breach risk: Every package is a potential CVE source. Every CVE is a potential exploit vector. Reducing the package count reduces the number of potential entry points for exploitation. The correlation is direct.
How to measure it:
attack_surface_reduction = 1 - (current_avg_packages / baseline_avg_packages)
Target: 70-90% reduction is achievable for most application containers. ML containers may achieve 50-60% due to tightly coupled dependency requirements.
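The formula above can be sketched as a small helper (the function name and the 400-to-100 fleet averages are illustrative, not from the source):

```python
def attack_surface_reduction(baseline_avg_packages: float,
                             current_avg_packages: float) -> float:
    """Fraction of the baseline package set removed across the fleet."""
    if baseline_avg_packages <= 0:
        raise ValueError("baseline average must be positive")
    return 1 - (current_avg_packages / baseline_avg_packages)

# Hypothetical fleet: average drops from 400 packages to 100 packages.
print(attack_surface_reduction(400, 100))  # 0.75, i.e. a 75% reduction
```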
KPI 2: CVE Density (CVEs per Image)
What it measures: The average number of CVEs per production container image. Tracked over time, this shows whether the CVE backlog is increasing or decreasing.
Why it correlates with breach risk: Higher CVE density means more known vulnerability vectors. Critical CVE density specifically correlates with breach risk: a production image with active Critical CVEs is more likely to be exploitable than one without.
How to measure it:
cve_density = total_cves_in_production_images / count_of_production_images
Report separately: overall CVE density, Critical CVE density, CVE density in active execution paths vs. installed-but-unused packages.
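One way to compute overall and per-severity density from a list of finding records (the record shape and image digests here are assumptions, not a prescribed schema):

```python
def cve_density(findings, image_count, severity=None):
    """CVEs per production image, optionally filtered to one severity tier."""
    relevant = [f for f in findings
                if severity is None or f["severity"] == severity]
    return len(relevant) / image_count

# Hypothetical findings across a two-image fleet.
findings = [
    {"image": "api@sha256:aaaa", "severity": "Critical"},
    {"image": "api@sha256:aaaa", "severity": "High"},
    {"image": "web@sha256:bbbb", "severity": "High"},
]
print(cve_density(findings, image_count=2))                       # 1.5 overall
print(cve_density(findings, image_count=2, severity="Critical"))  # 0.5 Critical
```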
KPI 3: Mean Time to Remediate (MTTR) by Severity
What it measures: Average time from CVE discovery to verified resolution, measured separately by severity tier and by remediation method (patch vs. removal).
Why it correlates with breach risk: The exposure window is directly proportional to MTTR. A Critical CVE left open for 7 days creates seven times the exposure window of one resolved in 1 day.
Target benchmarks:
- Critical CVEs in active execution path: < 7 days
- Critical CVEs resolved by automated removal: < 24 hours
- High CVEs: < 30 days
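A minimal MTTR calculation over resolved findings, assuming each record carries discovery and resolution timestamps (still-open findings are excluded; the data is illustrative):

```python
from datetime import datetime
from statistics import mean

def mttr_days(findings, severity):
    """Mean days from discovery to verified resolution for one severity tier."""
    durations = [
        (f["resolved"] - f["discovered"]).total_seconds() / 86400
        for f in findings
        if f["severity"] == severity and f.get("resolved") is not None
    ]
    return mean(durations) if durations else None

findings = [
    {"severity": "Critical",
     "discovered": datetime(2024, 3, 1), "resolved": datetime(2024, 3, 4)},
    {"severity": "Critical",
     "discovered": datetime(2024, 3, 2), "resolved": datetime(2024, 3, 7)},
    {"severity": "High",
     "discovered": datetime(2024, 3, 1), "resolved": None},  # still open: excluded
]
print(mttr_days(findings, "Critical"))  # 4.0 days
```

Measuring from a confirming re-scan, as the source recommends, keeps "resolved" honest: the timestamp reflects verified absence of the CVE, not a closed ticket.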
KPI 4: SLA Compliance Rate
What it measures: Percentage of CVE findings remediated within the defined SLA for each severity tier.
Why it matters: SLA compliance rate is the operational integrity metric. A security program with good processes achieves high SLA compliance. A security program that sets SLAs it cannot meet undermines credibility with stakeholders and with developers.
A practical note: A 100% SLA compliance rate may indicate overly lenient SLAs. A 70% compliance rate may indicate appropriate SLAs being consistently worked toward. Track the trend as well as the absolute value.
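A sketch of the compliance calculation, under the assumption that only resolved findings are scored and that SLA windows follow the benchmarks above (how to count still-open findings past their deadline is a policy choice this sketch leaves out):

```python
SLA_DAYS = {"Critical": 7, "High": 30}  # assumed tiers; match your own policy

def sla_compliance_rate(findings, severity):
    """Share of resolved findings in a tier closed within its SLA window."""
    resolved = [f for f in findings
                if f["severity"] == severity
                and f.get("resolved_days") is not None]
    if not resolved:
        return None
    within = sum(1 for f in resolved
                 if f["resolved_days"] <= SLA_DAYS[severity])
    return within / len(resolved)

findings = [
    {"severity": "Critical", "resolved_days": 3},
    {"severity": "Critical", "resolved_days": 10},  # missed the 7-day SLA
    {"severity": "Critical", "resolved_days": 6},
]
print(sla_compliance_rate(findings, "Critical"))  # 2 of 3 within SLA
```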
KPI 5: Actionable Finding Rate
What it measures: Percentage of security findings that result in a documented remediation action or formal exception, as opposed to being ignored or marked as “accepted risk” without review.
Why it correlates with program effectiveness: High actionable finding rates indicate that findings are relevant, credible, and that developers are engaging with them. Low actionable finding rates indicate finding overload or credibility problems — developers are ignoring findings because most are false positives or genuinely not important.
Automated vulnerability remediation improves this metric by filtering out findings in unused packages before they reach developers. When the findings that reach developers are exclusively in packages they actually use, the actionable finding rate increases.
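The rate itself is a simple ratio once each finding records its outcome; the outcome labels and CVE IDs below are hypothetical:

```python
def actionable_finding_rate(findings):
    """Share of findings ending in remediation or a reviewed formal exception."""
    actionable = sum(
        1 for f in findings
        if f["outcome"] in ("remediated", "formal_exception")
    )
    return actionable / len(findings)

findings = [
    {"id": "CVE-2024-0001", "outcome": "remediated"},
    {"id": "CVE-2024-0002", "outcome": "formal_exception"},
    {"id": "CVE-2024-0003", "outcome": "ignored"},   # no review, no action
    {"id": "CVE-2024-0004", "outcome": "remediated"},
]
print(actionable_finding_rate(findings))  # 0.75
```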
Instrumenting the Five KPIs
The data requirements for these KPIs are straightforward if the security pipeline captures them:
Build-time data: Image digest, build timestamp, pre-hardening CVE count, post-hardening CVE count, packages removed, hardening timestamp.
Deployment data: Which image digest is deployed to which environment, when.
CVE data: For each CVE finding, the image it was found in, the discovery timestamp, and the resolution timestamp (from a re-scan that confirms the CVE is no longer present).
Exception data: For CVE findings marked as accepted risk, the rationale and the review date.
With this data, all five KPIs can be calculated from automated records. No manual reporting activity is required.
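The four record types above could be captured with schemas along these lines (field names are one plausible shape, not a standard; adapt to whatever your pipeline already emits):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class BuildRecord:
    """Build-time data: one row per hardened image build."""
    image_digest: str
    built_at: datetime
    pre_hardening_cves: int
    post_hardening_cves: int
    packages_removed: int

@dataclass
class CveFinding:
    """CVE and exception data: one row per finding."""
    cve_id: str
    image_digest: str
    severity: str
    discovered_at: datetime
    resolved_at: Optional[datetime] = None        # set by the confirming re-scan
    exception_rationale: Optional[str] = None     # set only for accepted risk
    exception_review_date: Optional[datetime] = None
```

With deployment records mapping digests to environments, every KPI in this piece reduces to a query over these rows.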
Frequently Asked Questions
What metrics do you use to measure the effectiveness of your security operations?
The five leading indicators that correlate directly with breach risk in container security programs are: attack surface reduction percentage (packages removed relative to baseline), CVE density per production image, Mean Time to Remediate by severity, SLA compliance rate (percentage of CVEs resolved within defined timelines), and actionable finding rate (percentage of findings that result in remediation or formal exception). These metrics are measurable from automated pipeline records without requiring separate reporting activities.
How do you measure the effectiveness of a DevSecOps security program?
Effective DevSecOps programs are measured through leading indicators that predict risk, not lagging indicators like incident rates. Track CVE density trends across the container fleet, MTTR for Critical CVEs (target under 7 days for findings in active execution paths), and attack surface reduction percentage. Compare these against a pre-program baseline — the same number “11 CVEs per image” means nothing without knowing it was 58 before the program launched. Improvement trends presented to leadership demonstrate a program that is working, not just running.
How to measure the success of your DevSecOps security awareness program?
For container security specifically, the actionable finding rate is the clearest signal of developer engagement with security findings. When developers resolve the majority of findings they receive, it indicates that findings are relevant, credible, and appropriately scoped. When actionable rates are low, it signals finding overload or credibility problems — often caused by scanners surfacing CVEs in packages that are installed but never executed. Filtering findings to active execution paths dramatically improves developer response rates.
How to measure security effectiveness in DevSecOps?
Security effectiveness in DevSecOps is measured by tracking whether specific controls are operating continuously and producing outcomes against defined targets. Instrument the pipeline to capture build-time data (pre- and post-hardening CVE counts, packages removed), deployment data (which image digest is running where), CVE data (discovery and resolution timestamps per finding), and exception data (accepted risk rationale). With this data, all key security metrics calculate automatically from records already being generated — no manual reporting layer needed.
The Baseline Imperative
These metrics only demonstrate improvement if there is a baseline to compare against. Before implementing a new security tool or process change, measure the current state:
- Run a full fleet scan and record CVE counts per image
- Calculate average packages per image
- Review the last 90 days of CVE remediation records for MTTR
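Once the baseline is recorded, the delta is one line; using the 58-to-11 CVE density example from this section:

```python
def improvement(baseline, current):
    """Fractional reduction from the recorded baseline."""
    return (baseline - current) / baseline

# CVE density falls from 58 per image (baseline) to 11 (current).
print(f"{improvement(58, 11):.0%}")  # 81%
```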
With a baseline, every subsequent measurement shows the delta. Hardened container images that reduce CVE density from 58 to 11 demonstrate measurable improvement. The same number “11” without the baseline is a data point without context.
Security program managers who present improvement trends to leadership demonstrate a program that is working. Security program managers who present static metrics demonstrate a program that is running.
