This presentation was by Chris Wysopal, the CTO of Veracode.  My notes are below:

“To measure is to know.” – James Clerk Maxwell

“Measurement motivates.” – John Kenneth Galbraith

Metrics do Matter

  1. Metrics quantify the otherwise unquantifiable
  2. Metrics can show trends and trends matter more than measurements do
  3. Metrics can show if we are doing a good job or bad job
  4. Metrics can show if you have no idea where you are
  5. Metrics establish where “You are here” really is
  6. Metrics build bridges to managers
  7. Metrics allow cross-sectional comparisons
  8. Metrics set targets
  9. Metrics let you benchmark against the opposition
  10. Metrics create curiosity

Metrics Don’t Matter (Mike Rothman)

  • It is too easy to count things for no purpose other than to count them
  • You cannot measure security so stop
  • The following is all that matters, and you can’t map security metrics to it:
    • Maintenance of availability
    • Preservation of wealth
    • Limitation on corporate liability
    • Compliance
    • Shepherding the corporate brand
  • Cost of measurement not worth the benefit

Bad metrics are worse than no metrics

Security Metrics Can Drive Executive Decision Making

  • How secure am I?
  • Am I better off than this time last year?
  • Am I spending the right amount of money?
  • How do I compare to my peers?
  • What risk transfer options do I have?

Goals of Application Security Metrics

  • Provide quantifiable information to support enterprise risk management and risk-based decision making
  • Articulate progress towards goals and objectives
  • Provide a repeatable, quantifiable way to assess, compare, and track improvements in assurance
  • Focus activities on risk mitigation in order of priority and exploitability
  • Facilitate adoption and improvement of secure software design and development processes
  • Provide an objective means of comparing and benchmarking projects, divisions, organizations, and vendor products

Use Enumerations

  • Enumerations help identify specific software-related items that can be counted, aggregated, and evaluated over time
  • CVE – Common Vulnerabilities and Exposures
  • CWE – Common Weakness Enumeration
  • CAPEC – Common Attack Pattern Enumeration and Classification
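As a quick sketch of why enumerations matter for metrics: once every finding is tagged with a common identifier such as a CWE ID, findings become countable and comparable. The data below is entirely hypothetical.

```python
from collections import Counter

# Hypothetical scan findings, each tagged with a CWE identifier.
findings = [
    {"app": "portal", "cwe": "CWE-89"},  # SQL Injection
    {"app": "portal", "cwe": "CWE-79"},  # Cross-site Scripting
    {"app": "store",  "cwe": "CWE-79"},
    {"app": "store",  "cwe": "CWE-89"},
    {"app": "api",    "cwe": "CWE-89"},
]

# Because every finding maps to the same enumeration, counts can be
# aggregated and compared across applications and over time.
by_cwe = Counter(f["cwe"] for f in findings)
print(by_cwe.most_common())  # [('CWE-89', 3), ('CWE-79', 2)]
```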

Organizational Metrics

  • Percentage of application inventory developed with SDLC (which version of SDLC?)
  • Business criticality of each application in inventory
  • Percentage of application inventory tested for security (what level of testing?)
  • Percentage of application inventory remediated and meeting assurance requirements
  • Roll up of testing results
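A minimal sketch of how the inventory percentages above could be computed; the inventory records and field names are assumptions for illustration, not from the talk.

```python
# Hypothetical application inventory; the field names are assumptions.
inventory = [
    {"name": "portal",   "criticality": "high",   "tested": True,  "remediated": True},
    {"name": "store",    "criticality": "high",   "tested": True,  "remediated": False},
    {"name": "intranet", "criticality": "medium", "tested": False, "remediated": False},
    {"name": "blog",     "criticality": "low",    "tested": True,  "remediated": True},
]

def pct(pred):
    """Percentage of the inventory satisfying a predicate."""
    return 100.0 * sum(1 for app in inventory if pred(app)) / len(inventory)

print(f"tested: {pct(lambda a: a['tested']):.0f}%")          # tested: 75%
print(f"remediated: {pct(lambda a: a['remediated']):.0f}%")  # remediated: 50%
```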

Organizational Metrics (continued)

  • Cost to fix defects at different points in the software lifecycle
  • Cost of data breaches related to software vulnerabilities

Testing Metrics

  • Number of threats identified in threat model
  • Size of attack surface identified
  • Percentage code coverage (static and dynamic)
  • Coverage of defect categories (CWE)
  • Coverage of attack pattern categories (CAPEC)
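The two category-coverage bullets above reduce to a set-intersection ratio. The CWE sets below are illustrative, not from the talk.

```python
# CWE categories in scope for the program (e.g. a Top-N list): illustrative.
target_cwes = {"CWE-79", "CWE-89", "CWE-22", "CWE-352", "CWE-434"}

# CWE categories the current test suite actually exercises: illustrative.
covered_cwes = {"CWE-79", "CWE-89", "CWE-352"}

# Coverage is the fraction of in-scope categories the tests touch.
coverage = len(covered_cwes & target_cwes) / len(target_cwes)
print(f"CWE category coverage: {coverage:.0%}")  # CWE category coverage: 60%
```

The same ratio works for CAPEC attack-pattern coverage by swapping in CAPEC identifiers.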

SANS Top 25 Mapped to Application Security Methods (CWE, Title, Education?, Manual Process?, Tools?, Threat Model?)

Weakness Class Prevalence based on 2008 CVE data (Mitre?)

Basic Metrics: Defect Counts

  • Design and implementation defects
    • CWE identifier
    • CVSS score
    • Severity
    • Likelihood of exploit

Automated Code Analysis Techniques

  • Static Analysis (White Box Testing)
  • Dynamic Analysis (Black Box Testing)

Manual Analysis

  • Manual Penetration Testing
  • Manual Code Review
  • Manual Design Review
  • Threat Modeling

WASC Web Application Security Statistics Project 2008

  • Goals
    • Identify the prevalence and probability of different vulnerability classes
    • Compare testing methodologies against what types of vulnerabilities they are likely to identify
  • Summary
    • 12,186 web applications with 97,554 detected vulnerabilities
    • More than 13% of all reviewed sites can be compromised completely automatically
    • About 49% of web applications contain high-risk vulnerabilities detectable by scanning
    • Manual and automated white-box assessment can detect these high-risk vulnerabilities with a probability of up to 80-96%
    • 99% of web applications are not compliant with the PCI DSS standard
  • Compare to 2007 WASC Project
    • The number of sites with SQL Injection fell by 13%
    • The number of sites with Cross-site Scripting fell by 20%
    • The number of sites with various types of Information Leakage rose by 24%
    • The probability of compromising a host automatically rose from 7% to 13%