This presentation was by Keith Turpin from The Boeing Company.  About three years ago, all of Boeing’s application security assessments came from outsourced service providers.  Boeing had little control over the people and processes involved and had difficulty integrating the resulting controls into the SDLC, so they decided to bring these functions in house.  The goal of this presentation is to show some of the issues they ran into and how they addressed those problems.  My notes from the presentation are below:

Contracted Services Considerations

  • Some Advantages:
    • Highly skilled
    • Established tools, processes, and standards
    • Unbiased
    • Available as needed
  • Some Disadvantages:
    • Expensive, especially for an extended engagement
    • Less control and flexibility
    • Not familiar with company processes and culture
    • Rotating staff

Planning

  • Considerations for establishing an internal team:
    • Time to staff and train the team
    • Overlap of external and internal teams
    • Development of processes and standards
    • Acquiring necessary tools

Service Model

  • Define the services your team will provide.  This will be greatly influenced by:
    • The team’s size and skills
    • The number of applications you have to support
    • The tools available
    • The level of executive support
    • The funding model
      • Who pays for your services
    • The team’s role
      • Development support, pre-deployment testing, or post-deployment auditing and pen testing

Staffing the Team

  • Decide how to staff your team and what skills you need.  Possible candidates include:
    • Experienced Application Testers
      • This is ideal from a skills standpoint, but people in this category may be harder to find, cost more, and may not be familiar with your company or fit its culture.
    • Experienced Developers
      • Developers will have a good understanding of the technologies, but may not understand security principles.  Their focus is on what an application is intended to do, not what it can be made to do.
    • Other IT Security Professionals
      • They have a good understanding of security principles, but may lack specific technical skills.  However, some skills may provide a useful overlap, like experienced OS or network testers.
    • Service and Project Managers
      • Building a new team, defining processes and standards, managing workflow and handling customer relations requires a set of skills that is as important as, but distinct from, technical testing skills.

Selecting Tools

  • There are a lot of options when it comes to tools.  What you choose depends on the services you want to provide, your team’s skills and your budget.
    • Commercial vs. Free or Low Cost Tools
      • Commercial tools scale to support enterprise use, utilize a higher degree of automation and come with product support.  They also come with a big price tag.
      • Open source and low-cost tools allow for more customization, are free or inexpensive, and usually have a supportive user community, but they often require a higher degree of user knowledge and skill.
    • Types of Tools
      • Vulnerability Scanners
        • Commercial examples include IBM AppScan, HP WebInspect and Cenzic Hailstorm
      • Source Code Analysis
        • There are commercial options like Fortify or open source tools like the OWASP Yasca Project
      • Client Side Web Proxies
        • Options include WebScarab, Burp Suite and Charles Proxy
      • Other Tools
        • These include password crackers, hex editors, text extractors, browser plug-ins, integrated development environments, network mapping, network traffic analysis, and exploitation tools (a minimal scripted check along these lines is sketched below)
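
To illustrate the low-cost end of the spectrum, below is a minimal sketch of the kind of scripted check an in-house team might write to supplement its scanners.  It is not based on any of the commercial products named above; it simply flags verbose or missing HTTP headers using Python and the requests library, and the target URL is a placeholder.

```python
import requests

# Headers that commonly leak system information (server and framework versions).
LEAKY_HEADERS = ["Server", "X-Powered-By", "X-AspNet-Version"]

# Security headers whose absence is often worth a manual follow-up.
EXPECTED_HEADERS = ["Strict-Transport-Security", "X-Content-Type-Options", "X-Frame-Options"]


def check_headers(url):
    """Fetch a URL and report header-based observations worth validating by hand."""
    notes = []
    resp = requests.get(url, timeout=10, allow_redirects=True)
    for name in LEAKY_HEADERS:
        value = resp.headers.get(name)
        if value:
            notes.append(f"Possible information leakage: {name}: {value}")
    for name in EXPECTED_HEADERS:
        if name not in resp.headers:
            notes.append(f"Missing security header: {name}")
    return notes


if __name__ == "__main__":
    # Placeholder target; substitute an application you are authorized to test.
    for note in check_headers("https://app.example.com/"):
        print(note)
```

Even a small script like this is subject to the same ground rules as any other tool: run it only against systems you are authorized to assess, and treat every hit as a candidate finding to be validated, not a confirmed vulnerability.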

What to Assess

  • Measuring an application’s risk (a simple, illustrative scoring sketch follows this list):
    • The Types of Users
      • Privileged Users, employees, suppliers, customers or the general public
    • The Sensitivity of the Data
      • Intellectual Property, PII or other regulatory requirements
    • Availability and Integrity Requirements
      • The impact to the business if compromised
    • Technology and Environmental Considerations
      • What technologies are used, where the application is deployed, …
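
The presentation did not give a scoring formula, but a simple weighted scheme along the lines sketched below can turn the factors above into a rough prioritization number.  The factor names, scores and weights are illustrative assumptions, not Boeing’s model.

```python
# Illustrative ordinal scores for each risk factor (higher = riskier).
USER_EXPOSURE = {"privileged users": 1, "employees": 2, "suppliers": 3, "customers": 4, "general public": 5}
DATA_SENSITIVITY = {"public": 1, "internal": 2, "pii": 4, "intellectual property": 5}
BUSINESS_IMPACT = {"low": 1, "moderate": 3, "high": 5}  # availability/integrity impact if compromised


def application_risk(users, data, impact):
    """Combine the three factors into a single rough score used to prioritize assessments."""
    return USER_EXPOSURE[users] + DATA_SENSITIVITY[data] + BUSINESS_IMPACT[impact]


# Example: an internet-facing application handling PII with high business impact.
print(application_risk("general public", "pii", "high"))  # 14 out of a possible 15
```

Technology and environment (for example, an internet-facing deployment versus an internal one) can then be layered on as a modifier or an additional factor.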

Gather Necessary Information

  • Before starting an assessment you will need to gather important information:
    • Application contacts
    • Server contacts
    • The process for getting accounts
    • A description of what the application does
    • The description or diagram of the system architecture

Assessment Planning Meeting

  • Meet with the application development and support teams:
    • Get a demonstration of the application
    • Review the information gathered to support the assessment
    • Discuss the testing process and ground rules
      • No changes to the code during testing
      • Backups of the application servers and databases
      • How to address system crashes during testing
      • Database corruption issues
      • Emails generated by the application

Testing Notifications

  • You should have a process to notify affected parties before the actual testing begins.
    • Key system contacts
    • Intrusion detection teams
    • Other assessors
  • Information to include in the notification (a simple notification template is sketched after this list):
    • Source IP addresses
    • Target IP addresses, URL, system name
    • Testing schedule
    • Assessment team contacts
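
A notification can be as simple as a templated message built from the fields above.  The sketch below is a hypothetical template, not a Boeing artifact, and the addresses and schedule are placeholders.

```python
def testing_notification(source_ips, target, schedule, contacts):
    """Build a plain-text notification for system contacts, intrusion detection teams and other assessors."""
    lines = [
        "Subject: Upcoming application security assessment",
        "",
        "Source IP addresses: " + ", ".join(source_ips),
        "Target (IP / URL / system name): " + target,
        "Testing schedule: " + schedule,
        "Assessment team contacts: " + ", ".join(contacts),
        "",
        "Traffic from the source addresses above during this window is authorized testing.",
    ]
    return "\n".join(lines)


print(testing_notification(
    ["10.0.0.15", "10.0.0.16"],            # placeholder scanner addresses
    "https://app.example.com (APP-123)",   # placeholder URL / system name
    "June 20-24, business hours",
    ["appsec-team@example.com"],
))
```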

Conducting the Assessment

  • If you are using automated scanning tools, beware of false positives and negatives
    • Pattern recognition has limitations
    • Combine various testing methods
      • Automated scanning
      • Code review
      • Manual testing
    • Learn what your tools do and do not do well
    • Validate every finding (a small triage sketch follows this list)
    • Keep detailed notes
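
One lightweight way to enforce “validate every finding” is to triage each raw tool result into an explicit status with notes, so nothing reaches a report unreviewed.  The record layout below is an illustrative assumption, not the presenter’s format.

```python
from dataclasses import dataclass, field

VALID_STATUSES = {"unreviewed", "confirmed", "false positive", "needs retest"}


@dataclass
class RawToolResult:
    tool: str                # which scanner, code analyzer or proxy produced the result
    location: str            # URL, file or parameter where it was observed
    description: str         # the tool's own description of the issue
    status: str = "unreviewed"
    notes: list = field(default_factory=list)   # detailed notes from manual validation

    def triage(self, status, note):
        """Record the manual validation outcome and the evidence behind it."""
        if status not in VALID_STATUSES:
            raise ValueError(f"unknown status: {status}")
        self.status = status
        self.notes.append(note)


result = RawToolResult("scanner-X", "https://app.example.com/search", "Possible SQL injection")
result.triage("confirmed", "Reproduced manually with a single-quote payload; error message exposes SQL syntax.")
print(result.status, result.notes)
```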

Establish Standards

  • Assessments performed by two different people, or by the same person over time, may result in the same finding being presented very differently
    • This may result in inconsistent descriptions of the vulnerability or different recommendations for remediation
    • Without standard findings you may also find it difficult to produce meaningful metrics about discovered vulnerabilities

Standard Findings

  • Opinions about how to standardize software vulnerabilities are like noses: everyone has one.
  • At Boeing we have categorized vulnerabilities into approximately 70 standard findings, such as the following (a short SQL injection example appears after this list):
    • SQL Injection
    • Path Traversal
    • Session Fixation
    • Excessive Authentication Attempts
    • Forced Browsing
    • System Information Leakage
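
To make the first finding on that list concrete, here is a minimal example of the vulnerability and its standard remediation.  The in-memory database, table and values are placeholders; parameter binding is shown with Python’s sqlite3 module, but the underlying principle (never concatenate untrusted input into a query) applies in any language.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "x' OR '1'='1"  # attacker-controlled value

# Vulnerable: untrusted input is concatenated directly into the SQL statement,
# so the injected OR clause returns every row.
vulnerable = conn.execute(
    "SELECT name, role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Remediated: a parameterized query treats the input strictly as data.
safe = conn.execute(
    "SELECT name, role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice', 'admin')] -- the injection succeeded
print(safe)        # [] -- no user is literally named "x' OR '1'='1"
```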

Data Elements for Standard Findings

  • Each finding is made up of the following data elements (a minimal data-structure sketch follows this list):
    • Name
    • Control Classification
    • Severity (Likelihood + Impact)
    • Company Policy References
    • Industry References
    • Summary Description (one sentence)
    • Impact Statement (one sentence)
    • Detailed Description (basic introduction to vulnerability + detailed description of how it manifests within their application)
    • Recommendation (standard remediation recommendations tied into SDLC practices)
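
Captured as a data structure, those elements might look like the sketch below.  The field names simply mirror the list, the severity calculation follows the “Likelihood + Impact” note, and none of this is a Boeing schema.

```python
from dataclasses import dataclass


@dataclass
class StandardFinding:
    name: str
    control_classification: str
    likelihood: int                    # e.g. 1 (low) to 5 (high)
    impact: int                        # e.g. 1 (low) to 5 (high)
    company_policy_references: list
    industry_references: list          # e.g. CWE or OWASP references
    summary_description: str           # one sentence
    impact_statement: str              # one sentence
    detailed_description: str          # intro to the vulnerability plus how it manifests in this application
    recommendation: str                # standard remediation advice tied into SDLC practices

    @property
    def severity(self):
        """Severity is defined on the slide as likelihood plus impact."""
        return self.likelihood + self.impact
```

These records can live in whatever tracking system the team already uses; the value is in forcing every finding to carry the same fields.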

Control Classifications

  • We group individual vulnerabilities into control classifications.  This helps us determine how effective we are at implementing control types (a small metrics sketch follows the list below).
  • Our classifications:
    • Input and output controls
    • Authentication and password management
    • Authorization and access management
    • Sensitive information storage or transmission
    • System configuration and management
    • General coding errors
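
Because every standard finding carries a control classification, producing the kind of metrics mentioned under “Establish Standards” becomes a simple aggregation, sketched here with a handful of invented findings.

```python
from collections import Counter

# (finding name, control classification) pairs -- invented sample data.
findings = [
    ("SQL Injection", "Input and output controls"),
    ("Path Traversal", "Input and output controls"),
    ("Session Fixation", "Authentication and password management"),
    ("System Information Leakage", "System configuration and management"),
]

by_classification = Counter(classification for _, classification in findings)
for classification, count in by_classification.most_common():
    print(f"{classification}: {count}")
```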

Reporting Findings

  • Developing a standardized reporting template will allow you to deliver a consistent, branded message
    • Cover Page
      • Provides information necessary to identify the assessment, what was assessed and who the key people were
    • Executive Summary
    • Findings Summary
    • Detailed Findings
    • Conclusion
      • Summary of assessment results, discussion of next steps and links to additional resources
    • Appendixes
      • Information on how severity ratings are determined, description of control classifications
    • Attachments
      • Typically raw scan files

Managing Corrective Actions

  • Once a report is issued you need a closed loop process to ensure serious issues are addressed.  Considerations include:
    • Tracking Findings:
      • Critical and high findings should be tracked to resolution
      • Medium findings are less straightforward
      • Low or informational findings may not be value-added
    • Customer Responses to Findings:
      • Implement a technical fix to address the finding
      • Implement a process fix to address the finding
      • The business formally accepts the risk of not remediating

When to Re-Evaluate an Application

  • Depending on the number of applications you support and the frequency with which they change, you may need to establish re-evaluation guidelines. Some criteria to consider include:
    • Fixes to previously accepted risk
    • User population changes
    • Data sensitivity changes
    • Business’s dependency on the application has increased
    • Authentication mechanism has changed
    • Authorization mechanism has changed

Application Assessment Process Flow

  • Create a document that shows the process flow for both requested and targeted assessments (ask for the document from the presenter?)
  • Formal closure process

Conclusion

  • Building an assessment team from the ground up takes:
    • Executive Support
    • A lot of planning
    • Staffing
    • The right tools
    • Training
    • Standards
    • Supporting Processes