Web Admin Blog: Real Web Admins. Real World Experience.

24Mar/09

PCI Compliance – Convert Drudgery Into a Powerful Security Framework

For my last session of the day at TRISC 2009, I decided to attend Joseph Krull's presentation on PCI Compliance.  Joe works as a consultant for Accenture and has performed 60+ PCI engagements for various companies.  If your organization does any processing of credit card information, my notes from that session below should be useful:

  • As many as 65% of merchants are still not PCI compliant
  • Fines can be just the beginning; service charges and market share price dilution for non-compliant merchants have already had substantial repercussions in the US and may soon reach other regions
  • Many retailers still don’t have a clear view of compliance, and cannot effectively identify gaps
  • The first steps to PCI compliance are a thorough internal assessment and gap analysis – many merchants skip these steps and launch multiple costly projects
  • PCI provides a regulatory and compliance framework to help prevent credit card fraud for organizations that process card payments
  • The framework is comprehensive and effective but adherence to the specific standards is often challenging – primarily due to the complexities involved in both program design and implementation
  • Any merchant that accepts or processes credit cards must maintain compliance with the PCI DSS.  Specific obligations vary based on transaction volumes.
  • The focus right now is on Level 4 merchants.
  • TJX was subjected to 20 years of mandatory computer system audits after its massive breach

Challenges

  • Providing adequate and clear program management across the entire spectrum of PCI remediation activities (60-70% hand this off to the “compliance guy” and typically fail; it should go to a senior security person)
  • Accurately scoping requirements throughout the organization, including remote sites and international operations
  • Evaluating and then implementing a wide variety of complex technologies – including encryption
  • Redesigning or replacing internal applications and payment systems to adequately protect cardholder data
  • Developing, implementing and enforcing new or revised policies and procedures across the entire organization
  • Differing opinions with auditors regarding PCI compliance requirements, especially related to the concept of “Compensating Controls”
  • Verifying PCI compliance for 3rd party partners that process data on behalf of the merchant

Differences from PCI DSS 1.1 to 1.2

  • Active monitoring plans for all 3rd party PCI Service Providers (Requirement 12.8)
  • Visits to offsite data storage locations at least annually
  • Mandatory phase out of weak encryption for wireless networks
  • Additional requirements for the use of “Compensating Controls” for specific PCI security requirements
  • Assessor testing procedures changed from “Observe the use of…” to “Verify the use of…”
  • Quality assurance program for PCI assessors
  • The quality assurance process can restrict or bar assessors from performing PCI work due to poor-quality assessments
  • Assessors must now go beyond cursory observation of security controls and provide statistical samples
  • Assessors now going much deeper to include verifying individual system settings, requesting and analyzing configuration files, studying data flows, …

The Cost of Compliance and Non-Compliance

  • According to a comprehensive Forrester Research report, companies spend between 2% and 10% of their IT budgets on PCI compliance
  • Credit card companies are levying fines on non-compliant merchants (a rough cost sketch follows this list):
    • Up to $25,000 for each month of non-compliance for Level 1 merchants ($5,000 for Level 4s)
    • $10,000-$100,000 per month for prohibited storage of magnetic stripe data
    • Up to $500,000 per incident if a confirmed compromise occurs
    • Continued non-compliance may result in revocation of credit card processing privileges
  • Banks and acquirers may increase processing fees for non-compliant merchants.  In 2008, one retailer estimated an annual increase in operational costs of $18 million due to this increase in processing fees on VISA card transactions alone.
  • Banks and acquirers can often pass on damages they incur to merchants
  • Repeat or additional PCI assessments & internal audits
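
To put those fine schedules in perspective, here is a back-of-the-envelope sketch in Python using the amounts listed above. The scenario inputs (merchant level, months of non-compliance, stripe-data storage, breach) are hypothetical, and actual fines are set by the card brands case by case.

```python
# Rough worst-case fine estimate built from the ranges cited above.
# All scenario inputs are hypothetical; real fines are discretionary.

MONTHLY_FINE = {"L1": 25_000, "L4": 5_000}  # up to, per month of non-compliance
STRIPE_FINE_MAX = 100_000                   # per month, prohibited mag-stripe storage
BREACH_FINE_MAX = 500_000                   # per confirmed compromise

def worst_case(level, months, stores_stripe_data, breached):
    total = MONTHLY_FINE[level] * months
    if stores_stripe_data:
        total += STRIPE_FINE_MAX * months
    if breached:
        total += BREACH_FINE_MAX
    return total

# A Level 1 merchant, 12 months out of compliance, storing stripe data, one breach:
print(f"${worst_case('L1', 12, True, True):,}")  # $2,000,000
```

Even before increased processing fees or passed-through damages, the fines alone can reach seven figures within a year.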

Corporate Compliance Framework

  • Although PCI provides compliance requirements in most areas, it covers only a subset of a full security program
  • ISO 27002:2005 was the framework used to map out PCI coverage
  • ISO gives good general requirements but little explanation of how to implement them; PCI supplies the specific best practices
  • For example, ISO 5.1.1 maps to PCI 12.1, 12.4, and 12.6.2 (a small crosswalk sketch follows this list)
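
Since the point is that PCI and ISO 27002 cross-reference each other, here is a tiny sketch of how such a crosswalk might be maintained. Only the 5.1.1 entry comes from the presentation; the structure and function are mine.

```python
# Hypothetical ISO 27002:2005 -> PCI DSS requirement crosswalk.
# Only the 5.1.1 mapping is from the talk; add your own entries.
ISO_TO_PCI = {
    "5.1.1": ["12.1", "12.4", "12.6.2"],  # information security policy
}

def pci_requirements_for(iso_control):
    """Return the PCI DSS requirements that cover a given ISO control."""
    return ISO_TO_PCI.get(iso_control, [])

print(pci_requirements_for("5.1.1"))  # ['12.1', '12.4', '12.6.2']
```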

How to “Sell” PCI Compliance to Senior Management

  • Gloom and Doom
    • Fines and sanctions will sink us
    • Probability of success 40-50%
  • The PCI Umbrella
    • We need these 15 projects and 10 new security products to be PCI compliant
    • Probability of success 40-50%
    • Who has done the gap assessment?
  • The Long Term Approach
    • If we achieve PCI compliance, we will also be well on our way to meeting other requirements
  • PCI compliance is not a project or a technology-based solution – it is being able to demonstrate that an organization has the means in place to protect sensitive information
  • Use it as a building block when selling security to senior management
24Sep/08

Mastering PCI Section 6.6 – OWASP AppSec NYC 2008

This presentation was given by Jacob West of Fortify Software's Security Research Group and Taylor McKinsley of Fortify's Product Marketing team.  I'd like to note that Fortify develops a source code analysis tool, so this presentation may have a bias towards source code analysis tools.

56% of organizations fail PCI section 6.  Poorly coded web applications leading to SQL injection vulnerabilities are one of the top five reasons for a PCI audit failure.  Section 6 is becoming a bigger problem: it was the #9 reason for failure in 2006 and #2 in 2007.
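
Since SQL injection keeps coming up as the poster child for Section 6 failures, here is a minimal illustration of the bug and its fix, using Python's standard sqlite3 module; the table and values are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")

user_input = "x' OR '1'='1"  # attacker-controlled value

# Vulnerable: user input is concatenated straight into the SQL text,
# so the OR '1'='1' clause would match (and leak) every row.
query = "SELECT card FROM users WHERE name = '" + user_input + "'"

# Safer: a parameterized query keeps the input as data, never as SQL.
conn.execute("SELECT card FROM users WHERE name = ?", (user_input,))
```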

PCI Section 6 provides guidelines to "Develop and maintain secure systems and applications".  Section 6.6 reads "Ensure that all web-facing applications are protected against known attacks by either of the following methods: Having all custom application code reviewed for common vulnerabilities by an organization that specializes in application security" or by using a web application firewall.  Further clarifications say that automated tools are acceptable, web application penetration testing is allowed, and vulnerability assessments can be performed by an internal team.

Comparing Apples, Oranges, and Watermelons

  • Setup: Source code analysis (+2) is good because it works on existing hardware, but it must live where your source code lives.  Penetration testing (+3) is good because you only need one to assess everything and it works on existing hardware, but it needs to talk to a running program.  Application firewall (+1) is good because it lives on the network, but you must model program behavior.
  • Optimization: Source code analysis (+2) is good because you can specify generic antipatterns in code, but you must understand vulnerability in detail.  Penetration testing (+2) is good because tests are attacks, but you must successfully attack your application.  Application firewalls (+1) are good because they share configuration across programs, but must differentiate good from bad.
  • Performance: Source code analysis (+3) is good because it simulates all application states and is non-production, but scales with build time and not the number of tests.  Penetration testing (+2) is good because you get incremental results and is non-production, but you must exercise each application state.  Application firewall (+1) is good because it's a stand-alone device and scales with $$$, but impacts production performance and scales with $$$.
  • Human resources: Source code analysis (+1) is good because it enables security in development and reports a root cause, but makes auditors better and does not replace them.  Penetration testing (+2) is good because it is highly automatable, but reports symptoms and not the root cause.  Application firewall (+2) is good because once it's configured it functions largely unattended, but requires extensive and ongoing configuration.
  • Security know-how: Source code analysis (+3) is good because it gives code-level details to an auditor, but you must understand security-relevant behavior of APIs.  Penetration testing (+1) is good because it automates hacks, but a hacker is required to measure success and optimize.  Application firewall (+2) is good because it identifies common attacks out of the box and is a community effort, but a hacker is required to measure success and customize.
  • Development expertise: Source code analysis (+1) is good because it focuses attention on relevant code, but you must understand code-level program behavior.  Penetration testing (+2) is good because basic attacks ignore internals, but advanced attacks require internal knowledge.  Application firewalls (+2) are good because they live on the network, but you must understand the program to tell good from bad.
  • False positives: Source code analysis (+1) is good because it gives auditors details to verify issues, but it reports impossible application states.  Penetration testing (+2) is good because results come with reproduction steps, but it is difficult to build an oracle for some bugs.  Application firewalls (+1) are good because they report attacks instead of vulnerabilities, but there is an evolving definition of valid behavior.
  • False negatives: Source code analysis (+3) is good because it simulates all program states and models the full program, but it must be told what to look for.  Penetration testing (+1) is good because it is good at finding what hackers find, but it is difficult to build an oracle for some bugs and coverage gets missed.  Application firewalls (+1) are good because they work from attacks instead of vulnerabilities, but there is an evolving attack landscape.
  • Technology support: Source code analysis (+2) is good because parsing is separable from the analysis and is interface-neutral, but it must adapt to new program paradigms.  Penetration testing (+2) is good because it is independent from program paradigms, but is tied to protocols and is limited to network interfaces.  Application firewalls (+2) are good because they are independent from program paradigms, but are tied to protocols and are limited to network interfaces.

Working Towards a Solution

  • Assessment: Proving the problem or meeting the regulatory requirement.  Recurring cost that does not "fix" anything
  • Remediation: Fixing security issues found during assessments.  Lowering business risk at a single point in time.
  • Prevention: Get security right the first time.  Minimizing business risk systematically.

Do your own comparison and fill out the scorecard yourself (the presenters' ratings are noted in parentheses above).
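
If you want the bookkeeping done for you, here is a quick tally of the presenters' ratings as transcribed above; totals are per tool across all nine criteria.

```python
# Presenters' ratings from the nine criteria above.
# Columns: source code analysis, penetration testing, application firewall.
ratings = {
    "setup":                 (2, 3, 1),
    "optimization":          (2, 2, 1),
    "performance":           (3, 2, 1),
    "human resources":       (1, 2, 2),
    "security know-how":     (3, 1, 2),
    "development expertise": (1, 2, 2),
    "false positives":       (1, 2, 1),
    "false negatives":       (3, 1, 1),
    "technology support":    (2, 2, 2),
}

tools = ("source code analysis", "penetration testing", "application firewall")
for tool, total in zip(tools, (sum(col) for col in zip(*ratings.values()))):
    print(f"{tool}: {total}")
# source code analysis: 18, penetration testing: 17, application firewall: 13
```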

Taylor interviewed three companies about their experiences deploying each solution (source code analysis, penetration testing, and an application firewall) and had them rate each against the nine criteria, both before buying and after deployment.  I'm not going to list each company's results in the blog, but it was just a basic table with each criterion and a number rating both before purchase and after deployment.  To sum it up: source code analysis rated 14 before purchase and 17 after deployment; penetration testing rated 21 before and 21 after; application firewalls rated 21 before and 16 after.  The first organization had a large number of developers, which factored into its decision to purchase a source code analysis tool.  The second had far fewer developers, was more of an IT shop, and chose the penetration testing tool.  The last was a smaller shop overall (though still fairly large) and went with the WAF because it wanted something it could just put in place and manage.

Analysis: All three solutions required more effort than expected.  All three solutions produce reasonably accurate results.  Varying levels of expertise needed.

How do you demonstrate that your application is protected against known attacks?

  • Verification that the application was analyzed
  • A report showing no critical security issues identified
  • Document showing how the tool fits into your architecture

How do you show that the user is appropriately trained?

  • Document explaining prior experience or an informal interview

How do you show that you have configured the tool appropriately?

  • Document explaining how the tool was configured and what new rules had to be added.

Summary: PCI section 6 is evolving to become increasingly precise.  Compare technologies in your environment along nine criteria.  Demonstrating compliance is an art, not a science.

18Aug/08

An Evaluation of Rapid7 NeXpose

I've been focusing a lot of my time lately on our PCI initiatives.  One sub-topic that I've spent a particularly large amount of time on is Requirement 11.2, which says that you need to have internal and external network vulnerability scans performed by a scan vendor qualified by PCI.  We already employ one such tool, but I've been evaluating several other vulnerability scanning tools to see how our current tool compares.  I'll post my evaluations of each of these tools in time, but for now I'll start with my evaluation of Rapid7 NeXpose.

22May/08

Log Management for Dummies (aka Splunk)

Logs are one resource that I think most systems administrators severely underutilize. Most of us have taken the first step by actually logging the data, but we neglect organizing it into any sort of manageable form. You'll probably argue that any hardcore *nix admin can take the raw logs and, using grep, cut, awk, and a handful of other *nix power tools, turn them into consumable information, but that will only get you so far.
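
For the curious, the sort of thing those power-tool pipelines accomplish is easy to sketch; here is a Python equivalent that counts errors per host. The log path and the space-delimited format (host in the first field, level in the third) are made up for illustration.

```python
# Roughly what "grep ERROR | awk | sort | uniq -c" gives you:
# a count of error lines per host. Path and format are hypothetical.
from collections import Counter

errors_by_host = Counter()
with open("/var/log/app/combined.log") as log:
    for line in log:
        fields = line.split()
        if len(fields) > 2 and fields[2] == "ERROR":
            errors_by_host[fields[0]] += 1  # first field assumed to be the host

for host, count in errors_by_host.most_common(10):
    print(f"{count:6d}  {host}")
```

Useful, but you rewrite it for every log format, and it does nothing for access control, dashboards, or self-service.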

Several months ago we evaluated a bunch of log management solutions with several goals in mind. We wanted a solution that was agile enough to be able to take in a wide variety of log formats as well as configuration files. It needed to shield sensitive information (passwords, credit card information, etc.) from unauthorized users. It needed to provide us with a customizable interface where we could report on all of the log data it gathered. Lastly, it needed to provide our customers (developers) with the ability to self-service their own log files. After evaluating most of the major players in the log management arena, we found our ideal solution in a product called Splunk.

The first thing I noticed when evaluating Splunk was that they're not like everyone else. They're not trying to sell you some sort of logging appliance and they offer their software free for customers with 100 MB/day or less worth of logging. Getting Splunk installed was a breeze. You can have it up and running in minutes. It truly is Log Management for Dummies in that respect, but under the hood there is a highly configurable and customizable tool with an API that you could use to write your own applications to examine log files.
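
As a taste of that API, here is a minimal sketch of running a search over HTTP from Python. The hostname, credentials, and search string are placeholders, and I'm using the REST-style export endpoint that current Splunk versions expose on the management port (8089); check the docs for the version you're running, since the API details have changed over time.

```python
# Minimal sketch: run a Splunk search via its management-port REST API.
# Host, credentials, and the search string are placeholders.
import requests

resp = requests.post(
    "https://splunk.example.com:8089/services/search/jobs/export",
    auth=("admin", "changeme"),
    data={
        "search": "search sourcetype=access_combined status>=500 | head 20",
        "output_mode": "json",
    },
    verify=False,  # Splunk ships a self-signed cert; use a real one in production
)
for line in resp.text.splitlines():  # one JSON object per result
    print(line)
```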

At this point I've mucked around with Splunk for a few months and our configuration is pretty intense. I've added in custom indexes to make my custom dashboards load faster. I've set Splunk up to create queryable metadata fields based on information in the logs. I've added filters for custom timestamps and auditing so we can tell if a log file has been modified. I've even set up a "deployment server" to distribute Splunk's configuration bundles to my various types of servers. This brings me to the one drawback of Splunk: Upgrading. Rumor has it that they are working on making it easier to upgrade from one version to the next, but for the time being it involves logging in to each server, stopping Splunk, upgrading the files, and restarting Splunk again. If you only had to upgrade every once in a while it would be fine, but they maintain a very active development team so I find myself constantly wanting to upgrade to get the latest bug fixes and features.

Other than that, Splunk does exactly what I tell it to do. It grabs all of our logs and presents them in a single intuitive interface. Think of it as a search engine for log and configuration files. Then, once I have the log data in front of me, I can create custom reports based on that data. If I want to, I can even alert based on information Splunk finds in my logs (send an e-mail to a developer every time their application throws an error message). Oh, did I mention that Splunk has a PCI Dashboard that you can install for free? Ask those other guys how much they charge for their PCI solution.

The next time you have some free time be sure to download Splunk and install it on one of your development servers. You won't be disappointed.

21May/08

PCI Security Scanning Services

Recently I've been doing a lot of work looking at various vendors for the vulnerability scanning portion of PCI compliance (PCI Requirement 6.5). I've been talking to many different companies; some sell tools and some sell services. We're looking at vendors to either supplement or replace our current tool set. The only real specific requirement in regards to PCI is that you need to follow standard guidelines such as the OWASP Top 10. Seems like a pretty simple task, right? Not really. One vendor I'm talking to seems to be going out of their way to not give us an evaluation before we purchase. Granted, this particular vendor prides itself on having manual checkpoints throughout its scanning process, so an evaluation carries additional setup cost for them, but still. How can they expect a customer to drop tens of thousands of dollars on their product without evaluating what it's capable of and comparing it to other vendors?

Another vendor bombards us almost daily with calls asking "What can we do to get you to buy today?" I've explained several times that I want to do a comprehensive evaluation and compare their product to several others. Honestly, all this pushiness does is make me wonder what is so wrong with their product that they have to push this hard to sell it. Every time their sales guy calls me, I cringe.

It's not like our current solution is bad or anything. It finds what it's supposed to find. Heck, it's found some stuff that these other guys never did. National Instruments has invested a good chunk of change in these tools and I'm pretty happy using them. I was reading Dark Reading the other day when I came across a blog post from John H. Sawyer of the IT Security Team at the University of Florida. He said:

I’m trying not to be cynical, but it’s getting to the point that choosing a reputable PCI scanning service for your Website is like politics, where you’re left choosing the lesser of two evils. If you’ve got experience, good or bad, with vendors such as McAfee, Qualys, or Rapid7, I’d be interested in hearing about them. I’d love to find a happy customer, and not one in the “ignorance is bliss” sense of the term.

I truly feel his pain. The vulnerability scanning space is full of vendors who promise the moon, but then fail to deliver on so many levels. Even if I buy the hype and purchase one of these miraculous tools or services, can I really consider us to be secure? Maybe I should just show all of these vendors the door and opt for Scanless PCI instead as it'd probably help me sleep just as well at night.