Web Admin Blog: Real Web Admins. Real World Experience.

22Jul/10

Static Application Vulnerability Testing: Binary Scanning vs Source Code Scanning

I had a meeting yesterday with a vendor who sells a SaaS solution for binary application vulnerability testing. They tell a very interesting story of a world where dynamic testing ("black box") takes place alongside static testing ("white box") to give you a full picture of your application security posture. They even combine the results with some e-learning components so that developers can research vulnerabilities in the same place they go to find them. In concept, this sounds fantastic, but I quickly turned skeptical, and as I dug deeper into the details, I didn't much like what I found.

I wanted to make sure I fully understood what was going on under the hood, so I started asking questions about how the static testing works. They've got a nice-looking portal where you name your application, give it a version, assign it to a group of developers, and point it to your compiled code (WAR, EAR, JAR, etc.). Once you upload your binaries, their system essentially runs a disassembler to turn them into assembly code, and it's at that level that they look for vulnerabilities. They said this process takes about three days initially and perhaps two days on subsequent scans, because they can reuse some data about your application. Once the scan is complete, they provide a report detailing your vulnerabilities and how to fix them.

The thing that immediately struck me as worth noting here was the 2-3 day turnaround. This means that our developers would need to wait a fairly substantial amount of time before getting any feedback on the vulnerability status of their code. In a world full of Agile development, 2-3 days is a lifetime. Compare that to static source code testing where you get actionable results at compile time. The edge here definitely goes to source code testing as I believe most people would prefer the near-instant gratification.

The next thing worth noting is that they are disassembling binary files in order to find vulnerabilities. That raises one major issue: how can you determine, with any accuracy, the line number of a particular vulnerability written in, say, Java from assembly code generated by disassembling the binaries? By default, it's simply not possible. This vendor claimed they can by adding some debug symbols at compile time, but even then I'd contend that you're not going to get much. I'm guessing they have heuristics that can tell which function generated a given stretch of assembly, but I'm extremely skeptical that they can do anything with variable names, custom functions, and the like. Some source code scanners, on the other hand, not only tell you which line of code is affected but also give you an entire list of the parameters subsequently tainted by that vulnerability. The edge here definitely goes to source code testing.

The main benefit I can see in binary testing versus source code testing is that we can test code we didn't write. APIs, third-party applications, open source libraries, and the like all become things we have visibility into. The only problem is that while we can now see the vulnerabilities in this software, they are unfortunately all things we can't directly influence, unless we want to send our developers off to work on somebody else's software. I'd argue that scanning that type of code for vulnerabilities is its author's responsibility, not ours. Granted, it would be nice to have validation that there aren't vulnerabilities there that we're exposing ourselves to by adopting it, but in all honesty, are we really going to take the time to scan somebody else's work? Probably not. The edge here goes to binary testing, with the caveat that it's an edge in something I frankly don't care as much about.

This isn't a complete list of pros and cons by any means; it's just me putting in writing some concerns I had about the technology while talking to this particular vendor. In my opinion, the benefits of source code testing far outweigh anything we could get from testing compiled binaries. What do you think about the benefits of one versus the other? I'd certainly love for someone to change my mind and show me where the real value lies in binary testing.

23Feb/10

An XSS Vulnerability in Almost Every PHP Form I’ve Ever Written

I've spent a lot of time over the past few months writing an enterprise application in PHP. Despite what some people may say, I believe that PHP is as secure or insecure as the developer writing the code. Anyway, I'm at the point in my development lifecycle where I decided the application was ready to have a vulnerability scanner run against it. What I found was interesting, and I think it's worth sharing with you all.

Let me preface this by saying that I'm the guy who gives the training to our developers on the OWASP Top 10, writing secure code, etc. I'd like to think that I have a pretty good handle on programming best practices, input validation, and HTML encoding. I built all kinds of validation into this application and thought that the vulnerability scan would come up empty. For the most part I was right, but there was one vulnerability, one flaw in particular, that found its way into every form in my application. In fact, I realized that I've made this exact same mistake in almost every PHP form I've ever written. Talk about a humbling experience.

So here's what happened.  I created a simple page with a form where the results of that form are submitted back to the page itself for processing.  Let's assume it looks something like this:

<html>
 <body>
  <?php
  if (isset($_REQUEST['submitted']) && $_REQUEST['submitted'] == '1') {
    echo "Form submitted!";
  }
  ?>
  <form action="<?php echo $_SERVER['PHP_SELF']; ?>">
   <input type="hidden" name="submitted" value="1" />
   <input type="submit" value="Submit!" />
  </form>
 </body>
</html>

It looks fairly straightforward, right? The problem has to do with that $_SERVER['PHP_SELF'] variable. The intent is that PHP will echo the path and name of the current page so that the form submits back to the same page. The problem is that $_SERVER['PHP_SELF'] includes the PATH_INFO portion of the URL, which can be manipulated by the user. Let's say that as the user I change the URL from http://www.webadminblog.com/example.php to http://www.webadminblog.com/example.php/"><script>alert('xss');</script>. This breaks out of the action attribute, closes the form tag, and injects a JavaScript alert into the page. This is the very definition of cross-site scripting. I can't believe that, as long as I've been writing PHP and studying application security, I never realized this. Fortunately, there are a couple of different ways to fix it. First, you could use the htmlentities() or htmlspecialchars() functions to sanitize the user input, like this:

htmlentities($_SERVER['PHP_SELF']);

htmlspecialchars($_SERVER['PHP_SELF']);
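
Applied in context, the form tag becomes something like this (just a sketch; htmlspecialchars() encodes double quotes and angle brackets by default, which is exactly what blocks the breakout from the action attribute):

<form action="<?php echo htmlspecialchars($_SERVER['PHP_SELF']); ?>">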

This fix would still allow the user to manipulate the URL, and thus what is displayed on the page, but it would render the JavaScript harmless because the special characters are encoded before they reach the browser. The second way to fix this is to use the SCRIPT_NAME variable instead, like this:

$_SERVER['SCRIPT_NAME'];

This fix just echoes the path and filename of the current script, without any user-supplied PATH_INFO. Yes, there are other ways to fix this. Yes, my example exploit above doesn't do anything other than display a JavaScript alert. I just wanted to draw attention to this issue because if it's found its way into my code, then perhaps it's found its way into yours as well. A hardened version of the full page is sketched below. Happy coding!
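
For completeness, here is a minimal sketch of the original page with the fix applied. Wrapping the value in htmlspecialchars() is arguably redundant once you've switched to SCRIPT_NAME, but it's cheap insurance for anything echoed into markup:

<html>
 <body>
  <?php
  // Handle the self-submitted form exactly as before.
  if (isset($_REQUEST['submitted']) && $_REQUEST['submitted'] == '1') {
    echo "Form submitted!";
  }
  ?>
  <!-- SCRIPT_NAME excludes user-controlled PATH_INFO, and htmlspecialchars()
       encodes any stray quotes or angle brackets just in case. -->
  <form action="<?php echo htmlspecialchars($_SERVER['SCRIPT_NAME']); ?>">
   <input type="hidden" name="submitted" value="1" />
   <input type="submit" value="Submit!" />
  </form>
 </body>
</html>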

13Nov/09

Application Security Metrics from the Organization on Down to the Vulnerabilities

This presentation was by Chris Wysopal, the CTO of Veracode.  My notes are below:

"To measure is to know." - James Clerk Maxwell

"Measurement motivates." - John Kenneth Galbraith

Metrics Do Matter

  1. Metrics quantify the otherwise unquantifiable
  2. Metrics can show trends and trends matter more than measurements do
  3. Metrics can show if we are doing a good job or bad job
  4. Metrics can show if you have no idea where you are
  5. Metrics establish where "You are here" really is
  6. Metrics build bridges to managers
  7. Metrics allow cross sectional comparisons
  8. Metrics set targets
  9. Metrics let you benchmark yourself against the opposition
  10. Metrics create curiosity

Metrics Don't Matter (Mike Rothman)

  • It is too easy to count things for no purpose other than to count them
  • You cannot measure security so stop
  • The following is all that matters, and you can't map security metrics to it:
    • Maintenance of availability
    • Preservation of wealth
    • Limitation on corporate liability
    • Compliance
    • Shepherding the corporate brand
  • Cost of measurement not worth the benefit

Bad metrics are worse than no metrics

Security Metrics Can Drive Executive Decision Making

  • How secure am I?
  • Am I better off than this time last year?
  • Am I spending the right amount of money?
  • How do I compare to my peers?
  • What risk transfer options do I have?

Goals of Application Security Metrics

  • Provide quantifiable information to support enterprise risk management and risk-based decision making
  • Articulate progress towards goals and objectives
  • Provide a repeatable, quantifiable way to assess, compare, and track improvements in assurance
  • Focus activities on risk mitigation in order of priority and exploitability
  • Facilitate adoption and improvement of secure software design and development processes
  • Provide an objective means of comparing and benchmarking projects, divisions, organizations, and vendor products

18Aug/08

An Evaluation of Rapid7 NeXpose

I've been focusing a lot of my time lately on our PCI initiatives. One sub-topic I've spent a particularly large amount of time on is Requirement 11.2, which requires quarterly internal and external network vulnerability scans, with the external scans performed by a PCI-approved scanning vendor. We already employ one such tool, but I've been evaluating several other vulnerability scanning tools to see how our current tool compares. I'll post my evaluations of each of these tools in time, but for now I'll start with my evaluation of Rapid7 NeXpose.

21May/08

PCI Security Scanning Services

Recently I've been doing a lot of work looking at various vendors for the vulnerability scanning portion of PCI compliance (PCI Requirement 6.5). I've been talking to many different companies. Some sell tools and some sell services. We're looking at vendors to either supplement or replace our current tool set. The only really specific PCI requirement is that you need to follow standard guidelines such as the OWASP Top 10. Seems like a pretty simple task, right? Not really. One vendor I'm talking to seems to be going out of their way to not give us an evaluation before we purchase. Granted, this particular vendor prides itself on having manual checkpoints throughout its scanning process, so there is additional setup cost to them for an evaluation, but still. How can they expect a customer to drop tens of thousands of dollars on their product without evaluating what it's capable of and comparing it to other vendors? Another vendor bombards us almost daily with calls asking "What can we do to get you to buy today?" I've explained several times that I want to do a comprehensive evaluation and compare their product to several others. Honestly, all this pushiness does is make me wonder what is so wrong with their product that they have to push this hard to sell it. Every time their sales guy calls me, I cringe.

It's not like our current solution is bad or anything. It finds what it's supposed to find. Heck, it's found some stuff that these other guys never did. National Instruments has invested a good chunk of change in these tools, and I'm pretty happy using them. I was reading Dark Reading the other day when I came across a blog post from John H. Sawyer of the IT Security Team at the University of Florida. He said:

I’m trying not to be cynical, but it’s getting to the point that choosing a reputable PCI scanning service for your Website is like politics, where you’re left choosing the lesser of two evils. If you’ve got experience, good or bad, with vendors such as McAfee, Qualys, or Rapid7, I’d be interested in hearing about them. I’d love to find a happy customer, and not one in the “ignorance is bliss” sense of the term.

I truly feel his pain. The vulnerability scanning space is full of vendors who promise the moon but then fail to deliver on so many levels. Even if I buy the hype and purchase one of these miraculous tools or services, can I really consider us secure? Maybe I should just show all of these vendors the door and opt for Scanless PCI instead, as it'd probably help me sleep just as well at night.