Web Admin Blog: Real Web Admins. Real World Experience.

15 Sep 2013

Enterprise Risk Management for the Masses

A couple of years ago I decided, with support from my management, that Enterprise Risk Management would become a focal point for my Information Security Program.  I was convinced that framing vulnerabilities as risks was essential to giving management visibility into issues they didn't know existed and to giving our staff the comfort of knowing that the issues keeping them up at night were now being considered for mitigation by management.  I couldn't have been more right.

I began by collecting the risks submitted by each team in Excel spreadsheets and Word documents.  They had all of the pertinent information like a subject, owner, risk assessment, etc., but very quickly I became a victim of my own success.  Before I knew it, I had more risks than I could efficiently track in this format.  First off, it was extremely cumbersome to manually maintain the risk index in Excel.  While Excel is good at calculating formulas, it sucks at maintaining links to external documents.  It can be done, but it requires quite a bit of manual effort.  Second, maintaining your risk details in Word documents is something you should reserve only for your worst enemies.  They are difficult to update, difficult to track changes in, difficult to search and, well, just plain difficult.  I thought to myself that there had to be a better way, yet this is what the unfortunate majority are stuck with today.

After some research, it turned out that many years back my company had another security professional who was interested in Enterprise Risk Management.  Apparently, they had come to conclusions similar to mine about the Word documents and Excel spreadsheets, but they were able to get some internal development time to create a Lotus Notes-based risk management database.  It was everything that I needed, or so I thought, so I started manually entering all of my new risks into this old risk management database.  At first, things seemed to be working well.  I had different views into my data that showed me far more information than I could see before.  Management of our various teams could also see their risks without involving me.  It was much better, but soon I began to realize the limitations of this approach.  The database itself was rigid.  Changes required me to go through another internal team for resources and often took a long time to make.  Also, any updates didn't modify existing risks, only the ones submitted after that point.  Once, I found myself opening and re-saving hundreds of risks just because I decided to change my risk calculation formula slightly.  I began looking again for another way.

Soon, my new round of research brought me to a special set of tools called Governance, Risk, and Compliance, or GRC for short.  There are a number of such tools out there from well-respected companies such as EMC (Archer) and CA.  They looked completely awesome and seemed to solve all of my problems with many more features to spare, so I started to get some rough SWAG (scientific wild-ass guess) quotes from a few of the vendors.  Lo and behold, these tools carry price tags of $100k to half a million dollars and beyond.  A budget request for one of these tools was dismissed immediately, with management literally laughing at my suggestion.  OK, so maybe it was on me, right?  Maybe I didn't do a good enough job of selling the tool?  Maybe I didn't engage the right stakeholders to back my request?  Call me a glutton for punishment, but I decided to keep trying.  This time I gathered people I thought would be interested in risk from all different areas of our business for a demo of one of the tools: Trade Compliance, Health and Safety, Facilities, Legal, and many more.  They watched the presentation, asked some fantastic questions, and ultimately left that meeting saying that they thought a GRC solution was a fantastic idea.  That was until I mentioned the price tag.  If it wasn't going to happen even with the budget split between half a dozen different teams, I knew that it simply wasn't going to happen.

As I began to think about the situation I was in, I realized that I wasn't alone in all this.  I talked with friends at various state agencies, friends at risk consultancies, and friends at companies large and small.  They had gone through the same trials and tribulations that I had and, for the most part, fared no better.  Having spent the better part of the last decade coding random applications and websites in PHP and MySQL, I decided that there might be something I could do about it.  I would go home from work and code until the wee hours of the morning.  I would wake up early on my weekends and code again until the family awoke.  After several weeks of this, I had a working prototype for a new risk management system, based on some simplifications of the NIST 800-30 risk management framework and running on a LAMP (Linux, Apache, MySQL, PHP) stack.  SimpleRisk was born.
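
To give you a flavor of what that prototype replaced, here's a boiled-down sketch of a LAMP-backed risk register.  This is illustrative shorthand for this post rather than actual SimpleRisk code; the schema, table name, and credentials are all made up:

```php
<?php
// Illustrative sketch of a LAMP-backed risk register (shorthand for
// this post, not actual SimpleRisk code). One table replaces the
// Excel index and the per-risk Word documents.
$db = new PDO('mysql:host=localhost;dbname=risk_register', 'risk_user', 'secret');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$db->exec("CREATE TABLE IF NOT EXISTS risks (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    subject    VARCHAR(255) NOT NULL,
    owner      VARCHAR(100) NOT NULL,
    likelihood TINYINT NOT NULL,  -- 1 (remote) through 5 (almost certain)
    impact     TINYINT NOT NULL,  -- 1 (insignificant) through 5 (severe)
    status     VARCHAR(32) NOT NULL DEFAULT 'New',
    submitted  DATETIME NOT NULL
)");

// Submitting a risk is a single INSERT: the index maintains itself,
// and every risk lives in one searchable place.
$stmt = $db->prepare(
    "INSERT INTO risks (subject, owner, likelihood, impact, submitted)
     VALUES (?, ?, ?, ?, NOW())"
);
$stmt->execute(['Unpatched internet-facing web server', 'Web Team', 4, 5]);
```

Every complaint from the spreadsheet era - the hand-maintained index, the untrackable updates, the unsearchable documents - goes away once the data lives in one table.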

At the time of this writing, I have released seven official versions of SimpleRisk since March of this year.  It has come a long way since then, but still holds true to its roots.  SimpleRisk is free and open source.  The methodology was designed to be as simple as possible, hence the name.  A five-step process walks you through the basics of risk management (a minimal sketch of this workflow follows the list):

  1. Submit your risks
  2. Plan your mitigations
  3. Perform management reviews
  4. Prioritize for project planning
  5. Review regularly
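
As a rough illustration of how those five steps map onto a risk's lifecycle, here's a tiny sketch.  The status names and transitions are shorthand invented for this post, not SimpleRisk's internal state machine:

```php
<?php
// The five steps above as a simple status progression (names invented
// for this post, not SimpleRisk's internal state machine).
$nextStatus = [
    'New'                => 'Mitigation Planned',  // 1-2. submitted, mitigation planned
    'Mitigation Planned' => 'Mgmt Reviewed',       // 3. management review performed
    'Mgmt Reviewed'      => 'Prioritized',         // 4. prioritized for project planning
    'Prioritized'        => 'Under Review',        // 5. reviewed regularly...
    'Under Review'       => 'Under Review',        // ...and step 5 repeats
];

function advance($status, $nextStatus) {
    // Anything unrecognized falls back to the start of the workflow.
    return isset($nextStatus[$status]) ? $nextStatus[$status] : 'New';
}

echo advance('New', $nextStatus);  // Mitigation Planned
```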

It has every basic feature required of an enterprise risk management system, and I'm adding new ones all the time.  It has five different ways to weight classic risk calculations (i.e., likelihood and impact) and can perform CVSS scoring as well.  It has its own built-in authentication system, but I've built an extra module to do LDAP authentication that I'm giving away to anyone who donates $500 or more to the cause.  It also has a half-dozen different ways to report on the risks, and many more reports should be complete soon.  You can check out the demo (minus the Administrator interface) using the username "user" and password "user" at http://demo.simplerisk.org.  Or, if you're ready to dive right in, you can obtain the download package for free at http://www.simplerisk.org.
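
To show what "different ways to weight likelihood and impact" means in practice, here are a few simplified example formulas.  These are illustrations for this post, not the exact five formulas in the tool; inputs are on a 1-5 scale and scores are normalized to 0-10, CVSS-style:

```php
<?php
// Three illustrative classic risk formulas (examples for this post,
// not the exact formulas shipped in SimpleRisk). Likelihood and
// impact are 1-5; each score is normalized to a 0-10 scale.
function classic_multiply($likelihood, $impact) {
    return ($likelihood * $impact) / 25 * 10;          // raw product, 1..25
}

function classic_average($likelihood, $impact) {
    return (($likelihood + $impact) / 2) / 5 * 10;     // weights both equally
}

function classic_impact_weighted($likelihood, $impact) {
    return ((2 * $impact + $likelihood) / 3) / 5 * 10; // favors impact
}

printf("multiply:        %.1f\n", classic_multiply(4, 5));        // 8.0
printf("average:         %.1f\n", classic_average(4, 5));         // 9.0
printf("impact-weighted: %.1f\n", classic_impact_weighted(4, 5)); // 9.3
```

The same likelihood-4, impact-5 risk lands anywhere from 8.0 to 9.3 depending on the weighting, which is exactly why the tool lets you pick the formula that fits your organization.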

To make your foray into SimpleRisk as simple as possible, I've created a SimpleRisk LAMP Installation Guide that you can use to have the tool up and running in about 30-60 minutes.  And if that proves too difficult or time consuming, you can head to http://www.hostedrisk.com, where, for a fraction of what it would cost to buy a GRC solution, you get your own dedicated SimpleRisk instance: running on hardware dedicated to you, built with security in mind, and including extra modules not part of the standard distribution, with no risk management software to install or upgrade ever again.  Hopefully you won't ever need this, but the option is always there in case you do.

My frustrations with the lack of efficient and cost-effective risk management tools led me to create my own.  My hope is that by making SimpleRisk free and open source, it will benefit the rest of the security community as much as it has already benefited me.  If you have any questions or requests for features that you would like to see included in the tool, I'm always here to help.  SimpleRisk is simple enterprise risk management for the masses.

22 Jul 2010

Static Application Vulnerability Testing: Binary Scanning vs Source Code Scanning

I had a meeting yesterday with a vendor who sells a SaaS solution for binary application vulnerability testing. They tell a very interesting story of a world where dynamic testing ("black box") takes place alongside static testing ("white box") to give you a full picture of your application security posture. They even combine the results with some e-learning aspects so that developers can research the vulnerabilities in the same place they go to find them. In concept, this sounds fantastic, but I quickly turned skeptic, and as I dug deeper into the details, I'm not sure I liked what I found.

I wanted to make sure I fully understood what was going on under the hood here, so I started asking questions about the static testing and how it works. They've got a nice-looking portal where you name your application, give it a version, assign it to a group of developers, and point it to your compiled code (WAR, EAR, JAR, etc.). Once you upload your binaries, their system essentially runs a disassembler on them to get them into assembly code. It's at this level that they start looking for vulnerabilities. They said that this process takes about 3 days initially, and then maybe 2 days after the first time because they are able to re-use some data about your application. Once complete, they say they are able to provide you a report detailing your vulnerabilities and how to fix them.

The thing that immediately struck me as worth noting here was the 2-3 day turnaround. This means that our developers would need to wait a fairly substantial amount of time before getting any feedback on the vulnerability status of their code. In a world full of Agile development, 2-3 days is a lifetime. Compare that to static source code testing where you get actionable results at compile time. The edge here definitely goes to source code testing as I believe most people would prefer the near-instant gratification.

The next thing worth noting was that they are taking binary files and disassembling them in order to find vulnerabilities. This raises one major issue: how can you determine, with any accuracy, the line number of a particular vulnerability in, say, Java source from assembly code generated by disassembling the binaries? By default, it's simply not possible. This vendor claimed that they can by adding in some debug strings at compile time, but even then I'd contend that you're not going to get much. I'm guessing they have some heuristics that can tell which function generated a given block of assembly code, but I'm extremely skeptical that they can do anything with variable names, custom code functions, etc. Some source code scanners, on the other hand, not only tell you which line of code is affected, but can give you an entire list of parameters that are consequently affected by that vulnerability. The edge here definitely goes to source code testing.

The main benefit that I can see with binary testing vs. source code testing is that we can test code that we didn't write. Things like APIs, third-party applications, and open source are all things that we now have visibility into. The only problem is that while we can now see the vulnerabilities in this software, they are unfortunately all things that we can't directly influence, unless we want to send our developers off to work on somebody else's software. I'd argue that scanning for vulnerabilities in that type of code is its authors' responsibility, not ours. Granted, it'd be nice to have validation that there aren't vulnerabilities we're exposing ourselves to by adopting it, but in all honesty, are we really going to take the time to scan somebody else's work? Probably not. The edge here goes to binary testing, with the caveat that it's in something I frankly don't care as much about.

This isn't a complete list of pros and cons by any means. It's just me putting in writing some concerns I had about the technology while talking to this particular vendor. In my opinion, the benefits of source code testing far outweigh any benefits we could get from testing compiled binary files. What do you think about the benefits of one versus the other? I'd certainly love for someone to try to change my mind and show me where the real value lies in binary testing.

12 Sep 2008

Beware the Deceptive SLA, My Friend

We're trying to come to an agreement with a SaaS vendor about performance and availability service level agreements (SLAs).  I discussed this topic some in my previous "SaaS Headaches" post.  I thought it would be instructive to show people the standard kind of "defense in depth" that suppliers put up to avoid being held responsible for what they host for you.

We've been working on a deal with one specific supplier.  As part of it, they'll be hosting some images for our site.  There's a business team primarily responsible for evaluating their functionality and the like; we're just in the mix as the faithful watchdogs of performance and availability for our site.

Round 1 - "What are these SLAs you speak of?"  The vendor offers no SLA.  "Unacceptable," we tell the project team.  They fret about having to worry about that along with the 100 other details of coming to an agreement with the supplier, but duly go back and squeeze them.  It takes a couple of squeezes because the supplier likes to forget about this topic: send a list of five questions, one of them being "SLA," and you get four answers back, with the SLA question ignored.

Round 2 - "Oh, you said 'SLA'!  Oh, sure, we have one of those."  We read the SLA, and it only commits to their main host being pingable.  Our service could be completely down, and it doesn't speak to that.  Back to our project team, where the business users, procurement agent, and legal guy all need more urging before they'll lean on the supplier.  The supplier plays dumb for a while, and then...

5 Aug 2008

Cloud Headaches?

The industry is abuzz with people who are freaked out about the outages that Amazon and other cloud vendors have had.  "Amazon S3 Crash Raises Doubts Among Cloud Customers," says InformationWeek!

This is because people are going into cloud computing with absurdly high expectations.  This year at Velocity, Interop, etc., I've seen people just totally in love with cloud computing - Amazon's specifically, but in general as well.  And it's a good concept for certain applications.  However, it is a computing system just like every other computing system devised by man.  And it has, and will have, problems.

Whether you are using in-house systems, relying on a SaaS vendor, or building "in the cloud," you have the same general concerns.  Am I monitoring my systems?  What is my SLA?  What is my recourse if my system is not hitting it?  What's my DR plan?

Cloud computing is also being called "PaaS," or Platform as a Service.  It's a special case of SaaS.  If you're a company relying on it, then when you contract with a SaaS vendor you establish SLAs and figure out what the remedy is if they're breached.  If you are going into a relationship where you are just paying money for a cloud VM, storage, etc., and there is no enforceable SLA in the relationship, then you need to build the risk of likely and unremediable outages into your business plan.
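
To put "build it into your business plan" in concrete terms, here's a back-of-the-envelope sketch.  The availability and cost-per-hour figures are made-up numbers; plug in your own:

```php
<?php
// Back-of-the-envelope outage budgeting (all figures hypothetical).
$availability = 0.995;          // what the vendor delivers, sans SLA
$hoursPerYear = 24 * 365;
$costPerHour  = 2000;           // revenue/productivity lost per outage hour

$expectedOutageHours = (1 - $availability) * $hoursPerYear;
$expectedAnnualCost  = $expectedOutageHours * $costPerHour;

printf("Expected outage: %.1f hours/year\n", $expectedOutageHours);        // 43.8
printf("Expected cost:   $%s/year\n", number_format($expectedAnnualCost)); // $87,600
```

If that expected cost is more than the premium for a vendor who will sign an enforceable SLA, the choice makes itself.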

I hate to break it to you, but the IT people working at Amazon, Google, etc. are not all that much smarter than the IT people working with you.  So an unjustified faith in a SaaS or cloud vendor - "Oh, it's Amazon, I'm sure they'll never have an outage of any sort, whether across their entire system or localized to my part, and if they do, I'm sure the $100/month I'm paying them will make them give a damn about me" - is unreasonable on its face.

Clouds and cloud vendors are a good innovation.  But they're like every other computing innovation and every other vendor selling one to you.  They'll have bugs and failures.  Expecting them to be different is a failure on your part, not theirs.

15 Jul 2008

SaaS Headaches

There's a lot of promise in the new SaaS (software as a service; what used to be called ASPs, or Application Service Providers, till Microsoft crapped all over that acronym) and newer PaaS (platform as a service) spaces (and look for a steady stream of new "aaS"es to come).  However, there are a lot of gotchas in signing on with a SaaS vendor.  You'd like to be able to believe that they have decent performance, uptime, security, etc., especially after they tell you, "Oh, all kinds of big companies use us; Dell, IBM..."  This is exacerbated by SaaS often being an "end run" around IT in the enterprise, so naive users can get sold a bill of goods without proper technical oversight.  SaaS is a big buzzword now, and there are a lot of startups springing up that do not necessarily have experience running large-scale sites.  Think about how many MMORPG games still get scuttled by poor operational performance.  SaaS is the same.

Here are some things to keep in mind when selecting a SaaS vendor, laced with real-life horror stories from our experiences.

1.  Performance/Availability.  Set a hard performance/availability SLA in the contract.  Many vendors won't even have an SLA clause, or they'll have one that says "99.9% uptime!" with no remedy clause for what happens if they don't hit it.  You want a clear SLA with a clear measurement method and clear "money back" if they miss it; the sketch below shows what those uptime percentages actually buy you.  We use a 2-second global performance SLA as measured by a Keynote Global 35 monitor.  But the SLA isn't the whole story - you are counting on these people to accomplish your goals.
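
Here's the arithmetic behind those uptime percentages, plus a sketch of what a "money back" remedy might compute.  The credit tiers are invented for illustration; your contract's numbers will differ:

```php
<?php
// What an uptime percentage actually buys you each month, plus a
// hypothetical "money back" remedy (credit tiers invented here).
function downtimeBudgetMinutes($uptimePct, $days = 30) {
    return (1 - $uptimePct / 100) * $days * 24 * 60;
}

printf("99.9%%  = %.1f min/month of allowed downtime\n", downtimeBudgetMinutes(99.9));  // 43.2
printf("99.99%% = %.1f min/month of allowed downtime\n", downtimeBudgetMinutes(99.99)); // 4.3

// A remedy clause: service credits scale with how badly the SLA was
// missed in a given month.
function serviceCreditPct($measuredUptimePct, $slaPct = 99.9) {
    if ($measuredUptimePct >= $slaPct)       return 0;   // SLA met, no credit
    if ($measuredUptimePct >= $slaPct - 0.4) return 10;  // minor miss
    return 25;                                           // major miss
}

echo serviceCreditPct(99.7) . "% credit this month\n";   // 10% credit
```

Note that "99.9%" still allows over 40 minutes of downtime a month before any remedy kicks in.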
