Web Admin Blog Real Web Admins. Real World Experience.


Enterprise Risk Management for the Masses

A couple of years ago I decided, with support from my management, that Enterprise Risk Management would become a focal point for my Information Security Program.  I was convinced that framing vulnerabilities as risks was essential, both to give management visibility into issues they didn't know existed and to give our staff the comfort of knowing that the issues that kept them up at night were now being considered for mitigation.  I couldn't have been more right.

I began by collecting the risks submitted by each team in Excel spreadsheets and Word documents.  They had all of the pertinent information (subject, owner, risk assessment, etc.), but very quickly I became a victim of my own success.  Before I knew it, I had more risks than I could efficiently track in this format.  First, it was extremely cumbersome to manually maintain the risk index in Excel.  While Excel is good at calculating formulas, it sucks at maintaining links to external documents; it can be done, but only with considerable manual effort.  Second, maintaining your risk details in Word documents is something best reserved for your worst enemies.  They are difficult to update, difficult to track changes in, difficult to search and, well, just plain difficult.  I told myself there had to be a better way, yet this is what the unfortunate majority are stuck with today.

After some research, it turned out that many years back my company had another security professional interested in Enterprise Risk Management.  They had apparently come to similar conclusions about the Word documents and Excel spreadsheets, but had been able to get some internal development time to create a Lotus Notes-based risk management database.  It was everything I needed, or so I thought, so I started manually entering all of my new risks into this old risk management database.  At first, things seemed to be working well.  I had different views into my data that let me see far more information than before, and management of our various teams could see their risks without involving me.  It was much better, but soon I began to realize the limitations of this approach.  The database itself was rigid.  Changes required going through another internal team for resources and often took a long time to make.  Worse, any updates applied only to risks submitted afterward, not to the existing ones.  Once, I found myself opening and re-saving hundreds of risks just because I had changed my risk calculation formula slightly.  I began looking again for another way.

Soon, my new round of research brought me to a special class of tools called Governance, Risk, and Compliance, or GRC for short.  There are a number of such tools out there from well-respected companies such as EMC Archer and CA.  They looked completely awesome and seemed to solve all of my problems with many more features to spare, so I started to get some SWAG quotes from a few of the vendors.  Lo and behold, these tools carry a price tag of $100k to half a million dollars and beyond.  My budget request for one of these tools was dismissed immediately, with management literally laughing at the suggestion.  OK, so maybe it was on me, right?  Maybe I didn't do a good enough job of selling the tool?  Maybe I didn't engage the right stakeholders to back my request?  Call me a glutton for punishment, but I decided to keep trying.  This time I gathered people I thought would be interested in risk from all different areas of our business for a demo of one of the tools: Trade Compliance, Health and Safety, Facilities, Legal, and many more.  They watched the presentation, asked some fantastic questions, and left that meeting saying they thought a GRC solution was a fantastic idea.  That was, until I mentioned the price tag.  If it wasn't going to happen even with the budget split between half a dozen different teams, I knew it simply wasn't going to happen.

As I began to think about the situation I was in, I realized that I wasn't alone in all this.  I talked with friends at various state agencies, friends at risk consultancies, and friends at companies large and small.  They had gone through the same trials and tribulations that I had and, for the most part, fared no better.  Having spent the better part of the last decade coding random applications and websites in PHP and MySQL, I decided that there might be something I could do about it.  I would go home from work and code until the wee hours of the morning.  I would wake up early on my weekends and code again until the family awoke.  After several weeks of this, I had a working prototype for a new risk management system based on some simplifications of the NIST 800-30 risk management framework and running on my LAMP (Linux Apache MySQL PHP) stack.  SimpleRisk was born.

At the time of this writing, I have released seven official versions of SimpleRisk since March of this year.  It has come a long way since then, but still holds true to its roots.  SimpleRisk is free and open source.  The methodology was designed to be as simple as possible, hence the name.  A five-step process walks you through the basics of risk management:

  1. Submit your risks
  2. Plan your mitigations
  3. Perform management reviews
  4. Prioritize for project planning
  5. Review regularly
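To make the lifecycle above concrete, here is a minimal sketch of a risk record moving through the five steps.  The statuses and fields are illustrative assumptions of mine, not SimpleRisk's actual schema:

```python
from dataclasses import dataclass

# Hypothetical status names for the five-step lifecycle above;
# SimpleRisk's real workflow states may differ.
STATUSES = [
    "submitted",            # 1. Submit your risks
    "mitigation_planned",   # 2. Plan your mitigations
    "mgmt_reviewed",        # 3. Perform management reviews
    "prioritized",          # 4. Prioritize for project planning
    "under_regular_review", # 5. Review regularly
]

@dataclass
class Risk:
    subject: str
    owner: str
    status: str = "submitted"

    def advance(self) -> None:
        """Move the risk to the next stage of the lifecycle."""
        i = STATUSES.index(self.status)
        if i < len(STATUSES) - 1:
            self.status = STATUSES[i + 1]

risk = Risk("Unpatched web server", "ops-team")
risk.advance()
print(risk.status)  # mitigation_planned
```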

It has every basic feature required of an enterprise risk management system, and I'm adding new ones all the time.  It offers five different ways to weight classic risk calculations (i.e., likelihood and impact) and can perform CVSS scoring as well.  It has its own built-in authentication system, but I've built an extra module for LDAP authentication that I'm giving away to anyone who donates $500 or more to the cause.  It also has a half-dozen different ways to report on the risks, with many more reports coming soon.  You can check out the demo (minus the Administrator interface) using the username "user" and password "user" at http://demo.simplerisk.org.  Or, if you're ready to dive right in, you can obtain the download package for free at http://www.simplerisk.org.
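As a flavor of what "weighting classic risk calculations" means, here is a minimal sketch of three common likelihood-and-impact weightings on a 1-10 scale.  These formulas are illustrative assumptions, not SimpleRisk's actual five schemes:

```python
# Illustrative classic risk scoring on a 1-10 scale.
# These three weightings are common examples, not SimpleRisk's
# actual implementation.

def risk_multiply(likelihood: int, impact: int) -> float:
    """Classic likelihood x impact, normalized back to a 0-10 scale."""
    return likelihood * impact / 10.0

def risk_average(likelihood: int, impact: int) -> float:
    """Simple average of the two factors."""
    return (likelihood + impact) / 2.0

def risk_max(likelihood: int, impact: int) -> float:
    """Worst-case weighting: take the higher of the two factors."""
    return float(max(likelihood, impact))

likelihood, impact = 7, 9
print(risk_multiply(likelihood, impact))  # 6.3
print(risk_average(likelihood, impact))   # 8.0
print(risk_max(likelihood, impact))       # 9.0
```

The choice of weighting matters: a multiplicative formula punishes risks that score low on either axis, while a max formula surfaces anything with a single extreme factor.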

In order to make your foray into SimpleRisk as simple as possible, I've created a SimpleRisk LAMP Installation Guide that you can use to have the tool up and running in about 30-60 minutes.  And if that proves too difficult or time-consuming, you can make your way to http://www.hostedrisk.com, where, for a fraction of what it would cost to buy a GRC solution, you get your own dedicated SimpleRisk instance: running on hardware dedicated to you, built with security in mind, and including extra modules not part of the standard distribution, with no risk management software to install or upgrade ever again.  Hopefully you won't ever need this, but the option is always there in case you do.

My frustrations with the lack of efficient and cost-effective risk management tools led me to create one of my own.  My hope is that by making SimpleRisk free and open source, it will benefit the rest of the security community as much as it has already benefited me.  If you have any questions or requests for features that you would like to see included in the tool, I'm always here to help.  SimpleRisk is simple enterprise risk management for the masses.


Are Invisible Barbarians At Your Gates?

A couple of weeks back, HD Moore posted a blog entry entitled "Security Flaws in Universal Plug and Play: Unplug, Don't Play," supporting a Rapid7 whitepaper in which he discusses the 81 million unique IP addresses that respond to UPnP discovery requests on the Internet and the 23 million fingerprints that match a version of libupnp exposing those systems to remote code execution.  His research on the subject is fascinating and I highly recommend reading it, but that's not why I'm writing this.  The first question this research had me asking was whether my organization uses UPnP for anything.  As far as I can tell, the answer is, thankfully, no.  Next, out of curiosity, I began to wonder how many people out there are actively scanning for these vulnerabilities.  A perfect opportunity to fire up our new LYNXeon tool.
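For context on what those "UPnP discovery requests" look like on the wire: discovery uses SSDP, which is a standard M-SEARCH request multicast to 239.255.255.250 on UDP/1900.  Here is a minimal sketch of such a probe in Python (the address, port, and headers come from the SSDP convention; this is my own illustration, not Rapid7's scanner):

```python
import socket

# SSDP discovery targets the standard multicast group on UDP/1900.
SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def build_msearch(mx: int = 2, st: str = "ssdp:all") -> bytes:
    """Build a standard SSDP M-SEARCH discovery request."""
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}\r\n"
        'MAN: "ssdp:discover"\r\n'
        f"MX: {mx}\r\n"
        f"ST: {st}\r\n"
        "\r\n"
    ).encode("ascii")

def discover(timeout: float = 3.0) -> list:
    """Send the probe on the local network and collect any responses."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), (SSDP_ADDR, SSDP_PORT))
    responses = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            responses.append((addr[0], data))
    except socket.timeout:
        pass  # no more devices answered within the timeout
    finally:
        sock.close()
    return responses
```

Any device that answers one of these probes is, by definition, discoverable; the scary part of the research is how many answer from the public Internet.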

Our LYNXeon tool is configured to consume NetFlow data from literally hundreds of routers and switches in our global environment.  One of the most interesting things about it is that it can see the traffic that comes in from our edge routers before it gets squashed by our firewall.  Used this way, it lets us visualize the so-called "Barbarians" at our gates: the hackers out there probing for the weak spots in our security in order to get in.  And since I know that UPnP is not a service we offer to the Internet at large, finding the guys looking to exploit it becomes that much easier.

I fire up LYNXeon, and my first step is to write a query in what is known as "PQL," or "Pattern Query Language."  While their Cyber Analytics Catalog offers a ton of templates for finding potential threats, PQL is the basis of all those queries, and writing your own lets you define your own catalog of things to look for.  The language is pretty easy to understand.  First you define the characteristics of the connections you are looking to find.  After doing some research, I found that HD was looking for openings on UPnP's Simple Service Discovery Protocol (SSDP) service, which typically runs on UDP/1900.  So my query is for connections from external source IPs to internal destination IPs using the UDP protocol on port 1900.  Once the connections have been defined, all that's left is to define the data you want to see in the results.  In total, my PQL query is 15 lines:
[Screenshot: the 15-line PQL query]
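LYNXeon's actual PQL syntax isn't reproduced here, but the filter logic it expresses is simple.  Here is a rough sketch of the same match conditions in plain Python over flow records; the field names and the internal network range are assumptions of mine for illustration:

```python
import ipaddress

# Stand-in for "internal" address space; adjust to your own ranges.
INTERNAL = ipaddress.ip_network("10.0.0.0/8")

def is_ssdp_probe(flow: dict) -> bool:
    """External source -> internal destination, UDP, destination port 1900."""
    src = ipaddress.ip_address(flow["src_ip"])
    dst = ipaddress.ip_address(flow["dst_ip"])
    return (
        flow["proto"] == "udp"
        and flow["dst_port"] == 1900
        and src not in INTERNAL
        and dst in INTERNAL
    )

# Toy flow records standing in for NetFlow data.
flows = [
    {"src_ip": "203.0.113.7", "dst_ip": "10.1.2.3", "proto": "udp", "dst_port": 1900},
    {"src_ip": "10.1.2.3", "dst_ip": "10.4.5.6", "proto": "udp", "dst_port": 1900},
    {"src_ip": "203.0.113.9", "dst_ip": "10.1.2.3", "proto": "tcp", "dst_port": 80},
]
hits = [f for f in flows if is_ssdp_probe(f)]
print(len(hits))  # 1 -- only the external UDP/1900 probe matches
```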
Now it's officially time to make these invisible Barbarians visible.  I tell LYNXeon to only show me results over the last day (to reduce the amount of time the search takes) and then tell it to "Execute Pattern Search" using the pattern file that I just created.  Searches will vary in time based upon the timeframe searched, the number of forwarding devices, and how complicated your search criteria are.  For me, this search returned 539 results in one minute and 38 seconds.
[Screenshot: the pattern search results]
Now that I have results, I just need to choose how to view them.  My personal favorite is the Link Explorer, which shows the data as nodes on a pictorial graph.  I make one quick adjustment using an organizational feature called "Force Directed Layout" to make the picture pretty, and voila!
[Screenshot: the zoomed-out Link Explorer graph]
OK, so zoomed out it looks like a bunch of spider webs.  Now the fun begins as we begin zooming in on each cluster to see what is going on.
[Screenshot: one zoomed-in cluster of UDP/1900 connection attempts]
I've blacked out the IP address of the system these guys are connecting to, as it is irrelevant for the purposes of this post, but you can clearly see that in the past day this one system has had eight unique IP addresses attempt to connect to it on UDP port 1900.  I've got dozens more just like these on that big graph above, with varying degrees of complexity.  From here, LYNXeon allows me to resolve DNS and/or ARIN names for the associated IP addresses.  I can also expand upon those sources to see what else of mine they've been talking to.  Is that cool or what?  It's taken me minutes to find these potential threats, with little more than a few clicks of the mouse.  The Barbarians are most definitely at my gates, silently pounding away, and chances are pretty good that they are doing the same to you.  The question is: can you find them?