A couple of years ago, with support from my management, I decided that Enterprise Risk Management would become a focal point for my Information Security Program. I was convinced that framing vulnerabilities as risks was essential to giving management visibility into issues they didn't know existed and to giving our staff the comfort of knowing that the issues that kept them up at night were now being considered for mitigation by management. I couldn't have been more right.
I began by collecting the risks submitted by each team in Excel spreadsheets and Word documents. They had all of the pertinent information, like a subject, owner, risk assessment, etc., but very quickly I became a victim of my own success. Before I knew it, I had more risks than I could efficiently track in this format. First, it was extremely cumbersome to manually maintain the risk index in Excel. While Excel is good at calculating formulas, it sucks at maintaining links to external documents. It can be done, but it requires quite a bit of manual effort. Second, maintaining your risk details in Word documents is something you should reserve for your worst enemies. They are difficult to update, difficult to track changes in, difficult to search and, well, just plain difficult. I thought to myself that there had to be a better way, yet this is what the unfortunate majority are stuck with today.
After some research, it turned out that many years back, my company had another security professional who was interested in Enterprise Risk Management. Apparently, they had come to conclusions similar to mine about the Word documents and Excel spreadsheets, but they were able to get some internal development time to create a Lotus Notes based risk management database. It was everything that I needed, or so I thought, so I started to manually enter all of my new risks into this old risk management database. At first, things seemed to be working well. I had some different views into my data that allowed me to see far more information than I could before. It also allowed management of our various teams to see their risks without involving me. It was much better, but soon I began to realize the limitations of this approach. The database itself was rigid. Changes required me to go through another internal team for resources and often took a long time to make. Also, any updates that were made didn't modify the current risks, only the ones submitted after that point. Once, I found myself opening and re-saving hundreds of risks just because I decided to change my risk calculation formula slightly. I began looking again for another way.
Soon, my new round of research brought me to a special set of tools called Governance, Risk, and Compliance, or GRC for short. There are a number of such tools out there from well-respected companies such as EMC Archer and CA. They looked completely awesome and seemed to solve all of my problems with many more features to spare, so I started to get some SWAG quotes from a few of the vendors. Lo and behold, these tools carry a price tag of $100k to half a million dollars and beyond. A budget request for one of these tools was dismissed immediately, with management literally laughing at my suggestion. OK, so maybe it was on me, right? Maybe I didn't do a good enough job of selling the tool? Maybe I didn't engage the right stakeholders to back my request? I guess you could call me a glutton for punishment, but I decided to keep trying. This time I gathered people I thought would be interested in risk from all different areas of our business for a demo of one of the tools: Trade Compliance, Health and Safety, Facilities, Legal, and many more. They watched the presentation, asked some fantastic questions, and ultimately left that meeting saying that they thought a GRC solution was a fantastic idea. That was until I mentioned the price tag. If it wasn't going to happen even with the budget split between half a dozen different teams, then I knew it simply wasn't going to happen.
As I began to think about the situation that I was in, I realized that I wasn't alone in all this. I talked with friends at various state agencies, friends at risk consultancies, and friends at companies large and small. They had gone through the same trials and tribulations that I had and fared no better for the most part. Having spent the better part of the last decade coding random applications and websites in PHP and MySQL, I decided that there might be something I could do about it. I would go home from work and start coding until the wee hours of the morning. I would wake up early on my weekends and start coding again until the family awoke. After several weeks of this, I had a working prototype for a new risk management system based on some simplifications of the NIST 800-30 risk management framework and running on my LAMP (Linux Apache MySQL PHP) stack. SimpleRisk was born.
At the time of this writing, I have released seven official versions of SimpleRisk since March of this year. It has come a long way since then, but still holds true to its roots. SimpleRisk is free and open source. The methodology was designed to be as simple as possible, hence the name. A five-step process walks you through the basics of risk management:
- Submit your risks
- Plan your mitigations
- Perform management reviews
- Prioritize for project planning
- Review regularly
It has every basic feature required of an enterprise risk management system, and I'm adding new ones all the time. It has five different ways to weight classic risk calculations (i.e., likelihood and impact) and can perform CVSS scoring as well. It has its own built-in authentication system, but I've built an extra module for LDAP authentication that I'm giving away to anyone who donates $500 or more to the cause. It also has a half-dozen different ways to report on the risks, and many more reports should be complete soon. You can check out the demo (minus the Administrator interface) using the username "user" and password "user" at http://demo.simplerisk.org. Or, if you're ready to dive right in, you can obtain the download package for free at http://www.simplerisk.org.
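The classic likelihood-and-impact calculation is simple enough to sketch. The function below is a generic illustration, not SimpleRisk's actual code; the 1-5 rating scale and the normalization to a 0-10 score are my assumptions, and the real weighting options differ:

```python
def classic_risk(likelihood, impact, max_level=5):
    """Classic risk score: likelihood x impact, normalized to a 0-10 scale.

    Assumes likelihood and impact are each rated 1..max_level; the
    weighting formulas an actual tool uses may differ from this sketch.
    """
    if not (1 <= likelihood <= max_level and 1 <= impact <= max_level):
        raise ValueError("likelihood and impact must be between 1 and max_level")
    return round(likelihood * impact * 10 / (max_level ** 2), 1)

print(classic_risk(3, 4))  # 4.8
print(classic_risk(5, 5))  # 10.0
```

Under these assumptions, a risk rated likelihood 3 and impact 4 scores 4.8, while a 5/5 risk pins the scale at 10.0.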
In order to make your foray into SimpleRisk as simple as possible, I've created a SimpleRisk LAMP Installation Guide that you can use to have the tool up and running in about 30-60 minutes. And if all else fails and that proves too difficult or time consuming, you can make your way to http://www.hostedrisk.com, where for a fraction of what it would cost to buy a GRC solution you will have your own dedicated SimpleRisk instance: running on hardware dedicated to you, built with security in mind, and including extra modules not part of the standard distribution. You'll never have to worry about installing or upgrading risk management software again. Hopefully you won't ever need this, but the option is always there in case you do.
My frustrations with a lack of efficient and cost-effective risk management tools led me to create one of my own. My hope is that by making SimpleRisk free and open source, it will benefit the rest of the security community as much as it has already benefited me. If you have any questions or requests for features that you would like to see included in the tool, I'm always here to help. SimpleRisk is simple, enterprise risk management, for the masses.
This presentation was by Boaz Belboard, the Executive Director of Information Security for Wireless Generation and the Project Leader for the OWASP Security Spending Benchmarks Project. My notes are below:
It does cost more to produce a secure product than an insecure product.
Most people will still shop somewhere, go to a hospital, or enroll in a university after they have had a data breach.
Why do we spend on security? How much should we be spending?
- Security imposes extra costs on organizations
- The "security tax" is relatively well known for network and IT security - 5 to 10% (years of Gartner, Forrester, and other studies)
- No comparable data for development or web apps
- Regulations and contracts usually require "reasonable measures". What does that mean?
OWASP Security Spending Benchmarks Project
- 20 partner organizations, many contributors
- Open process and participation
- Raw data available to community
Reasons For Investing in Security
- Contractual and Regulatory Compliance
- Incident Prevention, Risk Mitigation
- Cost of Entry
- Competitive Advantage
Technical and Procedural Principles
- Managed and Documented Systems
- Business-need access
- Minimization of sensitive data use
- Security in Design and Development
- Auditing and Monitoring
- Defense in Depth
Specific Activities and Projects
- Security Policy and Training
- DLP-Type Systems
- Internal Configurations Management
- Credential Management
- Security in Development
- Locking down internal permissions
- Secure Data Exchange
- Network Security
- Application Security Programs
This presentation was by Rohit Sethi, the Project Leader for the Secure Pattern Analysis Project at OWASP, who works at Security Compass, a security analysis and training company. My notes from the session are below:
- Before anyone starts building complex systems, they need to design.
- We create threat models on completed designs.
- What about during design?
- Book: "Core J2EE Patterns Best Practices and Design Strategies"
- If you use J2EE development, chances are you're using patterns documented here
- Core J2EE patterns are used extensively
- Patterns are used in JSF, Velocity, Struts, Tapestry, Spring, and Proprietary Frameworks
Example Project: Analyze Patterns
Use to Implement:
- Synchronization Tokens as Anti-CSRF Mechanism
- Page-level authorizations
- XSLT and XPath vulnerabilities
- XML Denial of Service
- Disclosure of information in SOAP faults
- Publishing WSDL files
- Unhandled commands
- Unauthorized commands
- Analyze patterns for security pitfalls to avoid
- Determine how patterns can implement security controls
- Provide advice portable to most frameworks
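The first control listed above, synchronization tokens as an anti-CSRF mechanism, boils down to a random per-session value that must be echoed back with every state-changing request. A minimal Python sketch of the idea, where the plain dict stands in for whatever session store a real framework provides:

```python
import hmac
import secrets

def issue_token(session):
    """Create a random synchronization token and store it in the session."""
    token = secrets.token_hex(16)
    session["csrf_token"] = token
    return token  # embed this in a hidden form field on each page

def verify_token(session, submitted):
    """Compare the submitted token to the stored one in constant time."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}
token = issue_token(session)
print(verify_token(session, token))     # True
print(verify_token(session, "forged"))  # False
```

A forged cross-site request fails because the attacker cannot read the victim's token; the constant-time comparison avoids leaking it byte by byte.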
A security pattern is not the same as a security analysis of a pattern.
- Designing new web application frameworks (make the next generation of frameworks secure by default)
- Designing new apps that use the patterns
- Source code review of existing apps
- Runtime assessment of existing apps
- Integrate with threat modeling of new or existing apps
You can help:
- Tell developers
- Improve the analysis
- Add code review and examples to the existing pattern book
- Look at other pattern books to see if there are other patterns that we should analyze
- New web application framework idea + Design-time security analysis = Secure-by-default web application framework
This presentation was by Christian Heinrich, the project leader for the OWASP "Google Hacking" Project. The presentation is published at http://www.slideshare.net/cmlh and is dual-licensed under the OWASP License and AU Creative Commons 2.5.
OWASP Testing Guide v3 - Spiders/Robots/Crawlers
1. Automatically traverses hyperlinks
2. Recursively retrieves content referenced
Behavior is governed by the Robots Exclusion Protocol. The traditional method is a robots.txt file located in the web root directory; a newer method is a meta tag such as <META NAME="Googlebot" CONTENT="nofollow">, which is not supported by all robots/spiders/crawlers. Regular expressions are supported by only a minority of them. "User-agent: *" applies to all spiders/robots/crawlers, or you can specify a specific robot name. The protocol can be intentionally ignored, so it is not a mechanism for httpd access control or digital rights management.
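Python's standard library can evaluate these rules directly, which is handy for checking what a compliant crawler would and wouldn't fetch. The robots.txt content and the example.com URLs below are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: everyone is kept out of /private/,
# and a specific robot name ("Googlebot") is excluded entirely.
rules = """\
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("SomeBot", "https://example.com/private/report"))  # False
print(rp.can_fetch("SomeBot", "https://example.com/index.html"))      # True
print(rp.can_fetch("Googlebot", "https://example.com/index.html"))    # False
```

Of course, as the notes above say, nothing forces a crawler to honor these rules; this only tells you what a well-behaved one would do.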
Testing - Robots Exclusion Protocol
- Sign into Google Webmaster Tools
- On the dashboard, click the URL
- Click "Tools"
- Click "Analyze robots.txt"
Search Engine Discovery
Microsoft Remote Desktop Web Connection: intitle:Remote.Desktop.Web.Connection inurl:tsweb
VNC: "VNC Desktop" inurl:5800
Outlook Web Access: inurl:"exchange/logon.asp"
Outlook Web Access: intitle:"Microsoft Outlook Web Access - Logon"
Adobe Acrobat PDF: filetype:pdf
Google caught onto this and is now displaying a "We're sorry" message for certain searches. To get around this, use different search queries that return overlapping results.
Google Advanced Search Operators: "site:" and "cache:". There are two ways of using "site:": either as "site:www.google.com", which returns results only for that specific subdomain, or as "site:google.com", which returns all hostnames and subdomains. Use "cache:www.owasp.org" to display an indexed web page from the Google cache. The "Cached" link next to a search result does the same thing.
You can get updates of the latest relevant Google results (web, news, etc) using Google Alerts.
Download Indexed Cache
Google SOAP Search API. Queries are limited to either 10 words or 2,048 bytes. One thousand search queries are allowed per day, with search results limited to indices 0-999. Up to 10K possible results from 10 different search queries.
$Google_SOAP_Search_API->doGoogleSearch( $key, $q, $start, $maxResults, $filter, $restricts, $safeSearch, $lr, $ie, $oe );
See presentation for response.
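Those limits dictate the paging arithmetic: at 10 results per doGoogleSearch call and only indices 0-999 served, sweeping one query's full window takes 100 calls, so ten distinct queries exhaust the 1,000-call daily quota for up to 10K results. A Python sketch of just the offset math (the SOAP API itself is long retired, so no client call is assumed here):

```python
MAX_RESULTS = 10   # results returned per doGoogleSearch call
WINDOW = 1000      # the API only serves result indices 0-999

def page_offsets(max_results=MAX_RESULTS, window=WINDOW):
    """Start indices needed to sweep the whole result window for one query."""
    return list(range(0, window, max_results))

offsets = page_offsets()
print(len(offsets), offsets[0], offsets[-1])  # 100 0 990
```

Each offset would be passed as $start in a doGoogleSearch call, with $maxResults set to 10.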
The proof-of-concept tool is "dic.pl" ("Download Indexed Cache"), which downloads the search results. It is licensed under the Apache License 2.0 and produces a URL and cachedSize response.
OWASP Google Hacking Project
Tools are built in Perl using the CPAN modules SOAP::Lite, Net::Google, and Perl::Critic. The development environment is based on Eclipse with the EPIC plug-in. The Subversion repository is at code.google.com.
Upcoming presentations at ToorCon X in San Diego, SecTor 2008 in Toronto, Canada, and RUXCON 2K8 in Sydney, Australia.
"TCP Input Text" Proof of Concept
"Speak English" Google Translate Workaround
Refactor and third project review of the PoC Perl code, with public release at RUXCON 2K8 in November 2008.
Check in at code.google.com after RUXCON 2K8
4 hr "half day" training course Q1 2009