Web Admin Blog: Real Web Admins. Real World Experience.

1 October 2013

Rating Your Options for Password Policies and Access Management

Today I did an interesting experiment that I thought was worth sharing with others.  I tried to come up with a ten item list of password/access management policies based on increasing levels of security.  On my list, a "10" effectively means the most secure access management and password policies, whereas a "0" effectively means nothing.  Here is my list:

  1. Open Authentication: Exactly like it sounds.  No username or password required.  If you can access the application, then you can get in.
  2. Non-Limited Credentialed Authentication: The use of a username or some other form of non-open authentication, but perhaps with a globally shared account or an extremely weak password.  (ex: "password")
  3. Weak Credentialed Authentication: The use of a unique username plus a basic password policy that only requires a minimum number of characters, but says nothing about their content. (ex: "google")
  4. Complex Credentialed Authentication: The use of a unique username plus a basic password policy not only requiring a minimum number of characters, but also requiring some level of complexity.  This can be easily defeated using Supercharged Password Cracking Techniques.  (ex: "G@@gl3")
  5. Complex Non-Random Credentialed Authentication: The use of a unique username plus a password policy requiring a minimum number of characters, requiring complexity, and checking against permutations of dictionary words and commonly used passphrases, while still allowing combinations of multiple dictionary words. (ex: "i@teAHors3")
  6. Complex Random Lengthy Credentialed Authentication: The use of a unique username plus a password policy where passwords are truly randomly generated and are of a sufficient length to effectively be unbreakable.  Something in the neighborhood of 20+ characters is pretty decent.  This is the point where passwords become so complicated that the average user cannot remember them on their own and is forced to record them somewhere. (ex: "Hh#8qcFhe9H$#324dnakfB3q4fUY@*")
  7. Non-Complex Two-Factor Authentication: The use of a unique username plus what is commonly referred to as "two factor" authentication.  The use of "something you know", like a basic 6-digit PIN that is easy for you to remember, alongside "something you have" that is a unique and randomly seeded value.  The key here is that the "something you have" part is something that changes on a very frequent basis such as every 60 seconds. (ex: "147246965201")
  8. Complex Two-Factor Authentication: The use of a unique username plus the two factor authentication where the "something you know" is a password that is more complex than a simple numeric and the "something you have" is unique, randomly seeded, and changing frequently like above.  Perhaps the "something you have" is even something more than a simple numeric here as well. (ex: "H8n@m78an1vA")
  9. Non-Complex Three-Factor Authentication: The use of a unique username, the "two factor" authentication piece in number seven above, plus a third "something you are" component.  This is a unique biometric value such as a palm scan, fingerprint scan, or retinal scan.
  10. Complex Three-Factor Authentication: The use of a unique username, the "two factor" authentication piece in number eight above, plus a third "something you are" unique biometric value.

I will admit that this list is dumbed down quite a bit, as there are other factors that can certainly weigh in here as well.  For example, having a maximum password age of one day instead of ninety days can significantly change the amount of time that an attacker has to brute-force an account.  Other influencing factors could be the remembered password history and the account lockout policy.  For the most part, though, these can be a component of any of the above policies.
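To make the middle tiers a bit more concrete, here is a rough PHP sketch of how a check for levels three through five might work.  This is purely an illustration of the idea, not production code; the minimum length, the tiny dictionary, and the leetspeak map are all placeholder assumptions standing in for a real policy and a real wordlist.

```php
<?php
// Rough sketch of a check for tiers 3 through 5 above -- illustration only.
// The minimum length, dictionary, and leetspeak map are placeholders.
function password_tier(string $password, array $dictionary): int
{
    if (strlen($password) < 6) {
        return 2; // too short for even the basic length-only policy
    }

    $complex = preg_match('/[a-z]/', $password)
        && preg_match('/[A-Z]/', $password)
        && preg_match('/\d/', $password)
        && preg_match('/[^a-zA-Z\d]/', $password);

    if (!$complex) {
        return 3; // meets length only: weak credentialed authentication
    }

    // Undo a few common "leetspeak" substitutions, then see whether the result
    // is just a mangled single dictionary word (tier 4) or something better (tier 5).
    $normalized = strtolower(strtr($password, ['@' => 'a', '3' => 'e', '0' => 'o', '1' => 'l', '$' => 's']));
    foreach ($dictionary as $word) {
        if (levenshtein($normalized, strtolower($word)) <= 2) {
            return 4; // complex, but still close to a single dictionary word
        }
    }

    return 5; // complex and not a simple permutation of one dictionary word
}

echo password_tier('google', ['google', 'horse']) . "\n";     // 3 (no complexity)
echo password_tier('G@@gl3', ['google', 'horse']) . "\n";     // 4 (mangled "google")
echo password_tier('i@teAHors3', ['google', 'horse']) . "\n"; // 5 (multi-word phrase)
```

A real implementation would obviously use a full wordlist and a much richer set of mangling rules, but the progression from "long enough" to "complex" to "not just a dressed-up dictionary word" is the point.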

Worth noting here is that as security increases, the user experience likely decreases due to the added complexity.  The exception may be the move from number six to number seven on my list.  The move to two-factor authentication should add enough constant change to the formula to allow for a simpler password while still making the account harder to brute force.  There is a trade-off in always having to carry a token device with you and in handling the situation where a user cannot access their token, but otherwise this becomes the real sweet spot where usability meets security.
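To illustrate why that "something you have" value can change every 60 seconds, here is a minimal sketch of a TOTP-style code generator along the lines of RFC 4226/6238.  The shared secret, the 60-second step, and the six digits are placeholder assumptions for this example; real tokens typically use Base32-encoded secrets and their own step size.

```php
<?php
// Minimal TOTP-style code generator (in the spirit of RFC 4226/6238), for
// illustration only.  The secret below is a placeholder.
function totp_code(string $secret, int $timeStep = 60, int $digits = 6): string
{
    $counter    = intdiv(time(), $timeStep);        // changes every $timeStep seconds
    $binCounter = pack('N2', 0, $counter);          // 8-byte big-endian counter
    $hash       = hash_hmac('sha1', $binCounter, $secret, true);

    $offset = ord($hash[19]) & 0x0F;                // dynamic truncation (RFC 4226)
    $value  = ((ord($hash[$offset]) & 0x7F) << 24)
            | ((ord($hash[$offset + 1]) & 0xFF) << 16)
            | ((ord($hash[$offset + 2]) & 0xFF) << 8)
            |  (ord($hash[$offset + 3]) & 0xFF);

    return str_pad((string) ($value % pow(10, $digits)), $digits, '0', STR_PAD_LEFT);
}

echo totp_code('placeholder-shared-secret') . "\n"; // a new 6-digit code every minute
```

Because the code is derived from the current time window and a secret the attacker does not have, a captured value is useless a minute later, which is exactly what lets the memorized portion stay simple.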

What do you think?  Is my list fairly accurate or am I way off base here?  Am I missing something important?  Please feel free to comment and provide your own list and/or feedback for mine.

15 September 2013

Enterprise Risk Management for the Masses

A couple of years ago I decided, along with support from my management, that Enterprise Risk Management would become a focal point for my Information Security Program.  I was convinced that framing vulnerabilities in the form of risks was essential to giving management visibility into issues they didn't know existed and to giving our staff the comfort of knowing that the issues that caused them to lose sleep at night were now being considered for mitigation by management.  I couldn't have been more right.

I began by collecting the risks submitted by each team in Excel spreadsheets and Word documents.  They had all of the pertinent information like a subject, owner, risk assessment, etc., but very quickly I became a victim of my own success.  Before I knew it, I had more risks than I could efficiently track in this format.  First off, it was extremely cumbersome to manually maintain the risk index in Excel.  While Excel is good at calculating formulas, it sucks at maintaining links to external documents.  It can be done, but it requires quite a bit of manual effort.  Second, maintaining your risk details in Word documents is something you should reserve only for your worst enemies.  They are difficult to update, difficult to track updates in, difficult to search and, well, just plain difficult.  I thought to myself that there had to be a better way, yet this is what the unfortunate majority out there are stuck with today.

After some research, it turns out that many years back my company had another security professional who was interested in Enterprise Risk Management.  Apparently, they had come to similar conclusions as I did about the Word documents and Excel spreadsheets, but they were able to get some internal development time to create a Lotus Notes based risk management database.  It was everything that I needed, or so I thought, so I started to manually enter all of my new risks into this old risk management database.  At first, things seemed to be working well.  I had different views into my data that let me see far more information than I could before.  Management of our various teams could also see their risks without involving me.  It was much better, but soon I began to realize the limitations of this approach.  The database itself was rigid.  Changes required me to go through another internal team for resources, and they often took a long time to make.  Also, any updates that were made didn't modify the current risks, only the ones submitted after that point.  Once, I found myself opening and re-saving hundreds of risks just because I decided to change my risk calculation formula slightly.  I began looking again for another way.

Soon, my new round of research brought me to a special set of tools called Governance, Risk, and Compliance, or GRC for short.  There are a number of such tools out there from well-respected companies such as EMC Archer and CA.  They looked completely awesome and seemed to solve all of my problems with many more features to spare, so I started to get some SWAG quotes from a few of the vendors.  Lo and behold, these tools carry a price tag of $100k to half a million dollars and beyond.  A request for budget for one of these tools was dismissed immediately, with management literally laughing at my suggestion.  OK, so maybe it was on me, right?  Maybe I didn't do a good enough job of selling the tool?  Maybe I didn't engage the right stakeholders to back my request?  I guess you could call me a glutton for punishment, but I decided to keep trying.  This time I gathered people I thought would be interested in risk from all different areas of our business for a demo of one of the tools: Trade Compliance, Health and Safety, Facilities, Legal, and many more.  They watched the presentation, asked some fantastic questions, and ultimately left that meeting saying that they thought a GRC solution was a fantastic idea.  That was until I mentioned the price tag.  If it wasn't going to happen even with the budget split between half a dozen different teams, then I knew it simply wasn't going to happen.

As I began to think about the situation I was in, I realized that I wasn't alone in all this.  I talked with friends at various state agencies, friends at risk consultancies, and friends at companies large and small.  They had gone through the same trials and tribulations that I had and, for the most part, fared no better.  Having spent the better part of the last decade coding random applications and websites in PHP and MySQL, I decided that there might be something I could do about it.  I would go home from work and code until the wee hours of the morning.  I would wake up early on weekends and code again until the family awoke.  After several weeks of this, I had a working prototype for a new risk management system based on some simplifications of the NIST 800-30 risk management framework and running on a LAMP (Linux, Apache, MySQL, PHP) stack.  SimpleRisk was born.

At the time of this writing, I have released 7 official versions of SimpleRisk since March of this year.  It has come a long way since then, but still holds true to its roots.  SimpleRisk is free and open source.  The methodology was designed to be as simple as possible, hence the name.  A five step process walks you through the basics of risk management:

  1. Submit your risks
  2. Plan your mitigations
  3. Perform management reviews
  4. Prioritize for project planning
  5. Review regularly

It has every basic feature required of an enterprise risk management system, and I'm adding new ones all the time.  It has five different ways to weight classic risk calculations (i.e. likelihood and impact) and can perform CVSS scoring as well.  It has its own built-in authentication system, but I've built an extra module to do LDAP authentication that I'm giving away to anyone who donates $500 or more to the cause.  It also has a half-dozen different ways to report on the risks, and many more reports should be complete soon.  You can check out the demo (minus the Administrator interface) using the username "user" and password "user" at http://demo.simplerisk.org.  Or, if you're ready to dive right in, you can obtain the download package for free at http://www.simplerisk.org.
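For anyone curious what a classic likelihood-and-impact calculation boils down to, here is a tiny sketch.  To be clear, this is my illustration of the general idea rather than SimpleRisk's actual code, and the 1-to-5 scales and 0-to-10 normalization are assumptions made for the example.

```php
<?php
// Illustration of a classic likelihood x impact calculation -- not SimpleRisk's
// actual code.  The 1-to-5 inputs and 0-to-10 normalization are assumptions for
// this example, chosen so the result can sit alongside CVSS-style scores.
function classic_risk_score(int $likelihood, int $impact): float
{
    $likelihood = max(1, min(5, $likelihood));
    $impact     = max(1, min(5, $impact));

    return round($likelihood * $impact * (10 / 25), 1); // 1..25 rescaled to 0.4..10
}

// A likely (4) flaw with severe (5) impact lands near the top of the scale.
echo classic_risk_score(4, 5) . "\n"; // 8
```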

In order to make your foray into SimpleRisk as simple as possible, I've created a SimpleRisk LAMP Installation Guide that you can use to have the tool up and running in about 30-60 minutes.  And if all else fails and that proves too difficult or time-consuming, then you should make your way to http://www.hostedrisk.com, where, for a fraction of what it would cost to buy a GRC solution, you will have your own dedicated SimpleRisk instance: running on hardware dedicated to you, built with security in mind, and including extra modules not part of the standard distribution, so you'll never have to worry about installing or upgrading risk management software ever again.  Hopefully you won't ever need this, but the option is always there in case you do.

My frustrations with the lack of efficient and cost-effective risk management tools led me to create one of my own.  My hope is that by making SimpleRisk free and open source, it will benefit the rest of the security community as much as it has already benefited me.  If you have any questions or requests for features that you would like to see included in the tool, I'm always here to help.  SimpleRisk is simple enterprise risk management for the masses.

14 January 2010

Book Review: Smart & Gets Things Done, by Joel Spolsky

Joel Spolsky is a bit of an internet celebrity: the founder of Fog Creek Software and the writer of joelonsoftware.com, an influential programming Web site.

The book is about technical recruiting and retention, and even though it's a small-format book of under 200 pages, it covers a lot of different topics.  His focus is on hiring programmers, but I think a lot of the same principles apply to hiring for systems admin/Web systems positions.  Hiring has been one of the hardest parts of being a Web systems manager, so I got a lot out of the book and tried putting it into practice.  Results detailed below!

The Book

The first chapter talks about the relative effectiveness of programmers.  We often hire programmers and pay the good ones 10% more than the bad ones.  But he has actual data, drawn from a Yale professor who repeatedly teaches the same CS class and assigns the same projects, showing something that those of us who have been in the field for a long time already know: the gap in achievement between the best programmers and the worst ones is a factor of ten.  That's right.  In a highly controlled environment, the best programmers completed projects 3-4 times faster than the average and 10x faster than the slowest ones.  (And this same relationship holds when adjusting for quality of results.)  I've been in IT for 15 years and I can guarantee this is true.  You can give the same programming task to a bunch of different programmers and get results ranging from "Here, I did it last night" to "Oh, that'll take three months."  He goes on to note other ways in which 10 mediocre programmers cannot achieve the same "high notes" as one good programmer.  This all reinforces how important the programmer, as human capital, is to an organization.

Next, he delves into how you find good developers.  Unfortunately, the easy answers don't work.  Posting on monster.com or craigslist gets lots of hits but few keepers.  Employee referrals don't always get the best people either.  So how do you find them?  He has three suggestions.

  1. Go to the mountain
  2. Internships
  3. Build your own community

“Go to the mountain” means to figure out where the smart people you want to hire hang out, and go hang out there.  Conferences.  Organizations.  Web sites.  General job sites are zoos; you need venues that are more specifically on target.  Want a security guy?  Post on the OWASP or ISSA forums, not monster.

We do pretty well with internships, even enhancing that with company-sponsored student sourcing/class projects and a large campus recruiting program.  He has some good sub-points, however, like making your offers early.  If you liked them as an intern, offer them a full-time job at that point for when they graduate; don't wait.  Waiting puts you into more of a competitive situation.  And interns should be paid, given great work to do, and courted for the permanent job during the internship.

Building a community – he acknowledges that's hard.  Our company has external communities, but not really for IT.  For a lot of positions we should be on our own forums like fricking scavengers, trying to hire the people who post there.

13 November 2009

OWASP Top 10 – 2010

This presentation was by Dave Wichers, COO of Aspect Security and an OWASP Board Member.  My notes are below:

What's Changed?

  • It's about Risks, not just vulnerabilities
    • New title is: "The Top 10 Most Critical Web Application Security Risks"
  • OWASP Top 10 Risk Rating Methodology
    • Based on the OWASP Risk Rating Methodology, used to prioritize Top 10
  • 2 Risks Added, 2 Dropped
    • Added: A6 - Security Misconfiguration
      • Was A10 in 2004 Top 10: Insecure Configuration Management
    • Added: A8 - Unvalidated Redirects and Forwards
      • Relatively common and VERY dangerous flaw that is not well known
    • Removed: A3 - Malicious File Execution
      • Primarily a PHP flaw that is dropping in prevalence
    • Removed: A6 - Information Leakage and Improper Error Handling
      • A very prevalent flaw that does not (normally) introduce much risk
  1. A1 - Injection: Tricking an application into including unintended commands in the data sent to an interpreter; a quick sketch follows this list. (http://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet)
  2. A2 - Cross Site Scripting (XSS): Raw data from attacker is sent to an innocent user's browser.  For large chunks of user supplied HTML, use OWASP's AntiSamy to sanitize this HTML to make it safe.  (http://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet)
  3. A3 - Broken Authentication and Session Management: Means credentials have to go with every request.  Should use SSL for everything requiring authentication.
  4. A4 - Insecure Direct Object Reference: This is part of enforcing proper "Authorization", along with A7 - Failure to Restrict URL Access.
  5. A5 - Cross Site Request Forgery (CSRF): An attack where the victim's browser is tricked into issuing a command to a vulnerable web application.  Vulnerability is caused by browsers automatically including user authentication data with each request.  (Check out OWASP CSRFGuard, OWASP CSRFTester, http://www.owasp.org/index.php/CSRF_Prevention_Cheat_Sheet)
  6. A6 - Security Misconfiguration: All through the network and platform.  Don't forget the development environment.  Think of all the places your source code goes.  All credentials should change in production.
  7. A7 - Failure to Restrict URL Access: This is part of enforcing proper "authorization", along with A4 - Insecure Direct Object References.
  8. A8 - Unvalidated Redirects and Forwards: Web application redirects are very common and frequently include user supplied parameters in the destination URL.  If they aren't validated, an attacker can send the victim to a site of their choice.
  9. A9 - Insecure Cryptographic Storage: Storing sensitive data insecurely.  Failure to identify all sensitive data.  Failure to identify all the places that this sensitive data gets stored.  Failure to properly protect this data in every location.
  10. A10 - Insufficient Transport Layer Protection
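As a quick illustration of the first two items, here is what parameterized queries (for A1) and output encoding (for A2) look like in PHP.  The database credentials, table, and column names are made up for the example; the cheat sheets linked above cover the full guidance.

```php
<?php
// A1 - Injection: keep user input out of the query structure by using a
// parameterized (prepared) statement instead of string concatenation.
// Connection details, table, and column names here are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'app_password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $pdo->prepare('SELECT id, username FROM users WHERE username = :username');
$stmt->execute([':username' => $_GET['username'] ?? '']);
$user = $stmt->fetch(PDO::FETCH_ASSOC);

// A2 - XSS: encode anything user-supplied before it goes back into HTML.
if ($user) {
    echo 'Hello, ' . htmlspecialchars($user['username'], ENT_QUOTES, 'UTF-8');
}
```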

OWASP Top 10 Risk Rating Methodology

  • Attack Vector (How hard for an attacker to use this flaw - 1 (Easy), 2 (Average), 3 (Difficult))
  • Weakness Prevalence (How often is it found - 1 (Widespread), 2 (Common), 3 (Uncommon))
  • Weakness Detectability (How hard is it for an attacker to find the flaw - 1 (Easy),  2 (Average), 3 (Difficult))
  • Technical Impact (1 (Severe), 2 (Moderate), 3 (Minor))

This is generic across the internet, not specific to any organization.
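As a rough worked example of how these factors might combine (this is my own simple averaging, not necessarily the exact weighting OWASP uses), note that 1 is the worst value on every scale, so lower numbers mean higher risk:

```php
<?php
// My own rough combination of the 1-3 factor scores above -- not necessarily
// the exact weighting OWASP used.  On these scales 1 is the worst value
// (Easy, Widespread, Severe), so a lower result means a higher risk.
$likelihoodFactors = [
    'attack_vector'          => 1, // Easy
    'weakness_prevalence'    => 2, // Common
    'weakness_detectability' => 1, // Easy
];
$technicalImpact = 2;              // Moderate

$likelihood = array_sum($likelihoodFactors) / count($likelihoodFactors); // (1+2+1)/3
printf("Likelihood %.2f x impact %d = %.2f (lower = riskier)\n",
       $likelihood, $technicalImpact, $likelihood * $technicalImpact);
```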

Started a new "Prevention Cheatsheet Series" that the Top 10 references (XSS, SQL Injection, Transport Layer Security, CSRF, Direct Object Reference).

What is actually being released is RC1 of the Top 10.  They are encouraging people to provide comments through the end of the year and will then use that feedback to post the final Top 10 in January 2010.

23 March 2009

The Importance of Log Management in Today’s Insecure World

For my last session of the first day of the TRISC 2009 Conference, I made the mistake of attending Ricky Allen and Randy Holloway's presentation on "The Importance of Log Management in Today's Insecure World".  I say "mistake" because, out of all of the presentations I attended over the entire three days of the conference, this was by far the most vendory, the least security oriented, and the worst presentation.  Both of these guys work for ArcSight, and while they certainly know their log management, it was just a lame excuse for a presentation; if I were able to go back in time, I would have attended Chip Meadows' presentation on "Pocket protectors, Purple hair and Paranoia" instead, as I heard he did a fantastic job.  Anyway, my notes from this presentation are below and the actual slides can be found here:

What is log management?

  • Ensuring your enterprise log data is accessible, easily retrievable and forensically sound
  • Properly dealing with mammoth amounts of event data stored in thousands of vendor-generated log files
  • Achieving compliance (SOX, HIPAA, PCI, FISMA), security, and IT operations usage of log data without breaking the bank
  • Log data now represents over 30% of ALL data generated by enterprises – creating a real need for log management
  • Dominant uses for log data include:
    • IT operations – systems/network health and availability
    • Security monitoring – perimeter or insider threat detection
    • Compliance monitoring – for regulations and industry standards

Why should I care?

  • Overwhelming flood of logs
  • Islands of defense
  • Week-long manual investigations
  • Massive false positives
  • Heterogeneous consoles
  • Many different formats
  • Regulations and their commonly used frameworks impose various requirements when it comes to log management
  • Regulatory mandates have further increased log retention requirements
  • Increased need to store both security and non-security data
  • There continues to be an increased emphasis on audit quality data collection
  • Regulatory requirements
    • SOX: 7yrs
    • PCI: 1yr
    • GLBA: 6yrs
    • EU DR Directive: 2yrs
    • Basel II: 7yrs
    • HIPAA: 6/7yrs
  • Compliance requirements
    • More logging
    • More types of devices
    • Higher volumes of log data
    • Extensive reporting requirements
    • Broader user access
    • Long term retention requirements
    • Audit quality data

What can effective log management do for me?

  • Self-managing & scalable
  • Automated & cost-effective audits
  • IT Operations SLA Efficiency
  • Compliance
  • Simplified Forensic Investigations

Best Practices – NIST 800-92

  • Common log management problems
    • Poor tools and training for staff
    • Laborious and boring
    • Reactive analysis reduces the value of logs
    • Slow response
  • Solutions
    • Establish log management policies & procedures
    • Prioritize log management appropriately
    • Create and maintain a secure log management infrastructure
    • Provide proper support for all staff with log management responsibilities
    • Establish standard log management processes for system-level admins
  • The directive to only log and analyze data that is of the greatest importance helps provide sanity to the logging process
  • Collecting and storing all data regardless of its usefulness increases complexity and deployment costs
  • Secure storage and transmission guideline directly points to the importance of secure and robust capture, transmission and storage of logs
  • Organizations should carefully review the collection architecture, transmission security and access control capabilities of SEM solutions to ensure support of this section of the standard
  • Filtering and aggregation are recommended as a means to only capture logs of security and compliance value based on the corporate retention policy
  • Guideline helps organizations support a “reasonableness” position in not collecting useless log data

Developing a Log Management Program

  • Understand your log management needs (regulatory and operational requirements)
  • Review NIST 800-92
  • Understand your environment
    • Lots of devices to collect logs from
    • Multiple locations with no IT staff
    • Collection agents are not an option
    • Network time settings
    • Low bandwidth links
  • Devices
    • Firewalls/VPN
    • IDS/IPS
    • Servers and desktop OS
    • Network equipment
    • Vulnerability assessment
    • Anti-virus
    • Applications
    • DBs
    • Physical infrastructure
  • Establish prioritized log management policies & procedures

Log Management Checklist

  1. Scalable architecture
  2. Minimal footprint at remote sites
  3. Transaction assurance
  4. Audit and litigation quality data
  5. Universal event collection
  6. Ease of manageability
  7. ….
22 May 2008

Log Management for Dummies (aka Splunk)

Logs are one thing that I think is severely underutilized by most systems administrators.  Most of us have taken the first step by actually logging the data, but neglect organizing it into any sort of manageable form.  You'll probably argue that any hardcore *nix admin could take the raw logs and, using grep, cut, awk, and a handful of other *nix power tools, turn them into consumable information, but that'll only get you so far.

Several months ago we evaluated a bunch of log management solutions with several goals in mind. We wanted a solution that was agile enough to be able to take in a wide variety of log formats as well as configuration files. It needed to shield sensitive information (passwords, credit card information, etc) from unauthorized users. It needed to provide us with a customizable interface where we could report on all of the log data it gathered. Lastly, it needed to provide our customers (developers) with the ability to self-service their own log files. After evaluating most of the major players in the log management arena, we found our ideal solution in a product called Splunk.

The first thing I noticed when evaluating Splunk was that they're not like everyone else.  They're not trying to sell you some sort of logging appliance, and they offer their software free for customers with 100 MB/day or less worth of logging.  Getting Splunk installed was a breeze; you can have it up and running in minutes.  It truly is Log Management for Dummies in that respect, but under the hood there is a highly configurable and customizable tool with an API that you could use to write your own applications to examine log files.

At this point I've mucked around with Splunk for a few months and our configuration is pretty intense. I've added in custom indexes to make my custom dashboards load faster. I've set Splunk up to create queryable metadata fields based on information in the logs. I've added filters for custom timestamps and auditing so we can tell if a log file has been modified. I've even set up a "deployment server" to distribute Splunk's configuration bundles to my various types of servers. This brings me to the one drawback of Splunk: Upgrading. Rumor has it that they are working on making it easier to upgrade from one version to the next, but for the time being it involves logging in to each server, stopping Splunk, upgrading the files, and restarting Splunk again. If you only had to upgrade every once in a while it would be fine, but they maintain a very active development team so I find myself constantly wanting to upgrade to get the latest bug fixes and features.

Other than that, Splunk does exactly what I tell it to do. It grabs all of our logs and presents them in a single intuitive interface. Think of it as a search engine for log and configuration files. Then, once I have the log data in front of me, I can create custom reports based on that data. If I want to, I can even alert based on information Splunk finds in my logs (send an e-mail to a developer every time their application throws an error message). Oh, did I mention that Splunk has a PCI Dashboard that you can install for free? Ask those other guys how much they charge for their PCI solution.
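For perspective, here is a rough sketch of the kind of home-grown script that this sort of alerting replaces: scan a log for new error lines since the last run and e-mail a developer about them.  The log path, offset file, and address are all placeholders I made up for the example.

```php
<?php
// Rough sketch of the home-grown alternative to a Splunk alert: scan a log for
// new ERROR lines since the last run and e-mail a developer about them.
// The log path, offset file, and e-mail address are all placeholders.
$logFile    = '/var/log/myapp/application.log';
$offsetFile = '/var/tmp/myapp-log.offset';
$developer  = 'developer@example.com';

$lastOffset = is_file($offsetFile) ? (int) file_get_contents($offsetFile) : 0;

$handle = fopen($logFile, 'r');
fseek($handle, $lastOffset);                       // pick up where we left off

$errors = [];
while (($line = fgets($handle)) !== false) {
    if (strpos($line, 'ERROR') !== false) {
        $errors[] = $line;
    }
}
file_put_contents($offsetFile, (string) ftell($handle));
fclose($handle);

if ($errors) {
    mail($developer, 'Application errors found', implode('', $errors));
}
```

Multiply that by every log format, every server, and every developer who wants their own filter, and the appeal of a search-engine approach becomes obvious.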

The next time you have some free time be sure to download Splunk and install it on one of your development servers. You won't be disappointed.