Much like many other companies these days, National Instruments hires many of our developers straight out of school. When engaging with these new hires, I often ask them what kind of security they learned at their university. In almost all cases I've found that the answer hasn't changed since I graduated back in 2002. Occasionally I'll get a developer who mentions one particular professor or class where they discussed secure coding practices, but most of the time the answer is "I didn't learn security in school". This absolutely kills me. It's like asking an architect to design a building without knowing anything about support structures and load distribution. The end result may look awesome on the outside, but the slightest breeze will knock it over. With computers embedded into virtually every aspect of our society, do you really want code that crumbles the moment a user does something other than what was explicitly intended?
This leads me to the conclusion that security should be considered a fundamental part of code development and not an afterthought. We should be teaching security to students at the university level so that when they graduate, corporations don't spend valuable time re-training them on proper development techniques. I've heard rumors of large companies like Oracle actually being able to influence college curricula by telling universities they simply won't hire developers without security training. Unfortunately, most companies aren't in a position to make demands like that, but it certainly wouldn't hurt to develop relationships with faculty at your local university and tell them what you'd like to see out of their students. I did some poking around on the internet and it seems like some professors are already starting to get the memo. For example, I found a great paper written by three professors at the USAF Academy Dept. of Computer Science called "Incorporating Security Issues Throughout The Computer Science Curriculum" where they say:
While the general public is becoming more aware of security issues, what are our universities doing to produce graduates ready to address our security needs? Computer science as a discipline has matured to the point that students are regularly instructed in software engineering principles--they learn the importance of life cycle issues in the development and maintenance of software. Where are they receiving similar instruction on security concerns in the software life cycle? The authors propose that security should be taught throughout every computer science curriculum--that security should always be a concern and should be considered in the development of all software just as structured programming and documentation are.
Gentlemen, I couldn't agree more. Security needs to be a foundational piece of every Computer Science program in the country. Not one class. Not one professor. Secure programming techniques need to be a consideration in every CS class at every university. Universities teach students how to write functions, create object-oriented code, and do proper documentation, but when graduates don't know the basic tenets of input validation, we have a real problem. If you agree with me, I challenge you to write to the dean of your local CS program and ask what they are doing to ensure graduates are familiar with secure coding practices. I'd be very interested in hearing what their responses are.
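To make "basic tenets of input validation" concrete, here's the kind of five-line lesson I wish every graduate had seen at least once: check untrusted input against a whitelist before using it. This is only an illustrative sketch; the class name and the username rule are my own, not from any particular curriculum:

```java
import java.util.regex.Pattern;

// Whitelist validation: accept only what you explicitly expect and
// reject everything else. The 3-20 word-character rule is arbitrary.
public class InputValidator {
    private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9_]{3,20}$");

    public static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }
}
```

A graduate who has internalized this pattern reaches for a whitelist by reflex instead of trying to blacklist every dangerous character after the fact.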
Notice anything wrong with this picture?
I was walking by one of the Iron Mountain Secure Shredding bins at work several months ago and noticed that the lock wasn't actually locked. Being the security conscious individual that I am, I tried to latch the lock again, but it was so rusted that it wouldn't close no matter how hard I tried. I couldn't just leave it like that, so I called the number on the bin's label and got an automated message saying they were no longer taking local calls, along with a different number to try. I called that number and the representative asked for my company ID number, which I had no idea how to find. She informed me that without that ID number I couldn't submit a support request. I explained that the bin contained sensitive personal and financial information and that the issue couldn't wait for some random company ID to be tracked down. Fortunately, she gave in and created the support ticket for me, saying that I should hear back from someone within four hours.
One week later, on a Friday, Iron Mountain finally called me back and said they would come replace the lock the following Monday before 5 PM. When the lock still hadn't been replaced by Monday evening, I called Iron Mountain again. Their records showed that a new lock had been delivered, but they had no idea where, and the signature was illegible. I work on a three-building campus with 14 floors between them and almost 3,000 people. If they can't tell me where the lock is, there's no way for me to track it down. They said they would investigate and call me back.
After not hearing back for a couple of days, I called them again. The woman I spoke with had no real update on the investigation. She said she would send another message "downstairs" and escalate to her supervisor. At this point it had been almost three weeks with sensitive documents sitting in a bin with a malfunctioning lock. The next day they called back and said they were never able to track down who the new lock was left with, so they would bring us a new one at no charge. Finally, after a total of 24 days with an unlocked Secure Shredding bin, Iron Mountain was able to replace the lock. Iron Mountain......FAIL.
Part of my new role as the Information Security Program Owner at NI is taking care of our regulatory compliance concerns, which means I spend quite a bit of time dealing with auditors. Now, auditors are nice people, and I want to preface what I'll say next by saying that I think auditors do perform a great service to companies. I'm sure that most of them are hard workers and understand compliance requirements probably better than I do, but they just don't understand security.
As a case in point, we're in the middle of our annual audit by one of those "Big Four" audit firms, which I won't name here to protect the innocent. I sent an email checking in with our auditors to make sure that they had everything they needed before we went into our four-day holiday weekend. They said that they had received everything they needed except for documentation on "privileged users from the current OS and Database environments" as well as "evidence of current password settings from the application servers, OS, and Database". We go through a round of translation from Auditorese to Techie and figure out that they want exports of some specific user, profile, role, and privilege tables from the database and copies of /etc/passwd, /etc/shadow, and /etc/group from the servers.
So we obtain the requested documentation and I shoot them back an email message to find out their proposed method for transferring the files. Secure FTP? No. PGP encryption? Nope. Their response back was astonishing:
How large do you think they'll be? Email should be fine.
Seriously? These are the guys that we're paying to verify that we're properly protecting our systems and they're suggesting that sending our usernames and password hashes via cleartext email is an appropriate method of file transfer. I respond back:
I'm not really concerned about the size of the files, but rather, the data that they contain. Sending files containing the users, groups, and password hashes for our financial systems via cleartext is probably not a good plan considering the point of this process is protecting that data.
And they respond with:
Whatever you'd like Josh. As long as you have the files as of today, we're good.
So now I'm convinced that auditors (or at least these auditors) view security as nothing more than a checklist. The people telling me what I need to do to protect my systems really have no clue about the fundamentals of security. If it's not on their checklist, then it must not be important. In this particular situation it may be easier or more convenient to send the documents via email, but any security professional worth their salt would tell you that's neither secure nor appropriate for that data. Either our auditors hold themselves to a very different standard than the rest of us security professionals, or they just don't understand security unless it's on a checklist.
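For the record, protecting a file before it ever touches email is cheap. In practice PGP/GPG or a secure file drop is the usual answer, but even the bare JDK can do authenticated encryption; here's a hedged sketch using AES-GCM (the class is my own invention, and in real life the key would have to be exchanged out of band):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;

// Authenticated encryption with the stock JDK: anything would have been
// better than sending password hashes over cleartext email.
public class FileProtector {
    public static byte[] encrypt(SecretKey key, byte[] iv, byte[] plaintext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return cipher.doFinal(plaintext);
    }

    public static byte[] decrypt(SecretKey key, byte[] iv, byte[] ciphertext) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return cipher.doFinal(ciphertext);
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] iv = new byte[12];               // 96-bit nonce; never reuse with the same key
        new SecureRandom().nextBytes(iv);
        byte[] secret = "user:hash".getBytes(StandardCharsets.UTF_8);
        byte[] wire = encrypt(key, iv, secret); // this ciphertext is what would travel over email
        System.out.println(Arrays.equals(decrypt(key, iv, wire), secret));
    }
}
```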
This presentation was by Rohit Sethi, the Project Leader for the Secure Pattern Analysis Project at OWASP and he works at Security Compass, a security analysis and training company. My notes from the session are below:
- Before anyone starts building complex systems, they need to design.
- We create threat models on completed designs.
- What about during design?
- Book: "Core J2EE Patterns: Best Practices and Design Strategies"
- If you use J2EE development, chances are you're using patterns documented here
- Core J2EE patterns are used extensively
- Patterns are used in JSF, Velocity, Struts, Tapestry, Spring, and Proprietary Frameworks
Example from the project - patterns analyzed to see how they can implement security controls, or fall prey to vulnerabilities, such as:
- Synchronization Tokens as Anti-CSRF Mechanism
- Page-level authorizations
- XSLT and XPath vulnerabilities
- XML Denial of Service
- Disclosure of information in SOAP faults
- Publishing WSDL files
- Unhandled commands
- Unauthorized commands
- Analyze patterns for security pitfalls to avoid
- Determine how patterns can implement security controls
- Provide advice portable to most frameworks
A security pattern is not the same as a security analysis of a pattern.
- Designing new web application frameworks (make the next generation of frameworks secure by default)
- Designing new apps that use the patterns
- Source code review of existing apps
- Runtime assessment of existing apps
- Integrate with threat modeling of new or existing apps
You can help:
- Tell developers
- Improve the analysis
- Add code review and examples to the existing pattern book
- Look at other pattern books to see if there are other patterns that we should analyze
- New web application framework idea + Design-time security analysis = Secure-by-default web application framework
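The "synchronization tokens as anti-CSRF mechanism" item above is simple to sketch. This is my own toy illustration of the idea, not code from the OWASP project: issue an unguessable per-session token, render it into every form, and reject any state-changing request that doesn't echo it back:

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Synchronization-token CSRF defense: a forged cross-site request can't
// know the random token tied to the victim's session, so it fails validation.
public class CsrfTokens {
    private static final SecureRandom RNG = new SecureRandom();
    private final Map<String, String> tokensBySession = new ConcurrentHashMap<>();

    public String issue(String sessionId) {
        byte[] raw = new byte[32];
        RNG.nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        tokensBySession.put(sessionId, token);
        return token; // the server renders this into a hidden form field
    }

    public boolean isValid(String sessionId, String submittedToken) {
        String expected = tokensBySession.get(sessionId);
        return expected != null && expected.equals(submittedToken);
    }
}
```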
I've been really surprised that for as long as I've been active with OWASP, I've never seen a proxy presentation. After all, proxies are hugely beneficial in doing web application penetration testing and they're really not that difficult to use. Take TamperData, for example. It's just a Firefox plugin, but it does header, cookie, GET, and POST manipulation just as well as WebScarab. Or Google's ratproxy, which works in the background while you browse around QA'ing your web site and gives you a nice actionable report when you're done. I decided it was time to educate my peers on the awesomeness of proxies.
This past Tuesday I presented to a crowd of about 35 people at the Austin OWASP meeting. The title of my presentation was "Using Proxies to Secure Applications and More". Since so many people came up to me afterward telling me what a great presentation it was and how they learned something they could take back to the office, I decided (with a little insistence from Ernest) that it was worth putting up on SlideShare and posting to the Web Admin Blog.
The presentation starts off with a brief description of what a proxy is, then covers the different types of proxies. The bulk of the presentation was me giving examples and demonstrating the various proxies: anonymizing proxies, reverse proxies, and intercepting proxies. While my slides can't substitute for the actual demo, I did try to note in them which tool I used for each demo. If you have any specific questions, please let me know. All that said, here's the presentation.
This presentation was by Hans Zaunere, Managing Member, and was entitled "PHundamental Security - Ecosystem Review, Coding Secure with PHP, and Best Practices". Take a look at http://www.nyphp.org/phundamentals/ for the ongoing guide and best practices. PHP security guru Stefan Esser also recently presented an excellent talk at ZendCon.
Security fundamentals are common across the board. Different environments have different requirements (desktop apps differ from web/internet apps). Web/internet apps have a huge number of touch points. PHP isn't responsible for all of them, but the developer is. Different languages handle things in different ways, and PHP is no different, except that "more internet applications speak PHP than any other". PHP gets a bad rap: a low barrier to entry and great flexibility. There have been some mistakes, like weak default configurations, being too forgiving for amateurs, the infamous magic_* of PHP, and the PHP Group arguing over what counts as a security flaw.
It's easy to shoot yourself in the foot with C. In C++ it's harder to shoot yourself in the foot, but when you do, you blow off your whole leg. - Bjarne Stroustrup, Inventor of C++
Three zones of responsibility. PHP is effectively a wrapper around libraries and data sources. Many external dependencies and touch points.
- Poorly written code by amateur developers with no programming background. This is the primary cause of the security ecosystem around PHP. Laziness - letting PHP do its magic_*. "Program smart"
- Extensions and external libraries. PHP's greatest asset. Sometimes library binding is faulty. Sometimes the external library has faults, or behaves in an unforeseen way when in a web environment - possible in any environment. Know what extensions you're using, use the minimal number of extensions, and be aware of the environment they were originally designed for. "Know thy extensions"
- PHP Core - "PHP". Secunia says 19 advisories for PHP between 2003-2008. Java had 38+ and Ruby 11+. "The list goes on - PHP is not alone". One advisory in 2008. "More internet applications speak PHP than any other"
- Best practices are common to any well run enterprise environment. PHP is growing into this environment very quickly.
- Web security is largely about your data and less about exploits in the underlying platform. Buffer overflows aren't so much the hot-topic.
- Installation - Avoid prepackaged installs, including RPMs, .deb, etc. If you use them, review their default deployment. Installation touch points also typically include MySQL/Apache.
- Configuration - Use php.ini-recommended. Better yet, take the time to know what you're doing and tune configuration files yourself.
- Don't make PHP guess what you mean. Be explicit with variables and types. Don't abuse scope - know where your variables come from. Avoid magic_* and implicitness - BE EXPLICIT.
- Keep code small, organized, and maintainable. Use OOP techniques to enforce code execution paths. Use includes to keep things organized.
- Don't use super-globals directly - wrap for protection.
Be aggressive - B.E. aggressive
- It's always about data
- One of PHP's greatest strengths - loose typing - is also its biggest weakness. Don't make PHP guess what you mean.
- Cast variables; know their type and the data you expect. Let PHP do its magic only when you want it to - not by chance.
- Keep tabs on your data's path, lifecycle, and type. Know where it came from, what it's doing, and where it's going. Filter/escape/cast and throw exceptions every step of the way.
- Input validation, output validation, CASTING.
- Don't be lazy, be explicit, use OOP.
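None of that casting advice is PHP-specific. Here is the same "be explicit, don't let the runtime guess" idea sketched in Java; the helper name and its rules are my own invention:

```java
// Validate and cast untrusted input explicitly, failing at the first surprise
// instead of letting the runtime coerce a value into something unexpected.
public class StrictInput {
    // Parse a request parameter that must be a small positive integer (e.g. a page number).
    public static int requirePositiveInt(String raw, int max) {
        if (raw == null || !raw.matches("\\d{1,9}")) {
            throw new IllegalArgumentException("expected a positive integer, got: " + raw);
        }
        int value = Integer.parseInt(raw); // explicit cast from string to int
        if (value < 1 || value > max) {
            throw new IllegalArgumentException("out of range: " + value);
        }
        return value;
    }
}
```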
Casting isn't just for movie producers
- No system has a single security pressure point
- Don't take the easy way out just because you can
- Put PHP in the same well managed enterprise environment as other technologies
- PHP/AMP respond very well to TLC
PHP is just part of the ecosystem, and there is awareness and experience on the PHP side. The yin/yang of PHP's history overshadows reality. Stand by PHP and it'll stand by you. Web/internet applications are deep and complex: users, interoperability, data, architecture, support, compliance. Phishing, hijacking, spam, social engineering - BROWSERS!
PHP is the least of your worries
This presentation was by Jeff Williams, OWASP Chair, on the Enterprise Security API.
Vulnerabilities and Security Controls
- Missing - 35%
- Broken - 30%
- Ignored - 20%
- Misused - 15%
Goal is to enable developers. Need to give them hands-on training, a secure coding guideline, and an Enterprise Security API.
The problem with Security Libraries: overpowerful, incomplete, not integrated, broken, can't update, custom.
Enterprise Security API (ESAPI) includes Authenticator, User, AccessController, AccessReferenceMap, Validator, Encoder, HTTPUtilities, Encryptor, EncryptedProperties, Randomizer, exception handling, Logger, IntrusionDetector, and SecurityConfiguration. It's built on top of your existing enterprise services or libraries.
- Input Validation - validation engine and decoding engine that will take input and provide safe output for web pages
- Output Encoding - you need to use the right encoding for the context where you're placing the output
- Authentication - creates a user object and functions to login() or logout(). Provides additional functionality for encrypted cookies, changing SESSIONID, remember me cookies, etc.
- Access Control - provides functionality to check if a user is authorized for URLs, functions, data, services, or files.
- Direct Object Reference Protection - use an access reference map that does an indirect translation between an object and its reference. Use the getDirectReference() and getIndirectReference() functions.
- Error, Logging, and Detection - Configurable thresholds. Responses are log intrusion, logout user, and disable account. User object is available anywhere in the application so the logger links the messages logged to a user. Exceptions sent to an intrusion detector which has thresholds set.
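The access reference map described above is easy to sketch. The method names below mirror the getDirectReference()/getIndirectReference() pair from the talk, but this is a simplified toy implementation of the concept, not actual ESAPI code:

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Indirect object references: the user only ever sees a random token, and
// the server translates it back to the real object it stands for.
public class AccessReferenceMap {
    private static final SecureRandom RNG = new SecureRandom();
    private final Map<String, Object> indirectToDirect = new ConcurrentHashMap<>();
    private final Map<Object, String> directToIndirect = new ConcurrentHashMap<>();

    public String getIndirectReference(Object direct) {
        return directToIndirect.computeIfAbsent(direct, d -> {
            byte[] raw = new byte[16];
            RNG.nextBytes(raw);
            String indirect = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
            indirectToDirect.put(indirect, d);
            return indirect; // safe to expose in a URL or form field
        });
    }

    public Object getDirectReference(String indirect) {
        Object direct = indirectToDirect.get(indirect);
        if (direct == null) {
            throw new IllegalArgumentException("unknown reference"); // likely tampering
        }
        return direct;
    }
}
```

Because the browser only ever sees the random token, a user who tampers with it gets an exception instead of someone else's record.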
OWASP ESAPI Covers Majority of OWASP Top Ten
- A1. XSS - Validator, Encoder
- A2. Injection Flaws - Encoder
- A3. Malicious File Execution - HTTPUtilities (Safe Upload)
- A4. Insecure Direct Object Reference - AccessReferenceMap, AccessController
- A5. CSRF - User (CSRF Token)
- A6. Leakage and Improper Error Handling - EnterpriseSecurityException, HTTPUtils
- A7. Broken Authentication and Sessions - Authenticator, User, HTTPUtils
- A8. Insecure Cryptographic Storage - Encryptor
- A9. Insecure Communications - HTTPUtilities (Secure Cookie, Channel)
- A10. Failure to Restrict URL Access - AccessController
MITRE found that all application security tool vendors' claims put together cover only 45% of the known vulnerability types (695). They found very little overlap between tools, so to get 45% you need them all (assuming their claims are true). This means that at least 55% is not covered by tools.
The latest version (1.3.1) was released in September 2008, and they are holding a summit later this year to determine if they got everything right. It's in active development: Java, .NET, PHP, classic ASP, rich client extensions, web service extensions, and framework (Struts) integration.
Released under the BSD license, so it should be very easy for you to use it in your applications.
Project Home Page: http://www.owasp.org/index.php/ESAPI
Expert advisory/design/implementation team that has collectively reviewed over 100 million lines of code. ~600 JUnit test cases. FindBugs, PMD, Ounce, and Fortify clean. Code review by several Java security experts. Penetration test of sample applications. Full Javadoc for all functions.
The presentation will be posted on the homepage. It includes a list of banned APIs that ESAPI replaces and an example of enterprise cost savings. All of ESAPI is only 5,000 lines of code. They are building an ESAPI Swingset that demos both insecure (what can go wrong) and secure (using ESAPI) programming, with a good tutorial on how to use it. The login module shows the last successful login, last failed login, and number of failed logins, and enforces a strong password policy.
As I'm preparing for my trip to New York for the OWASP AppSec Conference, I came across a timely article on the risks involved with using a hotel network. The Center for Hospitality Research at Cornell University surveyed 147 hotels and then conducted on-site vulnerability testing at 50 of them. Approximately 20% of those hotels still run basic Ethernet hub-type networks, and almost 93% offer wireless. Only six of the 39 hotels that had WiFi networks were using encryption (see my blog post on why people are still using WEP for why this matters). What does this mean for you, Joe User? It means that both your personal and company information is at risk any time you connect to those networks. The next time you're surfing the web, start paying attention to all of the non-SSL links (http:// versus https://) that you visit. Then think about the information that you are passing along to those sites. Are you signing in with a user name and password? Entering credit card information? Whatever it is, you'd better make sure it's something you wouldn't feel bad about winding up on a billboard in Times Square, because that's about how exposed your transmission could be.
Before you get too concerned, there are a few things you can do to try to prevent this. First, DO NOT visit any links where you transmit information unencrypted. This is just asking for trouble. Since many man-in-the-middle type attacks can still be used to exploit this, my second suggestion is to use some sort of VPN tunnel. Whether it's a corporate VPN or just a freebie software VPN to your network back home, this allows you to encrypt all traffic over the untrusted hotel network. Make this your standard operating procedure anytime you connect to an untrusted network (not just a hotel) and you should keep your data much safer. Lastly, please be sure to have current firewall and anti-virus software on the computer you are using to connect to the untrusted network. The last thing you want is to get infected by some worm or virus just by plugging in to the network.
One other thing that deserves mentioning: if you don't absolutely have to use the internet on an untrusted network, don't. Obviously there are times when you need access to do work, pay bills, etc., but if you can save those tasks until you reach a more familiar (and hopefully safer) network, that is far and away the best way to keep yourself and your data safe.
Since Michael Howard moved from Redmond to Austin, I've had the privilege of seeing him present several times now. This is the guy who literally wrote the book on writing secure code and the secure development lifecycle. He is a fantastic speaker and I'd highly recommend checking him out if you ever get the opportunity. Yesterday I heard that he was speaking on securing your code at the San Antonio OWASP meeting, so I decided it was worth making the drive down to see his presentation. So, I give you Michael Howard's Top 10 Strategies to Secure Your Code, straight out of one of his Microsoft TechNet presentations.
Michael began by giving us the definition of a secure system. He said "A secure system does what it's supposed to do and no more." It's such a simple concept, but in practice such a hard thing to achieve. Here are his suggestions on how to accomplish that: