This presentation was on "Cryptography for Penetration Testers" and was by Chris Eng, the Senior Director of Security Research at VeraCode.
How much do you really have to know about cryptography in order to detect and exploit crypto weaknesses in web apps?
- Learn basic techniques for identifying and analyzing cryptographic data
- Learn black-box heuristics for recognizing weak crypto implementations
- Apply techniques
The Crypto that Matters in 6 Short Slides
Types of Ciphers
- Block Ciphers: Operates on fixed-length groups of bits, called blocks. Block sizes vary depending on the algorithm. Several different modes of operation for encrypting messages longer than the basic block size. Example ciphers include DES, 3DES, Blowfish, AES
- Stream Ciphers: Operates on plaintext one bit at a time
Block Ciphers: Electronic Code Book (ECB) Mode
- Fixed-size blocks of plaintext are encrypted independently
- Each plaintext block is substituted with ciphertext block, like a codebook
- Weaknesses: Structure in plaintext is reflected in ciphertext. Ciphertext blocks can be modified without detection.
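Both weaknesses follow from the fact that ECB encrypts each block independently. A minimal sketch, using a keyed hash as a stand-in for a real block cipher (the mode's determinism is what matters here, not the cipher):

```python
import hashlib

BLOCK_SIZE = 16

def toy_ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """ECB: each block is encrypted independently with the same key, so
    identical plaintext blocks yield identical ciphertext blocks.
    (A keyed hash stands in for a real block cipher, illustration only.)"""
    assert len(plaintext) % BLOCK_SIZE == 0
    out = b""
    for i in range(0, len(plaintext), BLOCK_SIZE):
        block = plaintext[i:i + BLOCK_SIZE]
        out += hashlib.sha256(key + block).digest()[:BLOCK_SIZE]
    return out

ct = toy_ecb_encrypt(b"secret key", b"A" * 16 + b"B" * 16 + b"A" * 16)
blocks = [ct[i:i + 16] for i in range(0, len(ct), 16)]
print(blocks[0] == blocks[2])  # True: plaintext structure shows through
```

Because the first and third plaintext blocks are equal, so are the first and third ciphertext blocks, which is exactly the structure leak described above.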
Block Ciphers: Cipher Block Chaining (CBC) Mode
- Each block of plaintext is XORed with the previous ciphertext block before being encrypted
- Change of message affects all following ciphertext blocks
- Initialization Vector (IV) is used to encrypt first block
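The chaining property can be sketched the same way; below, a keyed hash again stands in for a real block cipher, and changing a single plaintext byte changes every subsequent ciphertext block:

```python
import hashlib

BLOCK = 16

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # A keyed hash stands in for a real block cipher; only the mode matters here.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        # Each plaintext block is XORed with the previous ciphertext block
        prev = toy_block_encrypt(key, xor(plaintext[i:i + BLOCK], prev))
        out += prev
    return out

iv = b"\x00" * BLOCK
ct1 = cbc_encrypt(b"key", iv, b"A" * 48)
ct2 = cbc_encrypt(b"key", iv, b"B" + b"A" * 47)  # change only the first byte
same = [ct1[i:i + BLOCK] == ct2[i:i + BLOCK] for i in range(0, 48, BLOCK)]
print(same)  # [False, False, False]: the change propagates to all later blocks
```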
Stream Ciphers
- Plaintext message is processed byte by byte (as a stream)
- Key scheduler algorithm generates a keystream from a key and an Initialization Vector (IV)
- Encrypt by XORing plaintext with the generated keystream
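A minimal sketch of that XOR relationship (the keystream generator below is a hash-in-counter-mode stand-in, not any real cipher's key scheduler):

```python
import hashlib
from itertools import count

def keystream(key: bytes, iv: bytes, length: int) -> bytes:
    """Stand-in keystream generator (hash in counter mode), for
    illustration only; real ciphers like RC4 use their own schedulers."""
    out = b""
    for counter in count():
        if len(out) >= length:
            break
        out += hashlib.sha256(key + iv + counter.to_bytes(4, "big")).digest()
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

pt = b"attack at dawn"
ks = keystream(b"key", b"iv", len(pt))
ct = xor(pt, ks)                # encrypt: plaintext XOR keystream
print(xor(ct, ks))              # decrypt: XOR again recovers the plaintext
```

Encryption and decryption are the same operation, which is why keystream reuse (seen in the stream cipher case study later) is so devastating.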
Common Crypto Mistakes
- Insecure cipher mode (usually ECB)
- Inappropriate key reuse
- Poor key selection
- Insufficient key length
- Insecure random number generation
- Proprietary or home-grown encryption algorithms (Don't do this ever!)
Dealing with Gibberish Data
What do you do when you are pen testing a web application and you encounter data that is not easy to interpret?
- Hidden fields
- Query string parameters
- POST parameters
How random is it?
- Output of cryptographic algorithms should be evenly distributed, given a sufficiently large sample size.
- Tools such as ENT (http://www.fourmilab.ch/random) will calculate entropy per byte, chi-square distribution, arithmetic mean, serial correlation, etc
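ENT's entropy-per-byte figure is easy to reproduce yourself; here is a minimal Shannon-entropy sketch:

```python
import math
import os
from collections import Counter

def entropy_per_byte(data: bytes) -> float:
    """Shannon entropy in bits per byte, one of the metrics ENT reports.
    Well-encrypted data should approach 8.0; structured data scores lower."""
    counts = Counter(data)
    n = len(data)
    ent = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return abs(ent)  # abs() normalizes -0.0 for single-symbol input

print(entropy_per_byte(b"AAAA" * 256))      # 0.0 (no randomness at all)
print(entropy_per_byte(os.urandom(65536)))  # close to 8.0
```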
Is the length a multiple of a common block size?
- Indicates that the application may be using a block cipher
Is the length the same as a known hash algorithm?
- For example, MD5 is usually represented as 32 hex characters
- May also indicate the presence of an HMAC
- Still may be worthwhile to hash various permutations of known data in case a simple unkeyed hash is being used
Does the length of the token change based on the length of some value that you can supply?
For a block cipher, you can determine the block size by incrementing input one byte at a time and observing when the encrypted output length jumps by multiple bytes (i.e., the block size)
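That probing loop can be sketched as follows; `oracle` is a hypothetical stand-in for whatever application endpoint returns the encrypted token:

```python
def oracle(user_input: bytes) -> bytes:
    """Hypothetical application endpoint: returns a token whose length is
    the input padded up to the cipher's block size (16 here)."""
    BLOCK = 16
    padded_len = (len(user_input) // BLOCK + 1) * BLOCK  # PKCS#7-style: always pads
    return b"\x00" * padded_len  # contents irrelevant; we only observe length

def detect_block_size(oracle) -> int:
    base = len(oracle(b""))
    for i in range(1, 64):
        length = len(oracle(b"A" * i))
        if length != base:
            return length - base  # the length jump equals the block size
    raise ValueError("no length jump observed")

print(detect_block_size(oracle))  # 16
```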
How does the token change in response to user-supplied data?
- Figure out how changing different parts of the input affects the output
- Is more than one block affected by a single character change in the input?
Deeper Block Cipher Inspection
Are there any blocks of data that seem to repeat in the same token or over multiple tokens?
- Possibly ECB mode; this doesn't happen by coincidence
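A captured token can be checked for this pattern in a few lines (the token below is fabricated so the repetition is visible):

```python
import base64
from collections import Counter

def repeated_blocks(token_b64: str, block_size: int = 8) -> int:
    """Count ciphertext blocks that appear more than once - a strong hint
    of ECB mode, since identical plaintext blocks encrypt identically."""
    raw = base64.b64decode(token_b64)
    blocks = [raw[i:i + block_size] for i in range(0, len(raw), block_size)]
    return sum(c - 1 for c in Counter(blocks).values() if c > 1)

# Fabricated token with one repeated 8-byte block for illustration:
token = base64.b64encode(b"12345678" + b"ABCDEFGH" + b"ABCDEFGH" + b"87654321").decode()
print(repeated_blocks(token))  # 1
```

In practice you would run this over many captured tokens and several candidate block sizes (8 and 16 being the common ones).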
Context: A public-facing web portal for a large ISP used an encrypted cookie to authenticate identity, and a new cookie was issued on each request. Base64 decoding the EE cookies gave data whose length divided evenly by 8, suggesting 8-byte blocks. Noticed some repetition in the same positions. The only variable blocks were the last two (possibly a "last accessed" timestamp or similar timeout mechanism). Register a new account with a username of 'c' x 32, the maximum length permitted, and observe the value of the EE cookie.
'c' x 32 is Perl notation for "cccccccccccccccccccccccccccccccc"
The token is longer, meaning the username is probably stored in the cookie. Still noticed repetition in the same positions. Register another account with a username of 'c' x 16 and compare to the EE cookie generated in the previous step. We didn't see two identical blocks for 'c' x 16, or four identical blocks for 'c' x 32. The reason is padding: the username doesn't align perfectly with the block offset. We want to figure out what position in the cookie the username is located. Additional user accounts were created with specific usernames in order to determine if there is any initial padding in the first block. Now you know where the username is in the ciphertext.
Able to successfully subvert the authentication mechanism without any knowledge of the algorithm or the key, based solely on observed patterns in the ciphertext. The root cause was the insecure cipher mode and the lack of a verification mechanism. ECB mode should not be used (use CBC instead).
Token values observed in URLs. Changed every time we logged on to the application; never the same for any two sessions or any two users. Base64 decoded the values for several different "stmt" tokens. Statement numbers were displayed in the browser, so we looked for correlations between statement number and ciphertext. Conclusion: it looks like a stream cipher. Use XOR to calculate 10 bytes of the keystream based on the known plaintext (i.e., the statement number). Now try the same thing against one of the other collected tokens, such as the one called "Ctxt". Get ASCII text that allows you to infer what it would say. Expand it out more and more to get the keystream. Repeat over and over until you have enough of the keystream to decrypt anything in the application.
Through this iterative process, we can obtain the entire keystream (or rather, a sufficient amount of the keystream to encrypt and decrypt all of the ciphertext we encounter). We can then replace the statement number with another valid statement number and view the contents.
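The XOR arithmetic behind this is simple. A sketch under the assumption that the application reuses one keystream across tokens (the keystream and token values below are invented):

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical scenario: the app reused one keystream for every token.
keystream = b"\x13\x37\xc0\xde\xfa\xce\xb0\x0c\x11\x22"

stmt_token = xor(b"1234567890", keystream)  # "stmt" token, plaintext known
ctxt_token = xor(b"ACCT=99881", keystream)  # another token, plaintext unknown

# Ciphertext XOR known plaintext recovers the keystream bytes...
recovered_ks = xor(stmt_token, b"1234567890")
# ...which then decrypt any other token built from the same keystream.
print(xor(ctxt_token, recovered_ks))  # b'ACCT=99881'
```

No knowledge of the key or the algorithm is needed, only one known plaintext and the reused keystream.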
Able to subvert the encryption mechanism without any knowledge of the algorithm or the key based solely on observed patterns in the ciphertext. They were using RC4 with a unique key generated for each user session. Root cause of the vulnerability is the re-use of the keystream.
This presentation was by John Steven who is the Senior Director of Advanced Technology Consulting at Cigital, Inc.
What is a threat?
- An agent who attacks you?
- An attack?
- An attack's consequence?
- A risk?
What is a threat model?
- Depiction of the system's attack surface, threats who can attack the system, and assets threats may compromise.
- Some leverage risk management practices. Estimate probability of attack. Weigh impact of successful attack.
Elements of a threat model
- Structural view
- Threat actors
- Attack vectors
- Capability: Access to the system, able to reverse engineer binaries, able to sniff the network
- Skill Level: Experienced hacker, script kiddie, insiders
- Resources and Tools: Simple manual execution, distributed bot army, well-funded organization, access to private information
- Threats help encourage thorough thought about intentions for misuse and help determine "out of bounds" scenarios.
A Few Words on STRIDE
- A conceptual checklist backed by data flow diagrams
- Aggregate attack possibilities
- Use OR, AND
- Allow for decoration (probability, cost, skills required, etc)
Threat Modeling as a Process
- Use threat modeling to identify where potential threats exist relative to the architecture, how threats escalate privilege, and what the vectors of attack are, and to identify components and assets worth protecting.
Leading Up to Threat Modeling
- Identify threats
- Enumerate doomsday scenarios
- Document misuse/abuse
- Diagram structure, assets
- Annotate diagram with threats
- Enumerate attack vectors
Input: Goals, Doomsday Scenarios
Misuse/Abuse Cases (use case view and component view)
Inputs: Security Requirements (specified security features - "128 bit encryption", "software security != security software")
Anchor in Software Architecture
Consider where attacks occur:
- Top-down: enumerate business objects (sensitive data, privileged functionality)
- Bottom-Up: enumerate application
Output: Security Assessment & Test Design. Threat models drive assessments, Test design. Establish rules of engagement. Prioritize areas of interest. Manage a team in risk-based fashion. Establish a single tie between vulnerability and control.
Application Structure: No "One Size Fits All"
Application Structure: Topology - Coloration shows authorization by role. Arrows indicate resolution of principal/assertion propagation. Use structure to separate privilege.
Application Structure: Components - Component diagrams show critical choke points for security controls (input validation, authentication, output encoding).
Application Structure: Frameworks - Showing frameworks indicates where important service contracts exist "up" and "down".
Assets: Flow - Assets exist not only in rest, but also flow through the system. Use different types of flags to represent data flow of assets.
Use different colored arrows to represent each different attack vector.
Target Using Layered Attacks: Bootstrap later attacks with those that "deliver". Use one layer to exploit another (net, app). Combine attacks to reach desired target.
- Base threat model in software architecture
- When specific use cases and high-level architecture are defined: inventory roles and entitlements (if such an inventory doesn't exist), and inventory assets, sensitive data, and privileged components
- Enumerate initial attack vectors. Use common low hanging fruit.
- Elaborate more attacks. Find opportunities for privilege escalation. Layer attacks to target or "hop" to assets. Fill in gaps by "inventing" attacks.
- Use threat modeling to drive security testing
This presentation was by Jian Hui Wang, who is a security professional but "a nobody in NYC". She talked about Lotus Notes/Domino web application architecture and security features, common web application development mistakes and fixes, and test methodology.
Lotus Notes/Domino History
Lotus Notes is the client and Domino is the server. Supports multiple protocols with one interface (HTTP, LDAP, SMTP/POP/IMAP, file sharing). Strong on workflow and collaborative applications. Used by .gov, .edu, .com. A Google search shows 66 million Notes databases facing the internet. People use it because it's easy to develop and deploy a simple application, it offers granular access control and good logging, and it integrates well with e-mail.
A Notes database is the building block of a Domino application (.nsf or .ntf). A Notes database is a container for data (documents, messages, web pages) and design elements (forms, pages, views, folders, navigators, agents, framesets, outlines).
Two components in Domino server architecture. There is an HTTP Server and a Domino Engine (URL Parser, Command Handler, and Database).
Web Access Syntax
- Database = Notes Database
- NotesObject = the web accessible design element
- Action = the action on NotesObject
- Arguments = the qualifiers for the action (optional)
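Putting those four parts together, the general Domino URL pattern looks like this (the host, database, and design-element names in the second line are illustrative):

```
http://Host/Database/NotesObject?Action&Arguments
http://www.example.com/sales.nsf/ByDate?OpenView&Count=30
```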
Notes Database Access Control List (ACL)
- Define users and groups access privileges on the database
- Seven access levels (manager, designer, editor, author, reader, depositor, and no access)
- Eight access options for each level (create/delete documents, create/delete folders/views, create/delete agents, create/delete public documents)
- Anonymous and -Default-
- Maximum internet and password access: only works for name-password authentication but not for certificate authentication. A web user cannot get access greater than the "Maximum" setting even if the access explicitly given in the ACL is higher
- Further restriction can be done by conjunction with reader field, author field, and access list of documents for granular read and write access control
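The "lower of the two" rule for web users can be sketched as a lookup over the ordered access levels (a simplification for illustration, not Domino's actual enforcement code):

```python
# Notes ACL levels, ordered lowest to highest
LEVELS = ["no access", "depositor", "reader", "author", "editor", "designer", "manager"]

def effective_web_access(acl_level: str, max_internet_access: str) -> str:
    """A web user authenticated by name and password gets the LOWER of the
    explicit ACL level and the database's 'Maximum internet name & password'
    setting (sketch of the rule described above)."""
    return LEVELS[min(LEVELS.index(acl_level), LEVELS.index(max_internet_access))]

print(effective_web_access("designer", "editor"))  # editor: capped by the maximum
print(effective_web_access("reader", "editor"))    # reader: ACL already lower
```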
Notes Web Authentication
- Anonymous user: one who does not have a Person document in the Domino Directory (names.nsf)
- Authentication occurs if anonymous access is disabled on server configuration document and Notes objects
- Name-password authentication: user/pass are authenticated to Person document and internet password in Domino directory (names.nsf). Basic authentication and session-based authentication. Internet password lockout function (Notes 8 only)
- SSL client certificate authentication
Common Security Mistakes in Development
- Unauthorized Access: Anonymous access. Anonymous privilege is assigned to Default access level if there is no anonymous group explicitly set. Default access level is Designer and Maximum Internet and Password Access is Editor on most built-in templates. Forceful browsing. Solutions are to set up an anonymous group and assign it "no access", and to review the ACLs of all databases and confidential documents.
- Using Default Objects (Databases): Default databases include statrep.nsf, schema.nsf, reports.nsf, names.nsf, log.nsf, events.nsf, doladmin.nsf, dbdirman.nsf, certsrv.nsf, certlog.nsf, admin4.nsf, ... Anonymous users should not be allowed to access these databases.
- Default Objects (view): $DefaultView?OpenView, $DefaultNav?OpenNav, $DefaultForm?OpenForm, help?OpenHelp, $about?OpenAbout, $searchform?searchdomain, $searchform?searchsite, $searchform?searchview, $Icon?OpenIcon, $first, $file. Solutions are to use the URL redirection and mapping on the server document, customize the default pages, and apply the appropriate access control.
- SQL Injection: Places that process user input (@Commands, WebQueryOpen, WebQuerySave, WebQueryClose, @URLQueryString, OpenAgent, RunAgent). Solution is input validation in fields via formulas or LotusScript
- Cross Site Scripting: Most cross site scripting vulnerabilities are persistent. Solutions are to use input validation or to HTML-encode output.
- Session Management: By default uses basic authentication. Username and password are sent in clear-text in the packet of every request. Solution is to configure the server document to use session-based authentication. Do not append sensitive data to the query string.
- Information Leakage: Hard coding username and password. Solutions are to remove the sensitive information from the source code and log and customize the error message.
- Operating System Interaction: LotusScript has system commands such as Shell, OSLoadProgram, OSLoadLibrary, FileCopy, Open, Kill, Get, Input, Close. Solution is to hardcode the path and validate the filename input.
Testing security is challenging but it can be done:
- Lotus Notes Designer (Design Synopsis)
- A good text editor
- Secure Domino Application
- Lotus Security Handbook
I was originally planning on going upstairs for the SaaS Security presentation, but I had to come downstairs again to get my lunch and this topic seemed interesting, especially given the prevalence of cross site scripting in websites (see OWASP Top 10). The presentation was by Arshan Dabirsiaghi, the director of research at Aspect Security. He actually began by talking about clickjacking and said that Jeremiah Grossman and RSnake gave up enough clues for him to figure out the exploit as it applies to Adobe Flash; he'd rate the vulnerability a 7/10 in Flash and an overall 10/10. Example non-weaponized exploit at http://i8jesus.com/stuff/clickjacking/test1.html using iframes and CSS. Suggested fix is to apply framebreakers to your page.
Is an XSS worm really a worm?
5 components of a worm:
- Reconnaissance - "[the worm] has to hunt out other network nodes to infect"
- Attack - "[components] used to launch an attack against an identified target system"
- Communication - "nodes in the network can talk to each other"
- Command - "nodes in the worm network can be issued operation commands"
- Intelligence - "the worm network needs to know the location of the nodes as well as characteristics about them"
Short answer: 3/5 - probably
How are XSS worms different from traditional?
- Infection model - Current model requires user interaction, worm strictly contained within web application, passive and localized, no Warhol worms (15 mins of fame).
- Payload capability - Perform any application function (money transfer, close account). XSSProxy/Attack API. Malware (yikes)
- Target shift - Internet worms can own everything both in front of and behind a firewall (island hopping).
- Penetration - Need to trick the user into spreading between sites using a 3rd party proxy.
Traits of Current XSS Worms
- Static payloads
- Passive infection strategy
- Stay on the same domain (don't say nduja)
- Uncontrolled growth
- No command and control
Current Incident Response Options
- Fix the vulnerability
- Manual purging - can only be done by experts and doesn't scale
- Database snapshot restore - effectively removes all worm data from tainted columns, but forces loss of other application data
- Search & Destroy - works now. Tricky in the future, but possible.
Next Gen XSS Worm Reconnaissance: A reconnaissance component will be added to the client side to find more web applications to infect. Nodes can use HTML5 Workers/Google Gears WorkerPool/<insert tomorrow's new RIA technology>. What about SOP? Old and busted: utilize 3rd party proxy (a la jikto ~2007). What attackers should be doing now: malware - no SOP! Next gen hotness: cross-site XHR, XDR, postMessage. Allows cross-site bidirectional communication. Servers must opt in, like Flash, so absolutely no security issues there (kidding)
Cross-site communication in HTML5
- postMessage(): Cross-domain communication based on strings. What do developers do with strings? JSON/eval() SiteA + JSON + SiteB = Shared Security
Staniford, Paxson & Weaver's Reconnaissance Techniques
- "hit list scanning"
- Permutation Scanning
- Topological Scanning (not without malware, cross-site XHR)
Next Gen XSS Worm Attack: An attack component will be added to the client side. New client side piece delivered with the reconnaissance piece to attack other off-domain web apps. 85% of websites have XSS (how much is reflected vs stored?). How likely is it to find a stored XSS in another web app?
Next Gen XSS Worm Communication: A communication component will never occur in an XSS worm. Can't communicate directly from one victim browser to another victim browser. "Centralization" in worms is just another word for weakness.
Next Gen XSS Worm Command: A command component will be added to the worm payload. Communication with the operator is necessary for a command-and-control structure and data delivery (new target info, source updates, etc.)
- Attacker quietly posts signed payloads
- Victim creates token
- Victim queries Google form token using JSON
- Victim finds a signed result
- Executes the signed payload
Next Gen XSS Worm Intelligence: An intelligence component will be used after the initial worm stages, but it can't be trusted (adversaries can poison it). XSS worms probably don't need this; they typically follow a pattern where in the first 24 hours they reach massive infection through an epic growth rate. After that, they're gone and never seen again.
Ways to Prevent Next Gen XSS Worms
- on demand exploit egress filters: popular sites need agile response techniques
- OWASP AntiSamy - safe rich input validation. Uses a positive security model for rich input validation. High assurance mechanism for stopping XSS (and phishing) attacks
- utilizing cross-domain workflows: letting the browser SOP protection prevent cookie disclosure + sensitive application information
- browser content restrictions: Doesn't make sense in a DOM. Requires parsers to honor end tag attributes.
This presentation, entitled "Security in Agile Development: Breaking the Waterfall Mindset of the Security Industry" was by Dave Wichers, member of the OWASP board and cofounder and COO of Aspect Security.
Manifesto for Agile Software Development
Individuals and interactions over processes and tools. Working software over comprehensive documentation. Customer collaboration over contract negotiation. Responding to change over following a plan.
- Agile practices include test-driven development, pair programming, and doing the simplest thing.
- Planning Sprint (Sprint 0) - define user stories
- Develop in sprints and focus on what the customer wants first in short iterative development cycles
Assurance is the Goal
- "Assurance is the level of confidence that software functions as intended and is free of vulnerabilities, either intentionally or unintentionally designed or inserted as part of the software" - DOD
- Can agile software development methods generate assurance?
- "test-driven development places (functional) assurance squarely at the heart of development" - Johan Peters
Waterfall Security is "Breadth First"
- Build assurance layer-by-layer
- Challenges are problem space is very large, difficult to prioritize, ...
Agile vs Security
- Where to insert security activities?
Security in Agile (nice chart here)
- Add Threat Modeling and Stakeholder Security Stories at the beginning, between Story Finding and Initial Estimation
- Do periodic security sprints (if needed) between writing the story and scenario and implementing functionality and acceptance tests
- Do some independent expert testing and security architecture review support in the quality assurance phase
- Add Application Security Assurance Review between system testing and release phases
Key Agile Security Enablers
- Standard Security Controls: See the OWASP Enterprise Security API (ESAPI) Project
- Secure Coding Standards: How to properly use your standard security controls. How to avoid common security flaws. Automated code analysis.
- Developer Security Training: How to use your standard controls and avoid common flaws
- Support from Security Experts: Even with training and standard controls, security is hard. Access to security experts and independent testing/analysis is key. Ideally, a security expert would be on the team (but usually not possible).
Planning Sprint (Sprint 0)
- Identify Stakeholders: Ask them what their most important security concerns are. Work with them on the basic security controls required based on system purpose, environment, existence of such mechanisms, etc.
- Confidentiality: Who is allowed to access what data and how? How important is protecting this data? Regulatory requirements?
- Integrity: What data must be protected and to what degree?
- Availability: How important is system availability? Can we define an SLA?
Planning Sprint: Capture Risks in Stakeholder Security Stories
- As a User...I want to be the only one who can access my account so that I can keep my information private.
- As a User...I want my personal information encrypted in storage and transit so that it doesn't get stolen by attackers.
- As a Manager...I want to be the only one who can edit Employee salaries so that I can prevent fraud.
- As a Business Owner...I want all security critical actions logged, so that attacks can be noticed and diagnosed.
Building Assurance "Depth First"
- Identify most important security concerns and their required security mechanisms
- Within sprints, or in periodic security sprints develop test methods for them and their use, configure/implement/analyze these security mechanisms, and run the tests
Implement Stakeholder Security Stories
- Security stories are implemented just like other stories. Test-driven development (unit test cases come before the code). Continuous reviews and inspection (pair programming/constant information reviews)
Test Cases for Security Controls
- Security "requirements" are defined by developing test cases. Unit tests can test both positive (functional) and negative (not broken) aspects of security mechanisms. Tests are repeatable, providing full regression testing. But not true penetration testing or analysis.
- Real experience with test driven development. The OWASP Enterprise Security API.
- Results in significant increase in assurance
Test Cases for Security Stories
- Functional test cases. Typical unit testing by developers. Verify presence and proper function of security control. May include simple tests with a browser.
- Security test cases. Check for best practices. Test for common pitfalls. Hopefully, most come with your standard security controls.
- Test cases provide strong assurance evidence
- Independent security testing. Verifies that functional and security tests were performed. Provides additional specialized security testing expertise.
Periodic Security Sprints
- As necessary, build/integrate related security controls. Implement the highest priority security controls first. Leveraging your standard security components is key. Building significant new security controls is hard. Security sprints may even be completely avoided if sufficient standard components are available.
- Examples: Authentication, sessions, authorization, validation, canonicalization, encoding, error handling, logging, intrusion detection
Perform Agile Security Reviews
- Security reviews: verify all are in place and complete. Threat model, security stories, security controls, test cases, test results. Notice: Most are standard agile artifacts, not just add-on security deliverables.
- Application code review and penetration testing. Added for critical applications to increase assurance. Manual (tool supported), automated, or both. Within security sprints and/or predeployment testing.
Example: Agile Access Control
- With standard access control components, just make sure isAuthorized() is called where needed, in both the presentation layer and the business logic. Stay focused on implementing the functionality
- Define user stories around who can do what. Configure your policy for what is most important first. Define and restrict what normal users can do. Policy can be both declarative and programmatic.
- How do you test proper implementation? Develop policy specific test cases to make sure policy is enforced properly.
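Such policy-specific test cases might look like the following sketch; the `is_authorized` function and policy table are hypothetical stand-ins for illustration, not an actual ESAPI or application API:

```python
import unittest

# Hypothetical declarative policy: action -> roles allowed to perform it
POLICY = {
    "edit_salary": {"manager"},
    "view_statement": {"manager", "user"},
}

def is_authorized(role: str, action: str) -> bool:
    # Deny by default: unknown actions are not authorized for anyone
    return role in POLICY.get(action, set())

class AccessControlPolicyTests(unittest.TestCase):
    def test_manager_can_edit_salary(self):   # positive (functional) case
        self.assertTrue(is_authorized("manager", "edit_salary"))

    def test_user_cannot_edit_salary(self):   # negative ("not broken") case
        self.assertFalse(is_authorized("user", "edit_salary"))

    def test_unknown_action_denied(self):     # deny-by-default case
        self.assertFalse(is_authorized("user", "delete_database"))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AccessControlPolicyTests)
unittest.TextTestRunner(verbosity=0).run(suite)
```

Because these are ordinary unit tests, they run in every build and give full regression coverage of the policy, which is exactly the repeatability point made above.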
Security in Agile Summary
- Agile can generate assurance well, possibly better
- Approach is depth-first, not breadth-first
- Getting the right stakeholder security stories is key
- In traditional security, assurance comes primarily from expert security reviews at successive stages of development. In agile security, assurance comes from managing the key risks to the security stakeholders.
This presentation was by Dinis Cruz, an OWASP board member who works for Ounce Labs, a producer of a source code analysis tool, though he said he was not speaking on behalf of either. The presentation was entitled "Building a Tool for Security Consultants: A Story of a Customized Source Code Scanner". Everything was built on Open Source except for the scanning engine, which uses Ounce.
About the Tool
Developed features while performing an assessment. Only developing features that make sense. Considered mature after 4 or 5 engagements with no feature additions necessary. Tools job should be to give you "pointers" that you can follow. Tool displays a chart of the flow from function to function. Uses different colors to represent data sources and data sinks. Can map just source to sink so you can easily figure out where tainted data arrives from and where it goes to. Able to look for "insecure patterns" instead of finding 20 XSS or 10 SQL injection pages. Able to display function calls ordered both ways: what functions are called by a function or functions that call a function. Added a scripting editing environment. Everything that is available via the GUI can be scripted.
There were no slides for this presentation and the whole thing was a demonstration of the tool and how it works, its features, etc. I don't know a whole lot about source code scanning and will tell you that a good chunk of this presentation was over my head, but Dinis was very enthusiastic about the tool and made it sound like it's something totally awesome and very worth looking into. He says that the tool is not "nice" and not "easy to use", but once you get used to it, it is an extremely useful tool for source code analysis.
This presentation was by Chris Nickerson, founder of Lares Consulting, and the goal was to talk about the use of layered attacks.
General types of threats include:
- Social engineering/human: corporate/personal manipulation, bogus e-mails, physical intrusion, media dropping, phone calls, conversation, role playing
- Electronic: application and business logic attacks, software vulnerability exploitation, ...
- Physical: break-in, theft, physical access, physical manipulation, violence
- Malfunction/inherent: business logic flaws, software glitches, software coding holes/exploits, process breakdown, act of god/war/terrorism disruption, intended backdoors
A red team test should cover them all.
Why red teaming?
How do you know you can put up a fight if you have never taken a punch?
Red teaming process: Information Gathering -> Vulnerability Analysis -> Target Selection -> Planning -> Executing the Attack -> Back to step 1
Process of Attack
- Information Gathering: Research methods and useful information (spend most time here)
- Vulnerability Analysis: Internal/external/hired/personal
- Target Selection: Internal/external/hired/personal
- Planning: Plan a, b, e, d, pie
- Executing the Attack: Getting what you need and getting out. Not getting greedy. Getting out cleanly.
Corporate Attack Approach
- External Direct: server/app attack
- External Indirect: client side/phishing/phone calls
- Internal Indirect: key/cd drops/propaganda/creating a spy
- Internal Direct: social/electronic/physical/blended
- Exotic Attacks: environment manipulation (pulling the fire alarm, etc to move people)
Information Gathering Tools
- Maltego: The best attacks from the best intel (gives a graphical view of how all of the information interacts)
- Metagoofil: Yer Dox on the net have Infos (Extracts information from internet documents)
- Clez.net (External Profiling)
- CentralOps.net (Network Profiling)
- Robtex (Server Profiling)
- Touchgraph (Show business relationships and links)
- ServerSniff (Get tons of webserver specific info and verification)
- Netcraft (usage info)
- DomainTools (Domain info)
- MySpace/Friendster/Twitter (know ya enemy)
- Ophcrack Live
- Core Impact
- FireWire PCMCIA Card + Winlockpwn = Unlock
- Switchblade + Hacksaw + U3 drive
- Elite Keylogger
- WRT + Metasploit = Cheap leave behind
Other Fun Toys Onsite
- FlexiSpy (installs image on cell phone to read SMS, listen to phone calls, etc)
- Pen cams
- USB cams
- Cell phone jammers
All of these different methods to test front/back/side doors don't rule out the low tech attacks. You could spend a million dollars to prevent someone from hacking the server and they could just walk in the front door and take it. A really good talk by a guy who really knows his stuff and the only talk I've seen so far at the conference that wasn't specifically about technical vulnerabilities.
This presentation was by Alexander Meisel and is from a paper that was put together by the Germany OWASP chapter. He began by introducing the problem: online businesses have HTTP as their "weak spot". He then talked about the definition of the term "Web Application Firewall". It's not a network firewall and not only hardware. The targeted audience of the paper is technical decision makers, people responsible for operations and security, and application owners. Next he talked about some of the characteristics of web applications with regard to security. Prioritize web applications in regard to their importance (access to personal customer data, access to confidential company information, certifications). Some technical aspects include test and quality assurance, documentation, and vendor contracts.
Where do WAFs fit into the web application security field? WAFs are part of a solution. Create a table with wanted functionality (CSRF, session fixation, *-Injection). Do a rating/evaluation with "+" meaning it can be very well implemented using a WAF, "-" meaning it can not be implemented, "!" meaning depends on the WAF/application/requirement, and "=" meaning it can partly be implemented with a WAF.
He looked at the benefits and risks of WAFs. Good baseline security. Compliance. Just-in-time patching of problems. Additional benefits (depending on functionality) could include central reporting and error logging, SSL termination, URL encryption, etc.
Some risks involved in using WAFs are false positives, increased complexity, having yet another proxy, and potential side effects if the WAF terminates the application.
Protection against the OWASP Top 10. App vs WAF vs Policy. Three types of applications: web application in design phase, already productive app which can easily be changed, and productive app which cannot be modified or only with difficulty. Table of OWASP Top 10 in regards to work required with the 3 types of applications to fix the problem in the application itself, using a WAF, and using a policy.
Criteria for deciding whether or not to use WAFs. Company-wide criteria include the importance of the application for the success of the company, the number of web applications, complexity, operational costs, and performance and scalability. Criteria with regard to the web application include changeability of the application, documentation, maintenance contracts, and the time required to fix bugs in third-party products. Consideration of financial aspects includes avoidance of financial damage via successful attacks and the costs of using a WAF (license, updates, project costs for evaluation and WAF introduction, volume of work required/personnel costs).
He started going pretty fast here since he was already running over on time. The gist was a bunch of best practices for introduction and operation of web application firewalls. He talked about technical requirements, job requirements, and an iterative procedure for implementation.
This presentation was mostly just an overview of what is in the paper and he didn't get into too much specifics. Go check out the paper at https://www.owasp.org/index.php/Best_Practices:_Web_Application_Firewalls to get the details!
This presentation was by Hans Zaunere, Managing Member, and it is entitled "PHundamental Security - Ecosystem Review, Coding Secure with PHP, and Best Practices". Take a look at http://www.nyphp.org/phundamentals/ for the ongoing guide and best practices. Guru Stefan Esser recently presented an excellent talk at Zendcon.
Security fundamentals are common across the board. Different environments have different requirements (desktop apps are different from web/internet apps). Web/internet apps have a huge number of touch points. PHP isn't responsible for all of them, but the developer is. Different languages handle things in different ways. PHP is no different, except that "more internet applications speak PHP than any other". PHP gets a bad rap. It has a low barrier to entry and great flexibility. There have been some mistakes, like weak default configuration, being too forgiving of amateurs, the infamous magic_* features of PHP, and the PHP Group arguing over what counts as a security flaw.
It's easy to shoot yourself in the foot with C. In C++ it's harder to shoot yourself in the foot, but when you do, you blow off your whole leg. - Bjarne Stroustrup, Inventor of C++
Three zones of responsibility. PHP is effectively a wrapper around libraries and data sources. Many external dependencies and touch points.
- Poorly written code by amateur developers with no programming background. Primary cause for the security ecosystem around PHP. Laziness - letting PHP do its magic_*. "Program smart"
- Extensions and external libraries. PHP's greatest asset. Sometimes the library binding is faulty. Sometimes the external library has faults, or behaves in an unforeseen way when in a web environment - possible in any environment. Know what extensions you're using, use the minimal number of extensions, and be aware of the environment they were originally designed for. "Know thy extensions"
- PHP Core - "PHP". Secunia says 19 advisories for PHP between 2003-2008. Java had 38+ and Ruby 11+. "The list goes on - PHP is not alone". One advisory in 2008. "More internet applications speak PHP than any other"
- Best practices are common to any well run enterprise environment. PHP is growing into this environment very quickly.
- Web security is largely about your data and less about exploits in the underlying platform. Buffer overflows aren't so much the hot-topic.
- Installation - Avoid prepackaged installs, including RPMs, .deb, etc. If you use them, review their default deployment. Installation touch points also typically include MySQL/Apache.
- Configuration - Use php.ini-recommended. Better yet, take the time to know what you're doing and tune configuration files yourself.
- Don't make PHP guess what you mean. Be explicit with variables and types. Don't abuse scope - know where your variables come from. Avoid magic_* and implicitness - BE EXPLICIT.
- Keep code small, organized, and maintainable. Use OOP techniques to enforce code execution paths. Use includes to keep things organized.
- Don't use super-globals directly - wrap for protection.
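The "wrap your superglobals" advice might look something like the sketch below. It's in Python rather than PHP purely for illustration (a PHP version would wrap $_GET/$_POST the same way), and the class and method names are my own invention, not from the talk:

```python
# Hypothetical illustration of wrapping raw request input (PHP's
# $_GET/$_POST) behind a validating accessor, so no code reads the raw
# values directly.

class RequestInput:
    """Wraps a raw key/value request dict (stand-in for a superglobal)."""

    def __init__(self, raw):
        self._raw = raw  # kept private; never exposed directly

    def get_int(self, key, default=0):
        # Cast explicitly; fall back to a safe default on bad input.
        try:
            return int(self._raw[key])
        except (KeyError, ValueError, TypeError):
            return default

    def get_str(self, key, max_len=256):
        # Coerce to string and enforce a length bound.
        value = str(self._raw.get(key, ""))
        return value[:max_len]

params = RequestInput({"page": "2", "q": "<script>alert(1)</script>"})
print(params.get_int("page"))     # 2
print(params.get_int("missing"))  # 0, the safe default
```

The point is that every access goes through one choke point where filtering and casting happen, instead of being scattered across the application.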
Be aggressive - B.E. aggressive
- It's always about data
- One of PHP's greatest strengths - loose typing. It's also its biggest weakness. Don't make PHP guess what you mean.
- Cast variables; know their type and the data you expect. Let PHP do its magic only when you want it to - not by chance.
- Keep tabs on your data's path, lifecycle, and type. Know where it came from, what it's doing, and where it's going. Filter/escape/cast and throw exceptions every step of the way.
- Input validation, output validation, CASTING.
- Don't be lazy, be explicit, use OOP.
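The cast-and-validate advice above could be sketched like this (again in Python for illustration; a PHP version would use explicit (int) casts and filter_var(), and the function name and range here are made up by me):

```python
# Hypothetical sketch of "filter, cast, and throw exceptions at every
# step": refuse to let the language guess types, and reject anything
# outside the shape you actually expect.

def parse_user_id(raw):
    """Validate and explicitly cast untrusted input to the expected type."""
    if not isinstance(raw, str) or not raw.isdigit():
        raise ValueError(f"expected a numeric user id, got {raw!r}")
    user_id = int(raw)  # explicit cast, not implicit coercion
    if not (1 <= user_id <= 10**9):
        raise ValueError("user id out of range")
    return user_id

print(parse_user_id("42"))  # 42
```

Input validation on the way in, casting to a known type, and an exception on anything unexpected: exactly the opposite of letting magic_* quietly fix things up.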
Casting isn't just for movie producers
- No system has a single security pressure point
- Don't take the easy way out just because you can
- Put PHP in the same well managed enterprise environment as other technologies
- PHP/AMP respond very well to TLC
PHP is just part of the ecosystem and there is awareness and experience on the PHP side. The yin/yang of PHP's history overshadows reality. Stand by PHP and it'll stand by you. Web/internet applications are deep and complex. Users, interoperability, data, architecture, support, compliance. Phishing, hijacking, spam, social engineering - BROWSERS!
PHP is the least of your worries
This presentation was by Jacob West of the Security Research Group and Taylor McKinsley of Product Marketing at Fortify Software. I'd like to note that Fortify develops a source code analysis tool, so this presentation may have a bias towards source code analysis tools.
56% of organizations fail PCI section 6. Poorly coded web applications leading to SQL injection vulnerabilities is one of the top five reasons for a PCI audit failure. Section 6 is becoming a bigger problem: it was the #9 reason for failure in 2006 and #2 in 2007.
PCI Section 6 has to do with guidelines to "Develop and maintain secure systems and applications". Section 6.6 reads "Ensure that all web-facing applications are protected against known attacks by either of the following methods: Having all custom application code reviewed for common vulnerabilities by an organization that specializes in web application security" or by using a web application firewall. Further clarifications say that automated tools are acceptable, web application penetration testing is allowed, and vulnerability assessments can be performed by an internal team.
Comparing Apples, Oranges, and Watermelons
- Setup: Source code analysis (+2) is good because it works on existing hardware, but it must live where your source code lives. Penetration testing (+3) is good because you only need one to assess everything and it works on existing hardware, but it needs to talk to a running program. Application firewall (+1) is good because it lives on the network, but you must model program behavior.
- Optimization: Source code analysis (+2) is good because you can specify generic antipatterns in code, but you must understand vulnerability in detail. Penetration testing (+2) is good because tests are attacks, but you must successfully attack your application. Application firewalls (+1) are good because they share configuration across programs, but must differentiate good from bad.
- Performance: Source code analysis (+3) is good because it simulates all application states and is non-production, but scales with build time and not the number of tests. Penetration testing (+2) is good because you get incremental results and is non-production, but you must exercise each application state. Application firewall (+1) is good because it's a stand-alone device and scales with $$$, but impacts production performance and scales with $$$.
- Human resources: Source code analysis (+1) is good because it enables security in development and reports a root cause, but makes auditors better and does not replace them. Penetration testing (+2) is good because it is highly automatable, but reports symptoms and not the root cause. Application firewall (+2) is good because once it's configured it functions largely unattended, but requires extensive and ongoing configuration.
- Security know-how: Source code analysis (+3) is good because it gives code-level details to an auditor, but you must understand security-relevant behavior of APIs. Penetration testing (+1) is good because it automates hacks, but a hacker is required to measure success and optimize. Application firewall (+2) is good because it identifies common attacks out of the box and is a community effort, but a hacker is required to measure success and customize.
- Development expertise: Source code analysis (+1) is good because it focuses attention on relevant code, but you must understand code-level program behavior. Penetration testing (+2) is good because basic attacks ignore internals, but advanced attacks require internal knowledge. Application firewalls (+2) are good because they live on the network, but you must understand the program to tell good from bad.
- False positives: Source code analysis (+1) is good because it gives auditors details to verify issues, but it reports impossible application states. Penetration testing (+2) is good because results come with reproduction steps, but it is difficult to oracle some bugs. Application firewalls (+1) are good because they deal in attacks instead of vulnerabilities, but there is an evolving definition of valid behavior.
- False negatives: Source code analysis (+3) is good because it simulates all program states and models the full program, but it must be told what to look for. Penetration testing (+1) is good because it is good at finding what hackers find, but it is difficult to oracle some bugs and there is missed coverage. Application firewalls (+1) are good because they deal in attacks instead of vulnerabilities, but there is an evolving attack landscape.
- Technology support: Source code analysis (+2) is good because parsing is separable from the analysis and is interface-neutral, but it must adapt to new program paradigms. Penetration testing (+2) is good because it is independent from program paradigms, but is tied to protocols and is limited to network interfaces. Application firewalls (+2) are good because they are independent from program paradigms, but are tied to protocols and are limited to network interfaces.
Working Towards a Solution
- Assessment: Proving the problem or meeting the regulatory requirement. Recurring cost that does not "fix" anything
- Remediation: Fixing security issues found during assessments. Lowering business risk at a single point in time.
- Prevention: Get security right the first time. Minimizing business risk systematically.
Do your own comparison and fill out the scorecard yourself (the presenters' ratings are noted in parentheses above).
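For reference, here is my own quick tally of the parenthesized ratings from the nine criteria above (the arithmetic is mine, not from the slides):

```python
# Summing the presenters' per-criterion ratings quoted above, per
# technology: source code analysis (SCA), penetration testing, and
# web application firewall (WAF).

ratings = {
    # criterion:          (SCA, pen test, WAF)
    "setup":              (2, 3, 1),
    "optimization":       (2, 2, 1),
    "performance":        (3, 2, 1),
    "human resources":    (1, 2, 2),
    "security know-how":  (3, 1, 2),
    "development expertise": (1, 2, 2),
    "false positives":    (1, 2, 1),
    "false negatives":    (3, 1, 1),
    "technology support": (2, 2, 2),
}

# Transpose the per-criterion tuples into per-technology columns and sum.
totals = [sum(column) for column in zip(*ratings.values())]
for name, total in zip(("SCA", "Pen test", "WAF"), totals):
    print(f"{name}: {total}")
# SCA: 18, Pen test: 17, WAF: 13
```

By this tally source code analysis edges out penetration testing, with the WAF trailing, but as the presenters said, the weights that matter are the ones for your own environment.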
Taylor did interviews with three companies to get their experiences deploying each (source code analysis, penetration testing, and application firewall) and had them evaluate based on the nine criteria both before and after buying. Not going to list each company's results in the blog, but it was just a basic table with each criteria and a number rating for both before purchase and after deployment. To sum it up, Source Code Analysis was a 14 rating before purchase and a 17 rating after deployment. Penetration testing was a 21 rating before purchase and a 21 rating after deployment. Application firewalls were a 21 rating before purchase and a 16 rating after deployment. It seems like the first organization had a large amount of developers and that factored into their decision to purchase a source code analysis tool. The second organization had a far fewer number of developers and was more of an IT shop and chose the penetration testing tool. The last organization was a smaller shop in general (still fairly large) and went with the WAF because they wanted something they could just put in place and manage.
Analysis: All three solutions required more effort than expected. All three solutions produce reasonably accurate results. Varying levels of expertise needed.
How do you demonstrate that your application is protected against known attacks?
- Verification that the application was analyzed
- A report showing no critical security issues identified
- Document showing how the tool fits into your architecture
How do you show that the user is appropriately trained?
- Document explaining prior experience or an informal interview
How do you show that you have configured the tool appropriately?
- Document explaining how the tool was configured and what new rules had to be added.
Summary: PCI section 6 is evolving to become increasingly precise. Compare technologies in your environment along nine criteria. Demonstrating compliance is an art, not a science.