Web Admin Blog: Real Web Admins. Real World Experience.

25 Aug 2014

Why You Shouldn’t Phish Your Users

As an Information Security Program Owner, I get a barrage of e-mails and phone calls multiple times a day from vendors looking to sell us their latest hotness security product.  Between the e-mails, phone calls, expo floor at BlackHat this year, and several talks that I've seen at past conferences, I have noticed a disturbing trend that I thought was worth bringing up: phishing your users.

The concept is simple: you send e-mails to your users with content that appears legitimate along with links or attachments that are designed to simulate a spear-phishing attack.  If the user recognizes it as malicious and deletes it, then they are left to carry on with business as usual.  If, however, they fall victim to your trickery, then they are punished in the form of verbal and written lectures, letters to their management, and security awareness training.  No carrot, all stick.  This situation makes me think back on an issue that I've encountered with my twin daughters at bedtime.  For over a year we struggled to get them to stay in bed at night.  We would lay them down, play some music, and then leave the room, and it wasn't five minutes before they were up playing, yelling, and coming back out into the hall.  We yelled at them, spanked them, turned off lights, and did just about everything we could think of to get them to stay in bed.  None of the punishments actually corrected the behavior.  Do you want to know what actually worked?  Offering them a treat in the morning if they stayed in bed all night.  Positive reinforcement.

As much as we hate to admit it, adults aren't that different from children in this way.  Nobody takes well to being tricked into clicking on links or opening attachments.  Punishing them for it leads to even further resentment.  And where do you think they focus those hostilities?  The Security Team.  Those people who you are trying to protect end up blaming you and your team for getting them into trouble.  Now, what happens the next time you have a problem that you need that user's assistance to solve?  Absolutely nothing.  Every time you phish a user, you are burning a bridge that you may need later on.  And since we all know how easy it is to phish a user, it just means that you are burning a lot of bridges.

So, what can we do to prevent our organization from being compromised by phishing and other types of social engineering attacks?  To start with, you should incorporate security awareness training alongside your new hire training activities.  Make sure that every employee has a baseline amount of knowledge of the issues and how to avoid them.  Next, you should invest in technologies that will detect and prevent these types of malicious activities.  Performing some sort of link and attachment inspection in e-mails and web content inspection for malware will significantly reduce the success rate of these types of attacks.  Lastly, there are a number of vendors who will track real-life phishing attempts against your users and modify the links so that they can perform analysis on who clicked and who didn't.  This has the same effect as phishing your users yourself, in that you can sit them down and have a talk about what happened, but without pitting them against the Security Team.  The attacker is now the bad guy and you're just the friendly information security professional helping to get them back up and running and giving them tips so that it doesn't happen again.  You are BUILDING BRIDGES.  And, if you want to put an even more positive spin on this process, offer up a reward for those who get phished but notify the Security Team instead of clicking on the link or opening the attachment.  Everybody wins.  That's why you shouldn't phish your users.
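
For what it's worth, the link tracking those vendors do isn't magic.  Here's a rough sketch of the idea: rewrite each link in a message so it passes through a redirector that records who clicked before sending them on to the real destination.  The tracking domain, parameter names, and class below are all made up for illustration and aren't from any particular product.

    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class LinkTracker {

        // Hypothetical redirector that logs the recipient and link before issuing a 302 to the real URL.
        private static final String REDIRECTOR = "https://track.example.com/click";

        // Wrap a single URL so a click can be attributed to a specific recipient.
        static String wrap(String originalUrl, String recipientId) {
            return REDIRECTOR
                    + "?user=" + URLEncoder.encode(recipientId, StandardCharsets.UTF_8)
                    + "&url=" + URLEncoder.encode(originalUrl, StandardCharsets.UTF_8);
        }

        // Rewrite every href in an HTML message body to go through the redirector.
        static String rewriteLinks(String htmlBody, String recipientId) {
            Pattern href = Pattern.compile("href=\"(http[^\"]+)\"");
            Matcher m = href.matcher(htmlBody);
            StringBuffer out = new StringBuffer();
            while (m.find()) {
                String tracked = wrap(m.group(1), recipientId);
                m.appendReplacement(out, "href=\"" + Matcher.quoteReplacement(tracked) + "\"");
            }
            m.appendTail(out);
            return out.toString();
        }
    }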

15 Sep 2013

Enterprise Risk Management for the Masses

A couple of years ago I decided, along with support from my management, that Enterprise Risk Management would become a focal point for my Information Security Program.  I was convinced that framing vulnerabilities in the form of risks was essential to giving management visibility into issues they didn't know existed and to giving our staff the comfort of knowing that the issues that caused them to lose sleep at night were now being considered for mitigation by management.  I couldn't have been more right.

I began by collecting the risks submitted by each team in Excel spreadsheets and Word documents.  They had all of the pertinent information like a subject, owner, risk assessment, etc., but very quickly I became a victim of my own success.  Before I knew it, I had more risks than I could efficiently track in this format.  First off, it was extremely cumbersome to try to manually maintain the risk index in Excel.  While Excel is good at calculating formulas, it sucks at maintaining links to external documents.  It can be done, but it requires quite a bit of manual effort.  Second, maintaining your risk details in Word documents is something you should reserve only for your worst enemies.  They are difficult to update, difficult to track changes in, difficult to search and, well, just plain difficult.  I thought to myself that there had to be a better way, yet this is what the unfortunate majority out there are stuck with today.

After some research, it turns out that many years back my company had another security professional who was interested in Enterprise Risk Management.  Apparently, they had come to conclusions similar to mine about the Word documents and Excel spreadsheets, but they were able to get some internal development time to create a Lotus Notes based risk management database.  It was everything that I needed, or so I thought, so I started to manually enter all of my new risks into this old risk management database.  At first, things seemed to be working well.  I had some different views into my data that allowed me to see far more information than I could before.  I also had the ability for management of our various teams to see their risks without involving me.  It was much better, but soon I began to realize the limitations of this approach.  The database itself was rigid.  Changes required me to go through another internal team for resources and often took a long time to make.  Also, any updates that were made didn't modify the current risks, only the ones submitted after that point.  Once, I found myself opening and re-saving hundreds of risks just because I decided to change my risk calculation formula slightly.  I began looking again for another way.

Soon, my new round of research brought me to a special set of tools called Governance, Risk, and Compliance, or GRC for short.  There are a number of such tools out there by well-respected companies such as EMC Archer and CA.  They looked completely awesome and seemed to solve all of my problems with many more features to spare, so I started to get some SWAG quotes from a few of the vendors.  Lo and behold, these tools carry price tags of $100k to half a million dollars and beyond.  A request for budget for one of these tools was dismissed immediately, with management literally laughing at my suggestion.  OK, so maybe it was on me, right?  Maybe I didn't do a good enough job of selling the tool?  Maybe I didn't engage the right stakeholders to back my request?  I guess you could call me a glutton for punishment, but I decided to keep trying.  This time I gathered people I thought would be interested in risk from all different areas of our business for a demo of one of the tools.  Trade Compliance, Health and Safety, Facilities, Legal, and many more.  They watched the presentation, asked some fantastic questions, and ultimately left that meeting saying that they thought a GRC solution was a fantastic idea.  That was until I mentioned the price tag.  If it wasn't going to happen even with the budget split between half a dozen different teams, then it simply wasn't going to happen.

As I began to think about the situation that I was in, I realized that I wasn't alone in all this.  I talked with friends at various state agencies, friends at risk consultancies, and friends at companies large and small.  They had gone through the same trials and tribulations that I had and, for the most part, fared no better.  Having spent the better part of the last decade coding random applications and websites in PHP and MySQL, I decided that there might be something I could do about it.  I would go home from work and code until the wee hours of the morning.  I would wake up early on my weekends and start coding again until the family awoke.  After several weeks of this, I had a working prototype for a new risk management system based on some simplifications of the NIST 800-30 risk management framework and running on a LAMP (Linux, Apache, MySQL, PHP) stack.  SimpleRisk was born.

At the time of this writing, I have released seven official versions of SimpleRisk since March of this year.  It has come a long way since then, but still holds true to its roots.  SimpleRisk is free and open source.  The methodology was designed to be as simple as possible, hence the name.  A five-step process walks you through the basics of risk management:

  1. Submit your risks
  2. Plan your mitigations
  3. Perform management reviews
  4. Prioritize for project planning
  5. Review regularly

It has every basic feature required of an enterprise risk management system and I'm adding new ones all the time.  It has five different ways to weight classic risk calculations (i.e., likelihood and impact) and can perform CVSS scoring as well.  It has its own built-in authentication system, but I've built an extra module to do LDAP authentication that I'm giving away to anyone who donates $500 or more to the cause.  It also has a half-dozen different ways to report on the risks, and many more reports should be available soon.  You can check out the demo (minus the Administrator interface) using the username "user" and password "user" at http://demo.simplerisk.org.  Or, if you're ready to dive right in, you can obtain the download package for free at http://www.simplerisk.org.
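
SimpleRisk itself is written in PHP, so the snippet below isn't SimpleRisk code; it's just a minimal illustration of what a "classic" likelihood-times-impact calculation looks like, with scales and labels that are assumptions for the sake of the example.

    public class ClassicRisk {

        // Illustrative 1-5 scales; the real scales, labels, and weightings are up to the tool.
        enum Likelihood { REMOTE, UNLIKELY, CREDIBLE, LIKELY, ALMOST_CERTAIN }
        enum Impact { INSIGNIFICANT, MINOR, MODERATE, MAJOR, EXTREME }

        // Classic risk: likelihood x impact, normalized here to a 0-10 score.
        static double score(Likelihood likelihood, Impact impact) {
            int l = likelihood.ordinal() + 1;   // 1..5
            int i = impact.ordinal() + 1;       // 1..5
            return (l * i) / 25.0 * 10.0;       // 0.4 .. 10.0
        }

        public static void main(String[] args) {
            System.out.println(score(Likelihood.LIKELY, Impact.MAJOR));   // 6.4
        }
    }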

In order to make your foray into SimpleRisk as simple as possible, I've created a SimpleRisk LAMP Installation Guide that you can use to have the tool up and running in about 30-60 minutes.  And if that proves too difficult or time-consuming, then you should make your way to http://www.hostedrisk.com where, for a fraction of what it would cost to buy a GRC solution, you will get your own dedicated SimpleRisk instance, running on hardware dedicated to you, built with security in mind, and including extra modules not part of the standard distribution.  You'll never have to worry about installing or upgrading risk management software ever again.  Hopefully you won't ever need this, but the option is always there in case you do.

My frustrations with the lack of efficient and cost-effective risk management tools led me to create one of my own.  My hope is that by making SimpleRisk free and open source, it will benefit the rest of the security community as much as it has already benefited me.  If you have any questions or requests for features that you would like to see included in the tool, I'm always here to help.  SimpleRisk is simple enterprise risk management for the masses.

25 Jan 2010

Come To OpsCamp!

Next weekend, Jan 30, 2010, there's a Web Ops get-together here in Austin called OpsCamp!  It'll be a Web Ops "unconference" with a cloud focus.  Right up our alley!  We hope to see you there.

13 Nov 2009

Techniques in Attacking and Defending XML/Web Services

This presentation was by Jason Macy and Mamoon Yunus of Crosscheck Networks - Forum Systems.  It wins the award (the one I just made up) for being the most vendor-oriented presentation at the conference.  Not that it wasn't an interesting presentation, but their solution to defend against most of the attacks was "Use an XML Gateway" (guess what Forum Systems sells?) and the attacks were all presented using the Crosscheck SOAPSonar tool.  I realize that, being a vendor, they probably have more knowledge than most in the field, but since this is an open source conference, you'd think they would have demonstrated using a free/open tool (SoapUI?) and talked more about non-hardware solutions to fix the issues.  My notes from the session are below:

Agenda

  1. Introduction to XML/Web Services Threats
  2. Techniques for Defending XML Threats
  3. XML Attack Examples and Classification
  4. Review sample attacks

Introduction to XML Threats

  • Explicit Attacks
    • Forced Disruption
    • Information Theft
    • Vendor Discovery
  • Implicit Vulnerability
    • Perimeter Breach (embedded virus, malware)
    • Infrastructure Malfunction (parser and data processing failures)

New Attack Vectors

  • Protocol Firewalls are blind to XML
  • Malware and virus delivered via SOAP attachments
  • WSDL exposes schema and message structure
  • Injection attacks exposed via XML parameters
  • Data replay attacks

Security Testing - Base Requirements

  • Security Framework
    • Sign, Encrypt, Decrypt, SSL
  • Identity Framework
    • Basic auth, SSL auth, WS-Security token auth
  • Parameter Injection
    • Database or file driven
    • Permutations for security, identity, and SOAP/XML
  • Concurrent Client Simultaneous Loading
    • Denial of Service Testing
  • SOAP with Attachments
    • Malware and Virus testing
  • Dynamic XSD Mutation
    • Derive SOAP vulnerability profile from WSDL schema
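
To make the "Parameter Injection" and "Infrastructure Malfunction (parser and data processing failures)" items above concrete, here's a rough sketch of the kind of probe a tool like SOAPSonar (or SoapUI) automates: post a SOAP envelope with a deliberately oversized parameter value and see whether the service fails gracefully.  The endpoint, operation, and element names are made up, and you should obviously only point something like this at a service you're authorized to test.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class SoapFuzzProbe {

        public static void main(String[] args) throws Exception {
            // Hypothetical target; substitute a service you are authorized to test.
            URL endpoint = new URL("http://localhost:8080/ws/AccountService");

            // Oversized parameter value intended to exercise parser and data-handling limits.
            String hugeValue = "A".repeat(1_000_000);
            String envelope =
                    "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
                    + "<soapenv:Body><getAccount><accountId>" + hugeValue + "</accountId></getAccount></soapenv:Body>"
                    + "</soapenv:Envelope>";

            HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            conn.setRequestProperty("SOAPAction", "\"getAccount\"");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(envelope.getBytes(StandardCharsets.UTF_8));
            }

            // A 500 with a stack trace, a hang, or a crash here is the kind of "infrastructure malfunction" described above.
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }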

13 Nov 2009

OWASP Top 10 – 2010

This presentation was by Dave Wichers, COO of Aspect Security and an OWASP Board Member.  My notes are below:

What's Changed?

  • It's about Risks, not just vulnerabilities
    • New title is: "The Top 10 Most Critical Web Application Security Risks"
  • OWASP Top 10 Risk Rating Methodology
    • Based on the OWASP Risk Rating Methodology, used to prioritize Top 10
  • 2 Risks Added, 2 Dropped
    • Added: A6 - Security Misconfiguration
      • Was A10 in 2004 Top 10: Insecure Configuration Management
    • Added: A8 - Unvalidated Redirects and Forwards
      • Relatively common and VERY dangerous flaw that is not well known
    • Removed: A3 - Malicious File Execution
      • Primarily a PHP flaw that is dropping in prevalence
    • Removed: A6 - Information Leakage and Improper Error Handling
      • A very prevalent flaw, that does not introduce much risk (normally)
  1. A1 - Injection: Tricking an application into including unintended commands in the data sent to an interpreter; see the parameterized-query sketch after this list. (http://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet)
  2. A2 - Cross Site Scripting (XSS): Raw data from attacker is sent to an innocent user's browser.  For large chunks of user supplied HTML, use OWASP's AntiSamy to sanitize this HTML to make it safe.  (http://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet)
  3. A3 - Broken Authentication and Session Management: Means credentials have to go with every request.  Should use SSL for everything requiring authentication.
  4. A4 - Insecure Direct Object Reference: This is part of enforcing proper "Authorization", along with A7 - Failure to Restrict URL Access.
  5. A5 - Cross Site Request Forgery (CSRF): An attack where the victim's browser is tricked into issuing a command to a vulnerable web application.  Vulnerability is caused by browsers automatically including user authentication data with each request.  (Check out OWASP CSRFGuard, OWASP CSRFTester, http://www.owasp.org/index.php/CSRF_Prevention_Cheat_Sheet)
  6. A6 - Security Misconfiguration: All through the network and platform.  Don't forget the development environment.  Think of all the places your source code goes.  All credentials should change in production.
  7. A7 - Failure to Restrict URL Access: This is part of enforcing proper "authorization", along with A4 - Insecure Direct Object References.
  8. A8 - Unvalidated Redirects and Forwards: Web application redirects are very common and frequently include user supplied parameters in the destination URL.  If they aren't validated, attacker can send victim to a site of their choice.
  9. A9 - Insecure Cryptographic Storage: Storing sensitive data insecurely.  Failure to identify all sensitive data.  Failure to identify all the places that this sensitive data gets stored.  Failure to properly protect this data in every location.
  10. A10 - Insufficient Transport Layer Protection
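
To make the A1 item above concrete, here's a minimal parameterized-query sketch along the lines of what the SQL injection cheat sheet recommends; the table and column names are invented for the example.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class InjectionSafeLookup {

        // Vulnerable pattern: the interpreter (SQL) cannot tell data from command.
        //   String sql = "SELECT id, balance FROM accounts WHERE owner = '" + userInput + "'";
        //
        // Safe pattern: bind the untrusted value so it is always treated as data.
        static ResultSet findAccounts(Connection db, String userInput) throws SQLException {
            PreparedStatement stmt = db.prepareStatement(
                    "SELECT id, balance FROM accounts WHERE owner = ?");
            stmt.setString(1, userInput);   // userInput can contain quotes; it stays data
            return stmt.executeQuery();
        }
    }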

OWASP Top 10 Risk Rating Methodology

  • Attack Vector (How hard for an attacker to use this flaw - 1 (Easy), 2 (Average), 3 (Difficult))
  • Weakness Prevalence (How often is it found - 1 (Widespread), 2 (Common), 3 (Uncommon))
  • Weakness Detectability (How hard is it for an attacker to find the flaw - 1 (Easy),  2 (Average), 3 (Difficult))
  • Technical Impact (1 (Severe), 2 (Moderate), 3 (Minor))

This is generic across the internet, not specific to any organization.

Started a new "Prevention Cheatsheet Series" that the Top 10 references (XSS, SQL Injection, Transport Layer Security, CSRF, Direct Object Reference).

What is actually being released is RC1 of the Top 10.  They are encouraging people to provide comments through the end of the year and will use that feedback to post the final Top 10 in January 2010.

12 Nov 2009

The ESAPI Web Application Firewall

This presentation was by Arshan Dabirsiaghi and was about the OWASP ESAPI Web Application Firewall (WAF) project.  My notes are below:

WAF Fallacies (at least in regards to OWASP ESAPI WAF)

  • WAFs add attack surface
  • WAFs can create culture problems
  • WAFs can't fix business logic vulnerabilities
  • WAFs are way too expensive
  • WAFs complicate networks

Why fix in ESAPI WAF vs Fix in code?

  • A change in the ESAPI WAF is just a text file edit
  • Shorter gap between the time a flaw is discovered and a WAF fix vs. a code fix

Advantages of WAF

  • Performance - Only your rules are checked, plus state is already managed by the app server
  • Capability - being closer to the app lets us do more
  • Process - Rules are closer to application owner, shortening discovery-to-patch time, also fix-to-patch-removal time

Principle: Make common tasks easy, uncommon tasks possible

General virtual patching functionality is easy to understand

Ability to write custom script rules as well ("bean shell" rules)
Fixing Injection Flaws is easy

Can fix business logic flaws with the WAF (missing authentication, missing functional access control, missing data layer access control)
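
I didn't capture the ESAPI WAF rule syntax in my notes, so the sketch below is not an ESAPI WAF policy; it's just a plain servlet filter showing the sort of "missing authentication" business logic fix he described (deny a sensitive path to anyone without a logged-in session).  The path and session attribute name are assumptions.

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // Virtual-patch-style filter: enforce authentication on /admin/* without touching application code.
    public class RequireAuthFilter implements Filter {

        public void init(FilterConfig config) { }

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;

            if (request.getRequestURI().startsWith("/admin/")) {
                HttpSession session = request.getSession(false);
                // "authenticatedUser" is a hypothetical session attribute set by the login page.
                if (session == null || session.getAttribute("authenticatedUser") == null) {
                    response.sendError(HttpServletResponse.SC_FORBIDDEN);
                    return;
                }
            }
            chain.doFilter(req, res);
        }

        public void destroy() { }
    }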

Can add "outbound" security as well

  • Add anti-clickjacking header
  • Set uniform content-type
  • Add HttpOnly flag
  • Add secure flag
  • Detect outbound information
  • Replace outbound information

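Since the ESAPI WAF policy syntax for these isn't in my notes either, here's what the outbound items above amount to when expressed directly against the servlet response; the header values and the session cookie rewrite are illustrative choices, not necessarily what the WAF emits.

    import javax.servlet.http.HttpServletResponse;

    public class OutboundHardening {

        // What the outbound rules amount to, expressed directly against the servlet response.
        static void harden(HttpServletResponse response) {
            // Anti-clickjacking header (DENY is one common choice).
            response.setHeader("X-Frame-Options", "DENY");

            // Uniform content type for HTML responses.
            response.setContentType("text/html; charset=UTF-8");

            // HttpOnly and Secure flags on the session cookie; with the servlet APIs of the time this
            // meant rewriting the Set-Cookie header rather than calling a setter.  Placeholder value shown.
            response.setHeader("Set-Cookie", "JSESSIONID=<existing value>; Path=/; Secure; HttpOnly");
        }
    }
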
Takes advantage of early failing to make rules as optimized as possible

Now we see the tool demonstrated with several different vulnerabilities in a real-world application (JForum):

  • Cross-Site Scripting Flaw (JForum XSS flaw is unable to be fixed with a WAF because of dynamic URLs)
  • Unchecked Redirect
  • Add HttpOnly
  • Add anti-clickjacking header
  • Privilege escalation

3 Different WAF Modes

  • Log
  • Block
  • Redirect

Latency with all of the rules turned on is about 5%.  With only selected rules enabled it is closer to 0%.  The overhead scales roughly with the number of rules enabled and works out to milliseconds.

12 Nov 2009

Defending Yourself: Integrating Real Time Defenses into Online Applications

This presentation was by Michael Coates, the AppSensor Project Lead.  Michael works as a Senior Application Security Engineer at Aspect Security.  AppSensor is a real time defense system with the goal being to protect an application by detecting who is bad and getting rid of them before they do bad things.  My notes from this session are below:

Agenda

  • AppSensor Project
  • Malicious Attackers
  • Application Worms

Detecting Attacks the Right Way

  • Integration
    • Detect INSIDE the application
    • Understand business logic
  • Effectiveness
    • Minimal false positives
    • Immediate response
  • Effort
    • Automatic detection
    • No manual work required

Detection Outside the Application (WAF)

  • Application context not available
  • No concept of access violations
  • Custom application + Generic Solution != success
  • Ex: Changing the account ID in /viewAccount?id=1002

Inside the Application is Best

  • Understand application & business context
  • Integration with authentication & user store
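
Picking up the /viewAccount?id=1002 example from above: inside the application you know which accounts the logged-in user actually owns, so the check (and the event it raises) is trivial.  The class and method names below are hypothetical stand-ins, not the actual AppSensor API.

    public class AccountController {

        // Stubs standing in for the application's own user store, access control, and
        // AppSensor event reporting; the names here are hypothetical.
        boolean ownsAccount(String userId, long accountId) { return false; }
        void reportSecurityEvent(String userId, String eventCode, String detail) { /* hand off to AppSensor */ }
        void renderAccount(long accountId) { /* normal page rendering */ }

        // The /viewAccount?id=1002 case: only the application knows whose account 1002 is.
        public void viewAccount(String userId, long requestedAccountId) {
            if (!ownsAccount(userId, requestedAccountId)) {
                // A generic WAF outside the app can't see this as a violation; the application can,
                // and it both blocks the request and counts it as an access control event.
                reportSecurityEvent(userId, "ACE1", "Tampered account id " + requestedAccountId);
                throw new SecurityException("Access denied");
            }
            renderAccount(requestedAccountId);
        }
    }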

How Does AppSensor Protect the App?

  • It takes an attacker many requests to find a vulnerability
  • It takes AppSensor fewer requests to determine that the user is malicious

AppSensor is Faster than Attacker

  • User identified as malicious and blocked before a vulnerability is found

Categories of Detection

  • Request
  • Authentication
  • Access Control
  • Session
  • Input
  • Encoding
  • Command Injection
  • File IO
  • User Trend
  • System Trend

Attack Detection: Real vs Cyber World

  • Why do bank robbers get caught?
  • Why don't hackers get caught?

Let's Change Things - Applications Should...

  • Detect attacks
  • Understand normal use vs suspicious use
  • Instantly identify attackers
  • Shutdown attackers in real time
  • Modify application accessibility for defense

Detecting Malicious Users

  • Many malicious attacks are obvious and not "user error"
    • POST when expecting GET
    • Tampering with headers
    • Submission of an XSS attack

Detecting Malicious Users

  • Bypassing client side input validation
  • Transaction using functionality not visible to user role
  • Multiple access control violations
  • Change of user agent midsession
  • Double encoded data

The Code

  • Leverages ESAPI!
  • 3 lines to setup AppSensor
  • 2 lines per AppSensor detection point

Setting up AppSensor

  1. Configure response action object (log, logout, account lock)
  2. Create AppSensorIntrusionDetector with response action object
  3. Set ESAPI intrusion detector

Defining Response Policies

  • ESAPI.properties file
  • Define
    • Threshold count
    • Interval of events
    • Response action
    • Per exception type or aggregate

2 Lines to Use AppSensor

  1. Check for "maliciousness"
  2. Create new AppSensorException

Understanding the Intrusion Exception

    new AppSensorIntrusionException(
        request.getServletPath(),
        "ACE1",
        "User Message",
        "Direct object tampering with ..."
    );

AppSensor vs Scanners

  • Tools attempt 10,000s of generic attacks
  • AppSensor stops automated scans nearly instantly

AppSensor vs Human Attackers

  • Very difficult for attacker
  • Requires advanced obfuscation for each attack
  • Multiple probes == detection

Application Worms on the Rise

  • Twitter Worm
  • MySpace Samy Worm
  • Huge damages for site
    • Remediation
    • Cleanup
    • Bad PR
    • Infected Users
  • Leverage XSS and CSRF

Detecting/Preventing an Application Worm

  • Can you find/fix all XSS?
  • Pattern matching easily foiled
  • Block the common factor!
    • Worms use XSS and CSRF for propagation
    • 1000% usage increase -> problem
    • Our example: updateProfile, updateStatus, updateName
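
Here's a minimal sketch of the "1000% usage increase" heuristic above: track how often a feature is normally used per interval and flag it when current usage blows past a multiple of that baseline.  The feature names and numbers are the examples from the slide; none of this is AppSensor's actual trend code.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    public class FeatureTrendMonitor {

        // Baseline: typical calls per interval, learned or configured per feature.
        private final Map<String, Long> baselinePerInterval = new ConcurrentHashMap<>();
        // Calls observed in the current interval.
        private final Map<String, AtomicLong> currentInterval = new ConcurrentHashMap<>();

        public FeatureTrendMonitor() {
            baselinePerInterval.put("updateProfile", 200L);
            baselinePerInterval.put("updateStatus", 500L);
        }

        // Called from each feature's controller; returns true when usage looks worm-like.
        public boolean record(String feature) {
            long count = currentInterval.computeIfAbsent(feature, f -> new AtomicLong()).incrementAndGet();
            Long baseline = baselinePerInterval.get(feature);
            if (baseline == null) {
                return false;   // no trend data for this feature
            }
            // "1000% usage increase -> problem": current interval is more than 10x the trend.
            return count > baseline * 10;
        }

        // Reset at the end of each measurement interval (left to a scheduler in a real system).
        public void rollInterval() {
            currentInterval.clear();
        }
    }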

Case Study: Samy

  • MySpace Application Worm
  • XSS worm embedded in User Profile
  • Exponential Growth of Samy's friends

Samy vs AppSensor

  • AppSensor detects uptick in addFriend usage
  • Compares against trended info
  • Automatic response initiated
    • Alert admin +200% add friend usage
    • Alerts admin 2nd time +500% add friend usage
    • Automatically shuts down add friend feature
  • Result
    • Worm contained
    • Add friend temporarily disabled
    • Site stays up

Benefits of Trend Monitoring

  • Detection of
    • Application worms
    • Scripted attacks/probing
    • CSRF attacks
  • Alerting of excessive activity
  • Selective feature shutdown for overall stability

AppSensor in Action

  • Demo social networking app
  • Defended with AppSensor trend monitoring

What's Under the Hood?

  • REST communication between AppSensor & App
  • Support Response Actions (warn user, logout user, disable user, etc)

AppSensor Brain

  • Drools - Rule Based System
  • Support for complex rule sets - much more than just counting feature usage
  • Evaluates objects in Drools memory

The Exploit

  • XSS infects victim's "Status" with worm
  • CSRF adds victim as friend of Charlie

Defend with AppSensor

  • AppSensor Policy
    • Notify admin if events > 5
    • Disable service if events > 10
  • AppSensor notices anomaly - alerts admin
  • After 10 events AppSensor disables just that feature of the site
  • Users protected, worm contained, site stays up
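
The policy in that demo boils down to a counter with two thresholds.  A self-contained sketch of the escalation logic (thresholds from the slide, everything else invented) might look like this:

    import java.util.concurrent.atomic.AtomicInteger;

    public class EscalatingResponsePolicy {

        private final AtomicInteger suspiciousEvents = new AtomicInteger();
        private volatile boolean featureDisabled = false;

        // Apply the demo's policy to one suspicious event:
        // notify the admin past 5 events, disable just that feature past 10.
        public void onSuspiciousEvent(String feature) {
            int count = suspiciousEvents.incrementAndGet();
            if (count > 10) {
                if (!featureDisabled) {
                    featureDisabled = true;
                    System.out.println("Disabling feature: " + feature);   // stand-in for a real feature flag
                }
            } else if (count > 5) {
                System.out.println("ALERT admin: " + count + " suspicious events on " + feature);
            }
        }

        public boolean isFeatureDisabled() {
            return featureDisabled;
        }
    }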

Trend Monitoring Benefits

  • Auto detection of attacks
  • ...

23 Mar 2009

Assessing Your Web App Manually Without Hacking It

After giving my presentation on "Using Proxies to Secure Applications and More" at the TRISC 2009 conference, I decided to attend the presentation by Robert "RSnake" Hansen and Rob MacDougal entitled "Assessing Your Web App Manually Without Hacking It".  The gist of this presentation was that with a few simple tools (Web Developer Toolbar, NoScript, your web browser) you can spend about an hour looking at the characteristics of a web application in order to determine what types of vulnerabilities it may have and how many.  My notes on the presentation are below:

31 Oct 2008

Using Proxies to Secure Applications and More

I've been really surprised that for as long as I've been active with OWASP, I've never seen a proxy presentation.  After all, they are hugely beneficial in doing web application penetration testing and they're really not that difficult to use.  Take TamperData, for example.  It's just a Firefox plugin, but it does header, cookie, GET, and POST manipulation just as well as WebScarab.  Or Google Ratproxy, which works in the background while you browse around QA'ing your web site and gives you a nice actionable report when you're done.  I decided it was time to educate my peers on the awesomeness of proxies.

This past Tuesday I presented to a crowd of about 35 people at the Austin OWASP Meeting.  The title of my presentation was "Using Proxies to Secure Applications and More".  Since so many people came up to me afterward telling me what a great presentation it was and how they learned something they could take back to the office, I decided (with a little insistence from Ernest) that it was worth putting up on SlideShare and posting to the Web Admin Blog.

The presentation starts off with a brief description of what a proxy is.  Then I talked about the different types of proxies.  From there, the bulk of the presentation was just me giving examples and demonstrating the various proxies.  I included anonymizing proxies, reverse proxies, and intercepting proxies.  While my slides can't substitute for the actual demo, I did try to include in them what tool I used for each demo.  If you have any specific questions, please let me know.  All that said, here's the presentation.

25 Sep 2008

Best Practices Guide: Web Application Firewalls – OWASP AppSec NYC 2008

This presentation was by Alexander Meisel and is from a paper that was put together by the German OWASP chapter.  He began by introducing the problem: online businesses have HTTP as their "weak spot".  He then talked about the definition of the term "Web Application Firewall".  It's not a network firewall and not only hardware.  The targeted audience of the paper is technical decision makers, people responsible for operations and security, and application owners.  Next he talked about some of the characteristics of web applications with regard to security.  Prioritize web applications in regard to their importance (access to personal customer data, access to confidential company information, certifications).  Some technical aspects include test and quality assurance, documentation, and vendor contracts.

Where do WAFs fit into the web application security field?  WAFs are part of a solution.  Create a table with wanted functionality (CSRF, session fixation, *-Injection).  Do a rating/evaluation with "+" meaning it can be implemented very well using a WAF, "-" meaning it cannot be implemented, "!" meaning it depends on the WAF/application/requirement, and "=" meaning it can partly be implemented with a WAF.

He then looked at the benefits and risks of WAFs.  Good baseline security.  Compliance.  Just-in-time patching of problems.  Additional benefits (depending on functionality) could be central reporting and error logging, SSL termination, URL encryption, etc.

Some risks involved in using WAFs are false positives, increased complexity, having yet another proxy, and potential side effects if the WAF terminates the application.

Protection against the OWASP Top 10.  App vs WAF vs Policy.  Three types of applications: web application in design phase, already productive app which can easily be changed, and productive app which cannot be modified or only with difficulty.  Table of OWASP Top 10 in regards to work required with the 3 types of applications to fix the problem in the application itself, using a WAF, and using a policy.

Criteria for deciding whether or not to use WAFs.  Company-wide criteria include the importance of the application for the success of the company, number of web applications, complexity, operational costs, and performance and scalability.  Criteria with regard to the web application include changeability of the application, documentation, maintenance contracts, and time required to fix bugs in third-party products.  Consideration of financial aspects includes avoidance of financial damage via successful attacks and the costs of using a WAF (license, updates, project costs for evaluation and WAF introduction, volume of work required/personnel costs).

He started going pretty fast here since he was already running over on time.  The gist was a bunch of best practices for introduction and operation of web application firewalls.  He talked about technical requirements, job requirements, and an iterative procedure for implementation.

This presentation was mostly just an overview of what is in the paper and he didn't get into too much specifics.  Go check out the paper at https://www.owasp.org/index.php/Best_Practices:_Web_Application_Firewalls to get the details!