When I first put my name in the hat for the OWASP elections in the fall of 2013, I thought I knew what I was signing up for. I thought that my seven year history with the organization in a number of different roles (Chapter Leader, Chapter Committee Chair, AppSecUSA Chair) had me well prepared for the duties of an OWASP Board member. I told my wife that it wouldn't be a big deal, mostly something that I could do in my spare time while at work, and that it would feel good to be able to make a difference on a bigger scale than I'd done to date. I ran for the Board on a platform of wanting to support the growth of the OWASP chapters around the world and wanting to drive visibility, and ultimately buy-in, back to the community. I told myself that as passionate as I was with these things as a community member, it was time to either put up or shut up.
Here I am, six months later, as an elected member of the OWASP Board of Directors and I can honestly say that no prior experience could have prepared me for this. It's not a good thing or a bad thing, it's just very different than I expected. As a community member, I remember being at the AppSecUSA conferences and struggling with how to introduce myself to these "famous" OWASP Board Members. I was just a chapter leader struggling to come up with ideas to engage the Austin security community while these guys were literally trying to change the world. They were the figurative "Rock Stars" of my little security world. Needless to say, I see things a bit differently now, but it's probably not what you think.
When I look at my fellow Board members, I do still see those "Rock Stars". I can't even begin to tell you how much I look up to guys like Jim Manico for literally spending every day of his life trying to make the world more secure. I constantly have to tell myself that even though I don't consider myself a security rock star, the community saw something in me and put me on the Board for a reason and I continue to hold myself responsible for executing on the platform that I laid out in my election materials. But what I've come to realize now, that I didn't realize before my election, is that even though it feels the other way around, it's really the community, not the Board that holds the power in OWASP.
When I look back at the discussions that we've had as a Board over the past six months, other than setting strategic goals, the vast majority of our meetings have focused on operational and governance issues. Through this process, I have come to the realization that while extremely important to keeping OWASP, as a non-profit organization, afloat, this isn't the kind of exciting world-wide impact stuff I thought I had signed up for. As an example, my first two months as a Board member were spent in large part re-investigating a situation that a previous Board had closed the books on long ago. In the process of trying to help the individual involved, I was twice accused by that individual (and acquitted) of violating OWASP's Code of Ethics. Talk about gratitude. Since then, it seems like it's been putting out one small fire after another. More recently, I've spent many hours working with the Board and the Executive Director to deal with the fallout after an employee resigned from the organization, only to have members of our community question whether we, as an organization, did enough to keep them here, without knowing all of the details. It blows my mind how the Board can have unanimous support for an item, feel confident that it's in the best interest of the organization, and still be called into question as to whether we are somehow being underhanded in our decisions. It's like we sometimes forget that the Board is made up of seven people, from all over the world, with vastly different beliefs, desires, and even visions for OWASP. If you can get that many people, that diverse, on the same page, then there's something to be said for that.
So, I guess in a nutshell what I'm saying is that while I feel that it's quite the privilege to be serving on the OWASP Board alongside some of the people I respect most in this industry, there is definitely a part of me that feels like the stuff that OWASP does that has the most profound impact on global security isn't what we do on the Board, but rather, what the community does in our Chapters and Projects. The Board is there to support you, the community. To create the policies to make you successful. To provide the staff to make your lives easier so that you can spend your time doing things that accomplish OWASP's mission. In addition, I want to dispel any notion that the Board is some sort of an Ivory Tower. There should never be an "us vs them" mentality at OWASP because the Board is made up of people who have been, and in many cases still are, in the trenches right alongside the community. The Board, to put it simply, is just a group of Chapter Leaders, Project Leaders, and other members of our community who, like me, decided that it was time to put up or shut up. People who, for whatever reason, the community elected as our leaders to evangelize the OWASP mission and make the community that we hold near and dear to our hearts successful. To think that anyone would volunteer to be a Board member only to destroy our community is absurd. While I may not necessarily agree with everything my fellow Board members say or do, I have never questioned their loyalty to OWASP and I hope you don't either.
With all of the above having been said, I feel that it's also important to say that being an OWASP Board member is also an amazing opportunity to be a catalyst for change. Over the past six months the Board has stepped up to the task of driving visibility and control back to our community. We've instituted a new polling system that the Board has used to take the pulse of the community on key issues. Michael has taken on the responsibility of weekly calls with the community in order to keep them informed of key issues and allow them to provide feedback. And we are currently working on bringing back the committees under a new structure that will encourage participation and empower our leaders to take action. OWASP even won the SC Magazine Editor's Choice Award at this year's RSA Conference. Regardless of how you've felt about OWASP in the past, I feel quite strongly that the future for OWASP is so bright we're going to need a good pair of shades.
So, I'll end this post much the same way it began. The OWASP Foundation is currently accepting nominations for the OWASP Board of Directors. If you've ever felt passionate about Information Security or felt like you have big ideas to make OWASP a better community, then now is the perfect time to throw your hat into the ring as I did. I can't promise that it'll make you a security rock star. I can't even promise that the work is glamorous. My experience thus far has been countless hours of volunteer work with little appreciation for what gets done. But, what I can promise, is that OWASP is making the world a better place and the Board plays a vital role in making that happen. You, too, can be a catalyst for change.
I absolutely love my job and one of the coolest things about what I do is getting to do proof-of-concepts with bleeding edge technology. I feel very privileged that many companies out there respect me enough to provide me with these opportunities and I feel that engaging on this level enables me to be a better security practitioner because I routinely have my finger on the pulse of the latest and greatest tools out there. The problem that I run into, however, is when vendors present me "enterprise ready" tools that are clearly not enterprise ready. Maybe it's a cool concept, a working prototype, or even a functional system. The problem is that "enterprise ready" assumes so much more than just a product that does some stuff as advertised. To me, at least, it assumes a product that can be easily transitioned to the company's IT team for long-term support of the platform. Here are some signs to look out for that will tell you if the tool is truly ready for the big show:
- Installation Process: This one could honestly go either way. Personally, I prefer to have a product that I can install and configure myself. I cringe every time I hear a vendor mention to me that professional services are involved in an installation. I get it, sometimes a tool is so highly customized to your environment that you need help, but the majority of the products I use on a daily basis aren't that way. If installing a product requires a day of professional services time, then this should probably be your first signal to at least start looking out for the following additional signs.
- Initialization Script: I honestly feel a bit silly even having to mention this as I would assume this to be a standard part of any enterprise product, but it's not. If I have to poke around in the installation directory looking for the right script to run to start or stop your product, then it's not enterprise ready. Even worse, if it's a more complex product that requires starting multiple pieces and you don't have a single init script to handle the startup and shutdown in the proper order, then your product is not enterprise ready. If you're trying to sell me something to make my life as a security professional easier, then I should spend my time using your tool instead of figuring out how to start and stop it.
- Release Notifications: If I buy a product from you and I'm paying you for support, then, I'm typically doing so with the intention that I will be able to move to the next version once it is released. Maybe it's because there are bugs that need to be fixed or because there is new functionality, but whatever the reason, I want to know when that version becomes available. I'll talk a bit more about the upgrade process itself in the next bullet, but if the company does not have a way to notify you when a new release is available, be wary.
- Defined Upgrade Process: Have you ever used a tool that you thought was completely awesome until the first time that an upgrade rolled around? They tell you to copy these files over and it breaks. Now, run this script and it fails. You engage support and spend hours on the phone with them and then a week later they offer a WebEx where a support person will take care of the upgrade for you. I had to ditch a really interesting tool a while back for this very reason and I'm currently dealing with another one where every upgrade requires a support person to come onsite. It's a completely ineffective use of both my time and theirs. When I designed SimpleRisk, one of the first things I considered was how to make it as simple as possible for a remote person to upgrade the tool without assistance. I've at least got it down to copying some files and running a script, which anyone can do. Even better are the companies where it's click a button to upgrade. Better still are the companies that just automatically do the upgrade for you. In any case, be wary of any upgrade processes that are not well-defined.
- Backup Plan: This may not apply to all products or all scenarios, but it's a good idea when evaluating a product to ask yourself how you will back up the data and recover it if a disaster ever strikes. If the answer is "We'd just wipe and reinstall", then cool, but if the answer is "F*ck, I don't know", it may be worth having that discussion with the vendor.
- Monitoring: Nothing bothers me more than when I'm all excited to use my shiny new toy and when I go to log in it's down. In reality, I should know it's down when it happens because there's a high likelihood that the tool isn't doing what it's supposed to if it's not running. Ask your vendor what you should be monitoring in order to ensure that the tool is functioning properly. If they don't have a good answer for you, be wary.
- Product Roadmap: When you purchase a product, you purchase it not only for what it's capable of doing for you today, but also for the opportunities that it will provide you with tomorrow. Ask the vendor about their product roadmap to see if it's in-line with your vision of how you intend to use the product. Are there features that you can use down the line? More importantly, do they have plans to continue to invest in the platform that they are selling you, or is it just major bug fixes at this point while they move on to something else? If the vendor can't give you a straight answer to this question, then you may have problems.
Don't get me wrong. There are plenty of tools out there that fail one or more of these signs and that doesn't mean that you should completely avoid them, but you shouldn't expect to pay a premium for them either. Hopefully the vendor is being honest with themselves and labeling it as "Beta" while they work to iron these things out. If not, you should be honest with them about your willingness to accept a product that is not "enterprise ready". Perhaps you're willing to accept a little bit of pain for a smaller price tag. Maybe you want to be able to brag to your peers that you were the first to have that product hotness. Whatever the reason, just make sure that you are aware of what you're getting into up front.
I've been following Palo Alto as a networking company for a couple of years now. Their claim is that the days of the port-based firewall are dead and that their application-centric approach is a far better way to enforce your access controls. Take the HTTP protocol for example. HTTP typically runs as a service on port 80, but does that mean that everything running on port 80 is HTTP? As an attacker looking for a way to funnel data out of your organization, why not use the standard HTTP port to send data, since I know you leave it wide open in order for your employees to surf the web. There's nothing to say that I actually have to be running an HTTP server on the other end, and there's nothing in a classic firewall to tell it any differently. At first, I was admittedly a bit skeptical. I didn't think that you could really tell enough about different applications on the web to be able to separate them out like Palo Alto claims to. Fortunately, Palo Alto reached out to me and provided me with a brand new PA-200 in an attempt to change my mind.
When the PA-200 arrived, it came with everything that I would need to get it up and running. That includes the unit itself, a power supply, a DB9 to RJ45 console cable, an ethernet cable, and some instructions and warranty information.
On the front of the unit are four ethernet ports for your devices, a management port, a USB port, a console port, and several status indicator LEDs.
By default, the appliance is configured with ethernet ports 1 and 2 paired as a WAN to LAN link as this is the configuration that the majority of the people who buy it will likely use it for. That said, by following the instructions to connect your computer up to the management port, you can quickly access the user interface that allows you to change this assignment.
This shows the ethernet 1 and 2 interfaces as both being a "virtual wire" and here we can see the virtual wire that connects the two.
From here, we can take a look at the "zones" and see that our two interfaces have been defined as an untrusted (ethernet 1) and trusted (ethernet 2) zone.
To think of this a different way, my cable modem WAN connection (i.e. the Internet) goes in my "untrust" zone and my local network (i.e. LAN) goes in my "trust" zone. Now all that's left is to set our policy and for ease of management to start with, I set it to allow everything out with a default deny all inbound.
With this configuration I had done enough to be up and running on the device and I immediately started to see data populate the dashboard on the top applications running on my network.
It's color coded based on risk level and the dashboard also provides me a similar view of Top High Risk Applications. Any of these boxes can be clicked on in order to provide additional data about the protocol, sources, destinations, countries, and more.
Now, let me say that while I'm running this on my home internet connection, this thing is a hoss and can do way more than I can throw at it. With their App-ID technology enabled you can throw 100 Mbps of throughput at it no problem. In addition to being an application firewall, it also does standard port-based firewalling, VPN, routing, switching, and so much more. It's so extremely versatile that this thing could easily be placed in a smaller branch office and replace multiple other devices on its network such as a firewall, router, and VPN concentrator. More functionality for less money...who wouldn't want that? In addition to these default capabilities, additional licensing can also be obtained to allow you to do URL filtering, malware detection, and more. Having just gotten this up and running, I'm still exploring the ins and outs of all of the functionality, but it's pretty exciting to have all of this capability in a box that is smaller than the cable modem my ISP provides me. More posts to come on this as I get deeper into the guts of running my new Palo Alto PA-200!
The other day I read that Comcast is launching a new plan to turn home internet users into unwilling participants in their new global wifi strategy. I'm sure that they will soon be touting how insanely awesome it will be to get "full strength" internet access virtually anywhere just by subscribing to this service. Other than the issues with taking a service that the consumer already pays for and carving out their bandwidth for other people, the security practitioner in me can't help but wonder what the security ramifications of sharing an internet connection like this actually means. Combine this with the default access to your cable modem that your service provider already has, and it paints a very scary picture of network security for the home user. It is no longer sufficient (if it ever was) to rely on your cable modem for network access controls. Thus, I am advocating in favor of placing a personal firewall between your cable modem and your network for all home internet setups.
Now, it's not as bad as you may think. It doesn't have to be some crazy expensive piece of equipment like you'd purchase for a business. Even the basic home gateways come with the ability to do Network Address Translation (NAT) which effectively turns your internet connection into a one-way pipe. All I'm saying is that instead of plugging your network devices directly into the cable modem for Internet access, you should use your own hardware and draw a clear "line in the sand" between your equipment and theirs. In addition, I would advocate that you should no longer consider the wifi access provided by the cable modem device as safe and should use your own equipment for this access. In other words, treat anything on the WAN side of your home gateway/personal firewall as untrusted and protect against it accordingly.
The 2014 Verizon Data Breach Investigation Report (DBIR) is out and it paints quite the gloomy picture of the world we live in today where cyber security is concerned. With over 63,000 security incidents and 1,367 confirmed data breaches, the question is no longer if you get popped, but rather, when. According to the report, data export is second only to credit card theft on the list of threat actions as a result of a breach. And with the time to compromise typically measured in days and time to discovery measured in weeks or months, Houston, we have a problem.
I've written in the past about all of the cool tricks we've been doing to find malware and other security issues by performing NetFlow analysis using the 21CT LYNXeon tool and this time I've found another trick around data loss detection that I thought was worth writing about. Before I get into the trick, let's quickly recap NetFlow for those who aren't familiar with it.
Think of NetFlow as the Cliff's Notes of all of the network traffic that your systems handle on a daily basis. Instead of seeing WHAT data was transmitted (a task for deep packet inspection/DPI), we see the summary of HOW the data was transmitted. Things like source and destination IP, source and destination port, protocol, and bytes sent and received. Because many network devices are capable of giving you this information for free, it only makes sense to capture it and start using it for security analytics.
So, now we have our NetFlow and we know that we're going to be breached eventually, the real question becomes how to detect it quickly and remediate before a significant data loss occurs. Our LYNXeon tool allows us to create patterns of what to look for within NetFlow and other data sources. So, to help detect for data loss, I've designed the following analytic:
This analytic searches our NetFlow for any time an internal IP address is talking to an external IP address. Then, it adds up the bytes sent for each of these unique sets of connections (same source, destination, and port) and presents me with a top 25 list. Something like this:
So, now we have a list of the top 25 source and destination pairs that are sending data outside of our organization. There are also some interesting ports in this list like 12547, 22 (SSH), 443 (HTTPS), and 29234. A system with 38.48 GB worth of data sent to a remote server seems like a bad sign and something that should be investigated. You get the idea. It's just a matter of analyzing the data and separating out what is typical vs what isn't and then digging deeper into those.
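To make the idea concrete, here is what that aggregation might look like in plain Python. The flow field names and the "private source, globally routable destination" test are my assumptions for illustration, not LYNXeon's actual implementation.

```python
import ipaddress
from collections import defaultdict

def top_talkers(flows, n=25):
    """Sum bytes sent per (source, destination, port) for internal -> external
    flows and return the top N senders.

    `flows` is an iterable of dicts with keys "src", "dst", "dst_port", and
    "bytes_sent" (field names are assumed; real NetFlow exports vary).
    """
    totals = defaultdict(int)
    for f in flows:
        src = ipaddress.ip_address(f["src"])
        dst = ipaddress.ip_address(f["dst"])
        # Internal talking to external: private source, globally routable dest.
        if src.is_private and dst.is_global:
            totals[(f["src"], f["dst"], f["dst_port"])] += f["bytes_sent"]
    # Largest senders first, trimmed to a top-N watch list.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

Fed with a day's worth of exported flow records, a sketch like this produces the same kind of top-talkers list, ready to be eyeballed for destinations and ports that don't belong.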
My advice is to run this report on an automated schedule at least daily so that you can quickly detect when data loss has begun in order to squash it at the source. You could probably argue that an attacker might take a low and slow approach to remain undetected by my report, and you'd probably be right, but I'd also argue that if this were the case, then I've hopefully slowed them enough to catch them another way within a reasonable timespan. Remember, security is all about defense in depth and with the many significant issues that are highlighted by the Verizon DBIR, we could use all of the defense we can muster.
Today I did an interesting experiment that I thought was worth sharing with others. I tried to come up with a ten-item list of password/access management policies based on increasing levels of security. On my list, a "10" effectively means the most secure access management and password policies whereas a "0" effectively means nothing. Here is my list:
1. Open Authentication: Exactly like it sounds. No username or password required. If you can access the application, then you can get in.
2. Non-Limited Credentialed Authentication: The use of a username or some form of non-open authentication, but perhaps with a globally shared account or an extremely weak password. (ex: "password")
3. Weak Credentialed Authentication: The use of a unique username plus a basic password policy simply limiting the number of characters, but not the content. (ex: "google")
4. Complex Credentialed Authentication: The use of a unique username plus a basic password policy not only limiting the number of characters, but also requiring some level of complexity. This can be easily defeated using Supercharged Password Cracking Techniques. (ex: "G@@gl3")
5. Complex Non-Random Credentialed Authentication: The use of a unique username plus a password policy limiting the number of characters, requiring complexity, and checking against permutations of dictionary and commonly used passphrases, but supporting combinations of multiple dictionary words. (ex: "i@teAHors3")
6. Complex Random Lengthy Credentialed Authentication: The use of a unique username plus a password policy where passwords are truly randomly generated and are of a sufficient length to effectively be unbreakable. Something in the neighborhood of 20+ characters is pretty decent. This is the point where passwords become so complicated that the average user cannot remember them on their own and is forced to record them somewhere. (ex: "Hh#8qcFhe9H$#324dnakfB3q4fUY@*")
7. Non-Complex Two-Factor Authentication: The use of a unique username plus what is commonly referred to as "two factor" authentication. The use of "something you know", like a basic 6-digit PIN that is easy for you to remember, alongside "something you have" that is a unique and randomly seeded value. The key here is that the "something you have" part is something that changes on a very frequent basis such as every 60 seconds. (ex: "147246965201")
8. Complex Two-Factor Authentication: The use of a unique username plus two factor authentication where the "something you know" is a password that is more complex than a simple numeric and the "something you have" is unique, randomly seeded, and changing frequently like above. Perhaps the "something you have" is even something more than a simple numeric here as well. (ex: "H8n@m78an1vA")
9. Non-Complex Three-Factor Authentication: The use of a unique username, the "two factor" authentication piece in number seven above, plus a third "something you are" component. This is a unique biometric value such as a palm scan, fingerprint scan, or retinal scan.
10. Complex Three-Factor Authentication: The use of a unique username, the "two factor" authentication piece in number eight above, plus a third "something you are" unique biometric value.
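As a concrete illustration of the "truly randomly generated" level, here is a minimal sketch using Python's `secrets` module, which draws from the operating system's CSPRNG. The particular alphabet is my choice for illustration; trim it if a target system rejects certain symbols.

```python
import secrets
import string

def random_password(length=24):
    """Generate a truly random password of 20+ characters.

    Uses the OS CSPRNG via the `secrets` module rather than `random`,
    which is not suitable for security purposes.
    """
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

A password like this is effectively unguessable, but as noted above, nobody will remember it, so it needs to live in a password manager.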
I will admit that this list is dumbed down quite a bit as there are other factors that most certainly can weigh in here as well. For example, having a maximum password age of one day instead of ninety days can significantly change the amount of time that an attacker has to brute force an account. Other influencing factors could be the remembered password history and our account lockout policy. For the most part, though, these can be a component of any of the above policies.
Worth noting here is that as security increases, the user experience is likely decreasing due to increased complexity. The exception here may be as we move from number six to number seven on my list. The move to two-factor authentication should add enough constant change to our formula to allow us to have a simpler password while still making it more difficult to brute force the account. There is a trade-off of having to always carry a token retrieval device with you and having to handle the situation where a user is unable to access their token, but otherwise, this becomes the real sweet spot where usability meets security.
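To show why that constantly changing "something you have" value is so hard to brute force, here is a minimal time-based one-time password sketch in the style of RFC 6238, using a 60-second period to match the example above. This is for illustration only; a real deployment should use a vetted authentication library rather than hand-rolled crypto.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, period=60, digits=6, now=None):
    """Time-based one-time password (RFC 6238 style sketch).

    The code is derived from HMAC-SHA1 over the current time step, so it
    changes every `period` seconds and is useless to an attacker shortly
    after it is generated.
    """
    counter = int((now if now is not None else time.time()) // period)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian time step
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Pairing a short, memorable PIN with a value like this is what moves the needle on brute-force resistance without making the "something you know" part painful.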
What do you think? Is my list fairly accurate or am I way off base here? Am I missing something important? Please feel free to comment and provide your own list and/or feedback for mine.
A couple of years ago I decided, along with support from my management, that Enterprise Risk Management would become a focal point for my Information Security Program. I was convinced that framing vulnerabilities in the form of risks was essential to giving management visibility into issues they currently didn't know existed and to give our staff the comfort of knowing that the issues that caused them to lose sleep at night were now being considered for mitigation by management. I couldn't have been more right.
I began by collecting the risks submitted by each team in Excel spreadsheets and Word documents. They had all of the pertinent information like a subject, owner, risk assessment, etc, but very quickly I became a victim of my own success. Before I knew it, I had more risks than I could efficiently track in this format. First off, it was extremely cumbersome to try to manually maintain the risk index in Excel. While Excel is good at calculating formulas, it sucks at maintaining links to external documents. It can be done, but it requires quite a bit of manual effort to do so. Second, maintaining your risk details in Word documents is something you should reserve only for your worst enemies. They are difficult to update, difficult to track updates with, difficult to search and, well, just plain difficult. I thought to myself that there has to be a better way, yet, this is what the unfortunate majority out there are currently stuck with today.
After some research, it turns out that many years back, my company had another security professional who was interested in Enterprise Risk Management. Apparently, they had come to similar conclusions as I did with the Word documents and Excel spreadsheets, but they were able to get some internal development time to create a Lotus Notes based risk management database. It was everything that I needed, or so I thought, so I started to manually enter all of my new risks into this old risk management database. At first, things seemed to be working well. I had some different views into my data that would allow me to see way more information than I could before. I also had the ability for management of our various teams to be able to see their risks without involving me. It was much better, but soon I began to realize the limitations of this approach. The database itself was rigid. Changes required me to go through another internal team for resources and it often took a long time to make them. Also, any updates that were made didn't modify the current risks, only the ones submitted after that point. Once, I found myself opening and re-saving hundreds of risks just because I decided to change my risk calculation formula slightly. I began looking again for another way.
Soon, my new round of research brought me to a special set of tools called Governance, Risk, and Compliance or GRC for short. There are a number of such tools out there by well-respected companies such as EMC Archer and CA. They looked completely awesome and seemed to solve all of my problems with many more features to spare so I started to get some SWAG quotes from a few of the vendors. Lo and behold, these tools hold a price tag of $100k to half a million dollars and beyond. A request for budget for one of these tools was dismissed immediately with management literally laughing at my suggestion. OK, so maybe it was on me, right? Maybe I didn't do a good enough job of selling the tool? Maybe I didn't engage the right stakeholders to back my request? I guess you could call me a glutton for punishment, but I decided to keep trying. This time I gathered people I thought would be interested in risk from all different areas of our business for a demo of one of the tools. Trade Compliance, Health and Safety, Facilities, Legal, and many more. They watched the presentation, asked some fantastic questions, and ultimately left that meeting saying that they thought that a GRC solution was a fantastic idea. That was until I mentioned the price tag. If it wasn't going to happen even with the budget split between half a dozen different teams, then I knew it simply wasn't going to happen.
As I began to think about the situation that I was in, I realized that I wasn't alone in all this. I talked with friends at various state agencies, friends at risk consultancies, and friends at companies large and small. They had gone through the same trials and tribulations that I had and fared no better for the most part. Having spent the better part of the last decade coding random applications and websites in PHP and MySQL, I decided that there may be something that I could do about it. I would go home from work and start coding until the wee hours of the morning. I would wake up early on my weekends and start coding again until the family awoke. After several weeks of this, I had a working prototype for a new risk management system based on some simplifications of the NIST 800-30 risk management framework and running on my LAMP (Linux Apache MySQL PHP) stack. SimpleRisk was born.
At the time of this writing, I have released 7 official versions of SimpleRisk since March of this year. It has come a long way since then, but still holds true to its roots. SimpleRisk is free and open source. The methodology was designed to be as simple as possible, hence the name. A five step process walks you through the basics of risk management:
- Submit your risks
- Plan your mitigations
- Perform management reviews
- Prioritize for project planning
- Review regularly
It has every basic feature required of an enterprise risk management system, and I'm adding new ones all the time. It has five different ways to weight classic risk calculations (i.e. likelihood and impact) and can perform CVSS scoring as well. It has its own built-in authentication system, but I've built an extra module to do LDAP authentication that I'm giving away to anyone who donates $500 or more to the cause. It also has a half-dozen different ways to report on the risks, and many more reports should be complete soon. You can check out the demo (minus the Administrator interface) using the username "user" and password "user" at http://demo.simplerisk.org. Or, if you're ready to dive right in, you can obtain the download package for free at http://www.simplerisk.org.
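To give you a feel for the classic calculation, here's a rough sketch in Python of how likelihood and impact scores might be combined under a few different weighting schemes. This is illustrative only; the function and weighting names are my own shorthand, not SimpleRisk's actual code (which is written in PHP).

```python
def classic_risk(likelihood, impact, weighting="multiply"):
    """Combine 1-5 likelihood and impact scores into a 0-10 risk value.

    The weighting names are illustrative shorthand, not SimpleRisk's
    actual configuration options.
    """
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on a 1-5 scale")
    if weighting == "multiply":          # classic likelihood x impact
        return round(likelihood * impact * 10 / 25, 1)  # 1..25 -> 0..10
    if weighting == "average":           # both factors weigh equally
        return float(likelihood + impact)               # 2..10
    if weighting == "max":               # the worst factor dominates
        return float(max(likelihood, impact) * 2)       # 2..10
    raise ValueError("unknown weighting: " + weighting)
```

A likelihood of 5 and an impact of 5 scores a 10 under any of these schemes; where they differ is in how hard they punish a lopsided risk, such as a low-likelihood, high-impact event.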
In order to make your foray into SimpleRisk as simple as possible, I've created a SimpleRisk LAMP Installation Guide that you can use to have the tool up and running in about 30-60 minutes. And if all else fails and that proves too difficult or time consuming, then you should make your way to http://www.hostedrisk.com, where for a fraction of what it would cost to buy a GRC solution you will have your own dedicated SimpleRisk instance: running on hardware dedicated to you, built with security in mind, and including extra modules that are not part of the standard distribution. You'll never have to worry about installing or upgrading risk management software ever again. Hopefully you won't ever need this, but the option is always there in case you do.
My frustrations with a lack of efficient and cost-effective risk management tools led me to create one of my own. My hope is that by making SimpleRisk free and open source, it will benefit the rest of the security community as much as it has already benefited me. If you have any questions or requests for features that you would like to see included in the tool, I'm always here to help. SimpleRisk is simple, enterprise risk management, for the masses.
I am going to start out here by saying that I do not now, nor have I ever, held the title of Chief Information Security Officer (CISO). That having been said, I do effectively fill this role as the Information Security Program Owner for a large, $1B+ per year, public company. Some of what follows will be a diatribe on my current role and what I would change if given the opportunity. Some of it will be based on general observations of how I've seen other companies handle internal security. What follows are six reasons why your company needs a CISO.
Let's start out with how I got my current title. Early in my career I was a *nix Administrator working for a number of different companies. I did everything from working as support at a website hosting company to building systems as a military contractor. Even though my official title never had anything to do with security, I have always had a passion for it, so I always found a way to make it part of my job. Fast forward to about seven years ago, when I got a job as a Web Systems Engineer at my current employer. I quickly realized that there was nobody handling security for our systems, so I decided to shoulder that responsibility. I began by running Qualys scans, analyzing the results, and fixing the vulnerabilities. Since this was not my primary role, all of this work was performed in about 5% of my overall time, but I was able to keep good metrics and show fantastic results over time. After several years of working like this, I finally made the decision to dedicate myself to security full-time. I got a job offer from another company to be a Security Engineer, but decided to see if my current company was interested in allowing me a similar move. After some discussions and a few presentations on what the job would entail, I was officially allowed to spend 100% of my time on security. The only catch was that I was now responsible for our IT SOX testing as well. Now came the question of what to call my new role. A title starting with "C" is reserved for our executives, so that took CISO off the table immediately. Sad. I was also not allowed to use the term "Manager" in my title, since that indicated having people underneath me. Even sadder. I finally settled on "Information Security Program Owner" as it indicated an ownership role in security (as close to Manager as I could get) while staying away from those other non-sanctioned titles.
Alright, so what does any of that have to do with needing a CISO you ask? To start with, I was the only security professional in the entire company of roughly 5,000 employees at the time. While officially my purview was in the area of IT security for the enterprise, the lack of any other experts quickly made me a hot commodity. I was asked to participate on various architecture teams, several teams having to do with regulatory compliance, and even to consult with our R&D teams on product security from time to time. I'd like to believe that it was because I am so awesome that people couldn't get enough of me, but the God's honest truth is that the entire company had a need and desire for security and there wasn't anyone else to assist. Which leads me to the first reason why your company needs a Chief Information Security Officer:
Reason #1: By definition, the CISO is where the buck stops as far as security is concerned for your organization. It is the CISO's job to make sure that security is a concerted effort and that your efforts are not inefficiently duplicated across multiple business units. Without a CISO, you may have operational security, but you likely lack direction or a long-term plan for an actual security program.
Now, while my title says "he owns the security program", the fact is that I am not officially a manager or executive. Thus, on an official level, I pull about as much weight as any other individual contributor in the organization. It's a precarious position to be in. On one hand I'm charged with ensuring the security of everyone and everything in the company. Sometimes this can require being the bad guy and telling people their stuff is broken. On the other hand, I don't hold enough power to actually force any action that others don't actually want to take. Perhaps I'll write a future post about how I've managed to still get things done despite this dilemma, but for now this leads me to the second reason why your company needs a Chief Information Security Officer:
Reason #2: Designating one of your senior security resources as the CISO is a form of empowerment. You are making a statement that they are the person you trust to make informed security decisions for the organization. It helps if you can have them report to another C-level executive, like the CFO, but the most important thing here is the title: "Chief Information Security Officer" says that they are in charge of everything security (everything Information Security, if you want to get technical) for your organization. This helps tremendously in ensuring that security is still a priority when business turns political.
When you hear the title Chief Information Security Officer, what do you think of? Maybe the IT guy who handles the IPS system? The guy who goes running around when a system is infected with malware? Maybe even the guy who wrote the Information Security Policy, if you're lucky? Your CISO should be all of these things and so much more. This leads me to the third reason why your company needs a Chief Information Security Officer:
Reason #3: Your CISO is all things security. Wikipedia does a great job listing some of the many roles of the Chief Information Security Officer so I'm just going to steal them and list them here:
- Information Security and Information Assurance
- Information Regulatory Compliance (PCI, SOX, HIPAA, etc)
- Information Risk Management
- Supply Chain Risk Management
- Information Technology Controls
- Information Privacy
- Computer Emergency Response Team
- Identity and Access Management
- Security Architecture
- IT Investigations, Digital Forensics, and eDiscovery
- Disaster Recovery and Business Continuity Management
- Information Security Operations Center
Obviously one person cannot handle all of these things, which is why most companies have a team of security professionals (i.e. Information Security Officers) who report up to the CISO, but this should give you an idea of the wide scope of what the CISO is responsible for. Chances are that if you don't have a CISO, then many of these activities aren't happening. Even worse, the ones that are happening likely aren't aligned with your business objectives. It's tough to justify spending any money on a program when it performs activities ad hoc and completely separate from your business. Which leads me to the fourth reason why your company needs a Chief Information Security Officer:
Reason #4: Your CISO is a business executive that spans into the technical world of security as well. They should be involved in the business decisions of the company so that they can ensure that the company's security activities are well-aligned with the projects that the business is undertaking.
Hopefully, your senior security professional is an extremely valued member of your team. If you are holding off on giving them an official Chief Information Security Officer title, then you are doing both them and your company a disservice. Security companies are constantly organizing events targeted at these executives who control the security purse strings. Sometimes they call them CISO Roundtables, Summits, or otherwise, but the gist is that they are a form of education for the CISO and provide them with the opportunity to network with other security professionals in the area, all on somebody else's dime. The catch is that you're only invited if you're a CISO. This leads me to the fifth reason why your company needs a Chief Information Security Officer:
Reason #5: The title of CISO is synonymous with "the person in charge of security" for your company and worlds of opportunity open up for them when you bestow upon them that title. It means free lunches, free trainings, and a host of other perks that unfortunately aren't available with a title like "Security Manager" or "Senior Security Engineer". Think of it as a job perk that doesn't cost your company a thing.
Before I wrap this up, I have one final reason why your company needs a Chief Information Security Officer, but it's certainly not for everyone. Occasionally, you'll find a person who is both technically talented and has an affinity and desire for public speaking. If this is your senior security person, then it's time to lock them down, as they have the ability to do more positive marketing for your company than your entire marketing department. This leads me to the sixth and final reason why your company needs a Chief Information Security Officer:
Reason #6: If your CISO is willing and able to give engaging talks about security-related topics, then that person, with that title, can make a world of difference for your organization from a marketing perspective. Conferences are always looking for new and interesting talks and attendees often consider the speakers as industry luminaries. No marketing whitepaper will ever come close to the exposure potential of having your own industry expert, presenting on a fantastic topic, using a company branded slide deck, in front of hundreds of security professionals.
There you have my six reasons why your company needs a Chief Information Security Officer. I hope that this was helpful in your search for becoming or designating your company's ultimate CISO. Feel free to add your own thoughts in the comments below.
Let's say that you go to the same restaurant at least once a week for an entire year. The staff is always friendly, the menu always has something that sounds appealing, and the food is always good enough to keep you coming back for more. The only real drawback is that it usually takes a solid half-hour to get your food, but you've learned to find something else to do while you're waiting because it's always been worth the wait. Today you go into the same restaurant, but now the staff goes out of their way to serve you, the menu has twice as much selection as before, the food is literally the best thing you've ever tasted, and it was on your table just the way you like it within 30 seconds of placing your order. This is my initial impression of the newly released version of 21CT's LYNXeon software (version 2.29).
I'll be honest. Before we upgraded to the new version I had mixed feelings. On one hand, I loved the data that the LYNXeon platform was giving me. The ability to comb through NetFlow data and find potentially malicious patterns in it was unlike any other security tool that I've experienced. On the other hand, the queries sometimes ran for half an hour or more before I had any results to analyze. I learned to save my queries for when I knew my computer would be sitting idle for a while. It was a burden that I was willing to undertake for the results, but a burden nonetheless. We upgraded to LYNXeon 2.29 less than a week ago, but already I can tell that this is a huge leap in the right direction for 21CT's flagship network pattern analysis software. Those same queries that used to take 30 minutes now take 30 seconds or less to complete. The reason is a massive overhaul of the database layer of the platform. By switching to a grid-based, column-oriented database structure for storing and querying data, the product was transformed from a pack mule into a thoroughbred.
Enhanced performance wasn't the only feature that found its way into the 2.29 release. They also refactored the way that LYNXeon consumes data. While the old platform did a fairly good job of consuming NetFlow data, adding other data sources to your analytics was a challenge to say the least, usually requiring custom integration work to make it happen. The new platform has added the concept of a connector, with new data types and a framework around how to ingest these different types of data. It may still require some assistance from support in order to consume data types other than NetFlow, but it's nowhere near the level of effort it was before the upgrade. We were up and running with the new version of LYNXeon, consuming NetFlow, IPS alerts, and alerts from our FireEye malware prevention system, in a few hours. The system is capable of adding DNS queries, HTTP queries, and much more. What this amounts to is that LYNXeon is now a flexible platform that allows you to consume data from many different security tools and then visualize and correlate it all in one place. Kinda like a SIEM, but actually useful.
As with any tool, I'm sure that LYNXeon 2.29 won't be without its share of bugs, but overall the new platform is a huge improvement over the old one, and with what I've seen so far I gotta say that I'm impressed. 21CT is undoubtedly moving in the right direction and I'm excited to see what these guys do with the platform going forward. That's my first impression of the 21CT LYNXeon 2.29 release.
Last year I gave a talk at a number of different conferences called "The Magic of Symbiotic Security: Creating an Ecosystem of Security Systems" in which I spoke about how, if we can break our security tools out of their silos, they become far more useful. Lately, I've been doing a lot of work at my company in identifying systems infected by malware and getting rid of the infections because, as you are hopefully aware, the presence of malware on your systems is equivalent to hackers on your network. Malware can give the controller backdoor access to the system, allow them to scan the network for other devices to compromise, give them a platform to launch additional attacks from, and enable them to exfiltrate data out of the network. I have a few different tools which I'll highlight later that do some really cool things on their own, but when you combine their functionality together, you open up a whole new world of possibilities.
The first tool that I wanted to talk about is for malware analysis. In our case this is FireEye, but this could just as easily be Damballa, Bit9, or any other technology that will allow you to identify IP addresses of hosts infected by malware, servers hosting malware objects, and command and control servers. Alone, this tool identifies a single client-to-server relationship, but it does provide a pattern that we can use as a template to find similar issues in our environment where perhaps we do not have coverage with this device. Now that we have identified the patterns that we are looking for, we need to find a way to discover additional instances of those patterns. This brings me to our second tool.
The second tool is for NetFlow analysis. In case you are unfamiliar with NetFlow, it is a feature of most network devices that creates summary information about the network activity that is running through them. It includes the source and destination IP addresses, source and destination ports, protocols, and bytes transferred. Specifically, we need a NetFlow analysis tool that is capable of showing us connections between our internal systems and systems on the Internet. In our case, we use a product called LYNXeon to do this. Alone, LYNXeon does a good job of allowing us to visualize connections from one system to another, but finding the systems related to malware issues can often be a needle-in-a-haystack exercise, because NetFlow is summary data with no indication of which connections are malicious. So while our malware connections (downloads and command-and-control) are buried in the NetFlow data, we really have no way to identify them in the NetFlow tool silo.
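If you've never looked at raw NetFlow, a single flow record boils down to just a handful of fields. Here's a minimal sketch in Python (not LYNXeon's internal representation, just an illustration) of a flow record and the basic internal-to-Internet filter described above; the 10.0.0.0/8 internal network is an assumption about your addressing.

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """The summary fields a typical NetFlow record carries."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: str
    bytes: int

def internal_to_external(flows, internal_net="10.0.0.0/8"):
    """Yield flows that leave an internal host for a host on the Internet."""
    net = ipaddress.ip_network(internal_net)
    for flow in flows:
        src_internal = ipaddress.ip_address(flow.src_ip) in net
        dst_internal = ipaddress.ip_address(flow.dst_ip) in net
        if src_internal and not dst_internal:
            yield flow
```

That's all the raw material we have to work with: no payloads, no signatures, just who talked to whom and how much.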
Now comes the fun part. One of the cool things about the FireEye system is that it provides us with the ability to export data and one of the cool things about the LYNXeon system is that it provides us with the ability to import data and tag it. So what we do is, in FireEye, we export the list of all systems that we have detected as having been infected by malware. We also export the list of all of the command and control servers and malware hosting servers that we have seen connections to. Next, we go into LYNXeon and tell it to import these two lists of IP addresses and tag them with a custom tag that we created called "FireEye". We have now successfully combined these two tools and the payoff is huge.
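The export/import/tag workflow itself is just list handling. As a rough sketch (the function names and dict-based flow records below are my own illustration, not FireEye's or LYNXeon's actual APIs), the tagging and the payoff query look something like this in Python:

```python
def tag_ips(tag_store, ip_list, tag):
    """Tag each IP address from an exported list, e.g. with "FireEye"."""
    for ip in ip_list:
        tag_store.setdefault(ip.strip(), set()).add(tag)
    return tag_store

def flows_touching_tag(flows, tag_store, tag):
    """Return flows whose source or destination carries the given tag."""
    return [f for f in flows
            if tag in tag_store.get(f["src_ip"], ())
            or tag in tag_store.get(f["dst_ip"], ())]
```

Import the infected-host list and the C2/malware-server list, tag both as "FireEye", and every flow that touches a tagged address lights up in your NetFlow data.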
Success #1: Detecting the Spread of Malware on Your Network
Our FireEye system works by executing downloads inside of a virtual machine and analyzing the effect they have on the system. Because the virtual machine doesn't always match the target system, in many cases we are only able to tell that it was malware, not that the malware actually infected the system. Using LYNXeon, however, we can create special queries that will show us all connectivity from the potentially infected system after the time of the malware download. Did the system immediately make connections to other foreign systems on the Internet? Did it start scanning our internal network looking for other hosts to compromise? All this and more is possible now that we have identified a potentially infected system on our network. Here is a pattern file which I created in LYNXeon to do this:
And here is the pattern diagram which this query accomplishes:
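The logic behind that pattern is simple enough to sketch in Python (the dict-based flow records and field names are my own illustration, not LYNXeon's pattern syntax):

```python
from datetime import datetime

def post_infection_activity(flows, suspect_ip, detected_at):
    """Connections initiated by the suspect host after the malware download.

    A burst of new foreign destinations or internal scanning right after
    the detection time is a strong hint that the infection took hold.
    """
    return [f for f in flows
            if f["src_ip"] == suspect_ip and f["time"] > detected_at]
```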
Success #2: Finding Other Infected Systems
FireEye appliances aren't free and with offices in over 40 countries around the world getting full coverage can get expensive. But, if we can use a handful of appliances to get an idea of where our systems are talking to when compromised, then we have data which we can turn around and use in places where we do not have those appliances. Because we are sending NetFlow data from our devices around the world into LYNXeon, we can search for any connections to these common malware servers. No more needle in a haystack. The data is all there, we just needed to know how to look for it. Here is a pattern file which I created in LYNXeon to do this:
And here is the pattern diagram which this query accomplishes:
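In place of the pattern file itself, here's the gist of this query in Python (an illustrative sketch with made-up field names, not LYNXeon syntax):

```python
def likely_infected_hosts(flows, c2_ips):
    """Internal hosts seen connecting to known C2 or malware-hosting servers.

    Any host that talks to an address on the FireEye-exported list is a
    candidate infection, even in offices with no FireEye appliance.
    """
    bad = set(c2_ips)
    return sorted({f["src_ip"] for f in flows if f["dst_ip"] in bad})
```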
Success #3: Discovering Other Types of Attacks
Oftentimes our adversaries aren't just trying one type of attack and giving up when it fails. They are using every trick in their arsenal to gain and maintain a foothold on your network with whatever method they can. Once we've identified an attacker's IP address, we can use our NetFlow data to see all other traffic coming from that IP address. Expanding these types of relationships can often shed light on other activities they are performing on your network. Perhaps they are performing reconnaissance on your servers? Maybe they are trying to DoS one of your systems? The fact is that once they've been uncovered as a bad guy on your network, you should be wary of all activities performed by them. Maybe even ban their IP address altogether. Here is a pattern file which I created in LYNXeon to do this:
And here is the pattern diagram which this query accomplishes:
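One more sketch of the query logic in Python (again with illustrative field names rather than the real pattern file): grouping an attacker's flows by destination port makes the difference between scanning and a DoS attempt jump right out.

```python
from collections import Counter

def attacker_activity_by_port(flows, attacker_ip):
    """Count an attacker's flows by destination port.

    A long tail of distinct ports suggests reconnaissance scanning;
    a flood of flows to a single port looks more like a DoS attempt.
    """
    return Counter(f["dst_port"] for f in flows
                   if f["src_ip"] == attacker_ip)
```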
So there you have it. By combining our malware analysis using FireEye and our NetFlow analysis using LYNXeon, we have created a hybrid system capable of far more than either of these tools by themselves. This is the magic of symbiotic security in action. Our tools become infinitely more powerful when we are able to share the data between them. Hopefully you will take that into consideration the next time you are looking at purchasing a security tool.