With an estimated Powerball jackpot of $1.5B, everybody is talking about it right now. I've got my tickets, but with a prize that big, I'm having an easier time listing the things that I can't buy with the money than the things that I can. That said, I keep coming back to a concept that statisticians refer to as "expected value", often called the "expected return". What it means is that even though your odds of winning the Powerball aren't changing (roughly 1 in 292 million), the bigger the jackpot, the bigger the expected return. To calculate it, just multiply the odds of winning by the reward value.
1/292,000,000 x $1.5B ≈ $5.14
So, before you factor in taxes, the possibility of splits, and other factors that might affect your reward, the expected return is something like $5.14 on a $2 ticket. Not too bad. Factoring in the cash option ($930M), a 40% Federal tax, an extra 25-28% withholding from the IRS for gambling winnings, and perhaps even state taxes, however, this number drops well below the $2 range, making it far less appealing to buy a ticket. I found interesting articles in Business Insider and Wired that elaborate further on this idea.
The same concept can be applied to poker in what players refer to as "pot odds". You take the odds of a winning hand and multiply them by the size of the pot. In poker, oftentimes players are more willing to play what is normally a statistically losing hand if it gives them a shot at more money (i.e., a bigger pot).
So what does this have to do with SimpleRisk and risk management? The classic formula for calculating risk is RISK = LIKELIHOOD x IMPACT, which is the exact same formula used for calculating expected return and pot odds. You're simply taking the likelihood of an event happening, regardless of whether it is a positive or negative outcome, and multiplying it by the estimated dollar value of that outcome. Pretty simple. And that is what Powerball, Poker, and SimpleRisk have in common.
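To make the parallel concrete, here is a minimal Python sketch of the shared formula. The Powerball figures are the ones from above; the poker pot and the risk scenario are hypothetical numbers of my own, purely for illustration:

```python
def expected_value(likelihood, impact):
    """The shared formula: likelihood times impact."""
    return likelihood * impact

# Powerball: roughly 1-in-292-million odds at a $1.5B jackpot
powerball = expected_value(1 / 292_000_000, 1_500_000_000)
print(f"Powerball expected return: ${powerball:.2f}")  # about $5.14

# Poker pot odds (hypothetical): a 1-in-5 shot at a $100 pot
pot_odds = expected_value(1 / 5, 100)
print(f"Pot odds value: ${pot_odds:.2f}")

# Risk (hypothetical): a 10% annual likelihood of a $250,000 loss event
risk = expected_value(0.10, 250_000)
print(f"Annualized risk: ${risk:,.2f}")
```

Same function, three domains; only the inputs change.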
I have been an active member of the OWASP community in some form since 2007. I've been the OWASP Austin Chapter Leader, served as the Chair of the Global Chapters Committee, and, most recently, was elected (and re-elected) to the OWASP Board of Directors. In the past, I have heard a number of people in our community compare the Board to an "Ivory Tower". They would say that Board members were unapproachable and preferred to let others do the work while they pulled the strings of the Foundation from behind the scenes. I think there may be some truth to that statement, but I told myself when I ran for the Board that I wouldn't be like that. I want people to feel like I am out there actively trying to solve the problems of our community. Case in point, in my 23 months on the OWASP Board, I have proposed more Bylaw changes and new policies than anybody else.
As I continue to try to be a man of action, I find that I am often one of the first Board members on the scene in times of crisis. On multiple occasions I've shunned the historical "Ivory Tower" approach to managing the organization and dove headfirst into the situation at hand. My assumption has been that I was elected because I have an opinion, not in spite of it. In each case I've tried to present a clear and concise analysis of my view of the situation. I've tried to offer up suggestions on next steps or provide data points that others may not have been aware of. Being such a diverse community has many strengths; however, one weakness is that it is difficult to drive to a consensus on anything. It really doesn't matter what side of the issue you are on; it always seems like some will agree with you and others will not. Frequently, what begins as a friendly and spirited debate ends with somebody feeling marginalized because a decision was made that they did not agree with. It's sad when this happens, but it is inevitable when you mix passionate people with issues that do not have binary answers.
This leads me back to the "Ivory Tower" dilemma. If my desire is to actively be a part of the community, then I place myself directly in a position of potential conflict when I speak. I'm not allowed to speak as Josh, the community member, because the perception is that I am always speaking with my Board member hat on. And I have a strong feeling that this perception of Board members speaking authoritatively is what leads a person on the other side to feel marginalized. Definitely not intended, at least on my part, but that's what I've started to gather from some of the feedback that I've received. So if that's the case, then I begin to wonder if the situation would have been better off had I held my tongue and refrained from jumping into the discussion in order to let our community continue to fight it out or to let another Board member, our Executive Director, or somebody else communicate the Board's analysis and actions. But, if I do that, aren't I now perpetuating the stereotype of the OWASP Board being an "Ivory Tower"?
I'm not sure that there is a right or wrong answer here and nobody said that being a Board member would be easy, but I can't say that I ever expected to need to give up my personal voice with the community (the one that likely got me elected to the Board in the first place) in order to serve the Board. That said, it genuinely saddens me when an extremely valued OWASP volunteer feels the need to leave in order to make a point. It is a huge loss for the OWASP Foundation and one that I, regrettably, played a role in provoking. I don't apologize for my stance on the issue that was being debated. I feel that we should all be allowed to have an opinion and I still support the actions of the Board thus far. That said, if I could take back my words, crawl back into that "Ivory Tower", and let someone else do the talking in this particular situation, I'm sorry to say that I would.
Johanna, I'm sorry that it turned out this way. You may not believe it, but I sincerely respect and appreciate what you have done for the OWASP Foundation more than words can express. You have brought order where there was chaos and a dedication to the cause that was matched only by your intellect. I feel that we don't have to always agree on a vision in order for me to appreciate your perspective. I regret that I never conveyed that to you before now. I'm sorry.
Even though the Drobo is supposed to be a pretty rock-solid tool for backing up your files, there are still plenty of reasons why one would want to keep a copy of those files elsewhere, just in case. For example, what would happen if there were a fire and your Drobo was damaged? Are you OK with losing everything? I've even heard about the rare case where the Drobo drives get out of sync and a complete reformat is necessary, causing you to lose everything. To prevent this, it is a good idea to install the Crashplan Drobo app and ensure that a copy of your data is recoverable, even if the worst-case scenario happens with your Drobo.
If you do as I mention above, chances are that things will work well for a while and then suddenly, one day, you will find that Crashplan is no longer running on your Drobo. Despite multiple attempts to start it back up, you will inevitably find yourself staring at a message saying either "crashplan is enabled and stopped" or "crashplan is disabled and stopped" and will be clueless, like I was, about how to fix it. The good news is that after months of struggling with this, I finally came across a post on the DroboSpace forums from the guy who packages the Crashplan application for Drobo. It was a bit cryptic at first, but eventually I was able to interpret what he was saying and I wanted to share it with everyone in more layman's terms.
The underlying issue here is that Crashplan is configured to automatically upgrade itself on the Drobo. When this happens, it downloads the replacement files and runs the upgrade script. Unfortunately, the Crashplan team does not write the upgrade script to work in the BusyBox environment (the one that runs on your Drobo) and the script breaks. By tweaking the script ever so slightly, you can get it to run the upgrade, and Crashplan will once again start up on your Drobo. Here are the steps to do it:
1. SSH into your Drobo with the command "ssh -L 4201:localhost:4243 root@[your Drobo IP]"
2. Take a look at the /tmp/DroboApps/crashplan/log.txt file and you'll probably see a message saying something like "Error: Could not find or load main class com.backup42.service.CPService"
3. Go to the crashplan upgrade directory with the command "cd /mnt/DroboFS/Shares/DroboApps/crashplan/app/upgrade"
4. Here you will see a directory whose name looks like a random value, such as "1388642400364.1415587076262". I believe you should see a new one of these directories for each version you are upgrading to. Change to that directory using the command "cd 1388642400364.1415587076262", substituting whatever directory name you see.
5. Edit the upgrade.sh script inside that directory. You want to change the "rm -fv" line to "rm -f" and the "mv -fv" line to "mv -f". You will also want to search for the two lines that start with "/bin/ps -ef" and change them to use "/bin/ps w" instead. Save the file.
6. Change the permissions on the upgrade.sh script to make it executable with the command "chmod +x upgrade.sh".
7. Run the upgrade.sh script with the command "./upgrade.sh".
When the script completes, you should be back at the terminal prompt. From here, you can go back to the /mnt/DroboFS/Shares/DroboApps/crashplan directory and try starting Crashplan using the command "./service.sh start". Check that it shows "enabled and running" with the command "./service.sh status". You may have to run through steps 4-7 multiple times depending on how many upgrades behind you are, but when all is said and done, you should be back up and running with Crashplan on your Drobo. Good luck!
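If you'd rather not hand-edit upgrade.sh every time an upgrade breaks, the same three substitutions can be scripted. Here's a small Python sketch of the edits; the sample text and any paths are illustrative, so point it at whatever numbered directory you find under the upgrade folder:

```python
def patch_upgrade_script(text):
    """Apply the BusyBox-compatibility edits described in the steps above."""
    replacements = [
        ("rm -fv", "rm -f"),           # drop the -v flag from rm
        ("mv -fv", "mv -f"),           # drop the -v flag from mv
        ("/bin/ps -ef", "/bin/ps w"),  # BusyBox ps takes "w", not "-ef"
    ]
    for old, new in replacements:
        text = text.replace(old, new)
    return text

# Demonstration on a hypothetical fragment of upgrade.sh
sample = "/bin/ps -ef | grep CPService\nrm -fv old.jar\nmv -fv new.jar lib/\n"
print(patch_upgrade_script(sample))
```

To patch the real script, read the file, run it through the function, and write it back (then chmod +x and run it, per steps 6 and 7).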
When I first put my name in the hat for the OWASP elections in the fall of 2013, I thought I knew what I was signing up for. I thought that my seven-year history with the organization in a number of different roles (Chapter Leader, Chapter Committee Chair, AppSecUSA Chair) had me well prepared for the duties of an OWASP Board member. I told my wife that it wouldn't be a big deal, mostly something that I could do in my spare time while at work, and that it would feel good to be able to make a difference on a bigger scale than I'd done to date. I ran for the Board on a platform of wanting to support the growth of the OWASP chapters around the world and wanting to drive visibility, and ultimately buy-in, back to the community. I told myself that as passionate as I was about these things as a community member, it was time to either put up or shut up.
Here I am, six months later, as an elected member of the OWASP Board of Directors, and I can honestly say that no prior experience could have prepared me for this. It's not a good thing or a bad thing; it's just very different than I expected. As a community member, I remember being at the AppSecUSA conferences and struggling with how to introduce myself to these "famous" OWASP Board Members. I was just a chapter leader struggling to come up with ideas to engage the Austin security community while these guys were literally trying to change the world. They were the figurative "Rock Stars" of my little security world. Needless to say, I see things a bit differently now, but it's probably not what you think.
When I look at my fellow Board members, I do still see those "Rock Stars". I can't even begin to tell you how much I look up to guys like Jim Manico for literally spending every day of his life trying to make the world more secure. I constantly have to tell myself that even though I don't consider myself a security rock star, the community saw something in me and put me on the Board for a reason and I continue to hold myself responsible for executing on the platform that I laid out in my election materials. But what I've come to realize now, that I didn't realize before my election, is that even though it feels the other way around, it's really the community, not the Board that holds the power in OWASP.
When I look back at the discussions that we've had as a Board over the past six months, other than setting strategic goals, the vast majority of our meetings have focused on operational and governance issues. Through this process, I have come to the realization that while this work is extremely important to keeping OWASP, as a non-profit organization, afloat, it isn't the kind of exciting world-wide impact stuff I thought I had signed up for. As an example, my first two months as a Board member were spent in large part re-investigating a situation that a previous Board had closed the books on long ago. In the process of trying to help the individual involved, I was twice accused by that individual (and acquitted) of violating OWASP's Code of Ethics. Talk about gratitude. Since then, it seems like it's been putting out one small fire after another. More recently, I've spent many hours working with the Board and the Executive Director to grapple with the resignation of an employee, only to have members of our community question whether we, as an organization, did enough to keep them here, without knowing all of the details. It blows my mind how the Board can have unanimous support for an item, feel confident that it's in the best interest of the organization, and still be called into question as to whether we are somehow being underhanded in our decisions. It's like we sometimes forget that the Board is made up of seven people, from all over the world, with vastly different beliefs, desires, and even visions for OWASP. If you can get that many people, that diverse, on the same page, then there's something to be said for that.
So, I guess in a nutshell what I'm saying is that while I feel that it's quite the privilege to be serving on the OWASP Board alongside some of the people I respect most in this industry, there is definitely a part of me that feels like the stuff that OWASP does that has the most profound impact on global security isn't what we do on the Board, but rather, what the community does in our Chapters and Projects. The Board is there to support you, the community. To create the policies to make you successful. To provide the staff to make your lives easier so that you can spend your time doing things that accomplish OWASP's mission. In addition, I want to dispel any notion that the Board is some sort of an Ivory Tower. There should never be an "us vs them" mentality at OWASP because the Board is made up of people who have been, and in many cases still are, in the trenches right alongside the community. The Board, to put it simply, is just a group of Chapter Leaders, Project Leaders, and other members of our community who, like me, decided that it was time to put up or shut up. People who, for whatever reason, the community elected as our leaders to evangelize the OWASP mission and make the community that we hold near and dear to our hearts successful. To think that anyone would volunteer to be a Board member only to destroy our community is absurd. While I may not necessarily agree with everything my fellow Board members say or do, I have never questioned their loyalty to OWASP and I hope you don't either.
With all of the above having been said, I feel that it's also important to say that being an OWASP Board member is an amazing opportunity to be a catalyst for change. Over the past six months the Board has stepped up to the task of driving visibility and control back to our community. We've instituted a new polling system that the Board has used to take the pulse of the community on key issues. Michael has taken on the responsibility of weekly calls with the community in order to keep them informed of key issues and allow them to provide feedback. And we are currently working on bringing back the committees under a new structure that will encourage participation and empower our leaders to take action. OWASP even won the SC Magazine Editor's Choice Award at this year's RSA Conference. Regardless of how you've felt about OWASP in the past, I feel quite strongly that the future for OWASP is so bright we're going to need a good pair of shades.
So, I'll end this post very similar to how it began. The OWASP Foundation is currently accepting nominations for the OWASP Board of Directors. If you've ever felt passionate about Information Security or felt like you have big ideas to make OWASP a better community, then now is the perfect time to throw your hat into the ring as I did. I can't promise that it'll make you a security rock star. I can't even promise that the work is glamorous. And my experience, thus far, has been that it's been countless hours of volunteer work with little appreciation for what gets done. But, what I can promise, is that OWASP is making the world a better place and the Board plays a vital role in making that happen. You, too, can be a catalyst for change.
I absolutely love my job, and one of the coolest things about what I do is getting to do proof-of-concepts with bleeding edge technology. I feel very privileged that many companies out there respect me enough to provide me with these opportunities, and I feel that engaging on this level enables me to be a better security practitioner because I routinely have my finger on the pulse of the latest and greatest tools out there. The problem that I run into, however, is when vendors present me with "enterprise ready" tools that are clearly not enterprise ready. Maybe it's a cool concept, a working prototype, or even a functional system. The problem is that "enterprise ready" assumes so much more than just a product that does some stuff as advertised. To me, at least, it assumes a product that can be easily transitioned to the company's IT team for long-term support of the platform. Here are some signs to look out for that will tell you if the tool is truly ready for the big show:
- Installation Process: This one could honestly go either way. Personally, I prefer to have a product that I can install and configure myself. I cringe every time I hear a vendor mention to me that professional services are involved in an installation. I get it, sometimes a tool is so highly customized to your environment that you need help, but the majority of the products I use on a daily basis aren't that way. If installing a product requires a day of professional services time, then this should probably be your first signal to at least start looking out for the following additional signs.
- Initialization Script: I honestly feel a bit silly even having to mention this as I would assume this to be a standard part of any enterprise product, but it's not. If I have to poke around in the installation directory looking for the right script to run to start or stop your product, then it's not enterprise ready. Even worse, if it's a more complex product that requires starting multiple pieces and you don't have a single init script to handle the startup and shutdown in the proper order, then your product is not enterprise ready. If you're trying to sell me something to make my life as a security professional easier, then I should spend my time using your tool instead of figuring out how to start and stop it.
- Release Notifications: If I buy a product from you and I'm paying you for support, then I'm typically doing so with the intention that I will be able to move to the next version once it is released. Maybe it's because there are bugs that need to be fixed or because there is new functionality, but whatever the reason, I want to know when that version becomes available. I'll talk a bit more about the upgrade process itself in the next bullet, but if the company does not have a way to notify you when a new release is available, be wary.
- Defined Upgrade Process: Have you ever used a tool that you thought was completely awesome until the first time that an upgrade rolled around? They tell you to copy these files over, and it breaks. Now run this script, and it fails. You engage support, spend hours on the phone with them, and then a week later they offer a WebEx where a support person will take care of the upgrade for you. I had to ditch a really interesting tool a while back for this very reason, and I'm currently dealing with another one where every upgrade requires a support person to come onsite. It's a completely ineffective use of both my time and theirs. When I designed SimpleRisk, one of the first things I considered was how to make it as simple as possible for a remote person to upgrade the tool without assistance. I've at least got it down to copying some files and running a script, which anyone can do. Even better are the companies where it's click a button to upgrade. Better still are the companies that just automatically do the upgrade for you. In any case, be wary of any upgrade processes that are not well-defined.
- Backup Plan: This may not apply to all products or all scenarios, but it's a good idea when evaluating a product to ask yourself how you will back up the data and recover it if a disaster ever strikes. If the answer is "We'd just wipe and reinstall", then cool, but if the answer is "F*ck, I don't know", it may be worth having that discussion with the vendor.
- Monitoring: Nothing bothers me more than when I'm all excited to use my shiny new toy and when I go to log in it's down. In reality, I should know it's down when it happens because there's a high likelihood that the tool isn't doing what it's supposed to if it's not running. Ask your vendor what you should be monitoring in order to ensure that the tool is functioning properly. If they don't have a good answer for you, be wary.
- Product Roadmap: When you purchase a product, you purchase it not only for what it's capable of doing for you today, but also for the opportunities that it will provide you with tomorrow. Ask the vendor about their product roadmap to see if it's in line with your vision of how you intend to use the product. Are there features that you can use down the line? More importantly, do they have plans to continue to invest in the platform that they are selling you, or is it just major bug fixes at this point while they move on to something else? If the vendor can't give you a straight answer to this question, then you may have problems.
Don't get me wrong. There are plenty of tools out there that fail one or more of these signs and that doesn't mean that you should completely avoid them, but you shouldn't expect to pay a premium for them either. Hopefully the vendor is being honest with themselves and labeling it as "Beta" while they work to iron these things out. If not, you should be honest with them about your willingness to accept a product that is not "enterprise ready". Perhaps you're willing to accept a little bit of pain for a smaller price tag. Maybe you want to be able to brag to your peers that you were the first to have that product hotness. Whatever the reason, just make sure that you are aware of what you're getting into up front.
I've been following Palo Alto as a networking company for a couple of years now. Their claim is that the days of the port-based firewall are dead and that their application-centric approach is a far better way to enforce your access controls. Take the HTTP protocol, for example. HTTP typically runs as a service on port 80, but does that mean that everything running on port 80 is HTTP? As an attacker looking for a way to funnel data out of your organization, why not use the standard HTTP port to send data, since I know you leave it wide open so your employees can surf the web? There's nothing to say that I actually have to be running an HTTP server on the other end, and there's nothing in a classic port-based firewall to say otherwise. At first, I was admittedly a bit skeptical. I didn't think that you could really tell enough about different applications on the web to be able to separate them out like Palo Alto claims to. Fortunately, Palo Alto reached out to me and provided me with a brand new PA-200 in an attempt to change my mind.
When the PA-200 arrived, it came with everything that I would need to get it up and running. That includes the unit itself, a power supply, a DB9 to RJ45 console cable, an ethernet cable, and some instructions and warranty information.
On the front of the unit are four ethernet ports for your devices, a management port, a USB port, a console port, and several status indicator LEDs.
By default, the appliance is configured with ethernet ports 1 and 2 paired as a WAN to LAN link, since that is the configuration that the majority of people who buy it will likely use. That said, by following the instructions to connect your computer to the management port, you can quickly access the user interface that allows you to change this assignment.
The interface configuration screen shows the ethernet 1 and 2 interfaces as both being part of a "virtual wire", and here we can see the virtual wire that connects the two.
From here, we can take a look at the "zones" and see that our two interfaces have been defined as an untrusted (ethernet 1) and trusted (ethernet 2) zone.
To think of this a different way, my cable modem WAN connection (i.e., the Internet) goes in my "untrust" zone and my local network (i.e., LAN) goes in my "trust" zone. Now all that's left is to set our policy, and for ease of management to start with, I set it to allow everything outbound with a default deny for all inbound.
With this configuration, I had done enough to be up and running on the device, and I immediately started to see data populate the dashboard on the top applications running on my network.
It's color coded based on risk level and the dashboard also provides me a similar view of Top High Risk Applications. Any of these boxes can be clicked on in order to provide additional data about the protocol, sources, destinations, countries, and more.
Now, let me say that while I'm running this on my home internet connection, this thing is a hoss and can do way more than I can throw at it. With their App-ID technology enabled, you can throw 100 Mbps of throughput at it, no problem. In addition to being an application firewall, it also does standard port-based firewalling, VPN, routing, switching, and so much more. It's so versatile that this thing could easily be placed in a smaller branch office and replace multiple other devices on the network, such as a firewall, router, and VPN concentrator. More functionality for less money...who wouldn't want that? In addition to these default capabilities, additional licensing can also be obtained to allow you to do URL filtering, malware detection, and more. Having just gotten this up and running, I'm still exploring the ins and outs of all of the functionality, but it's pretty exciting to have all of this capability in a box that is smaller than the cable modem my ISP provides me. More posts to come on this as I get deeper into the guts of running my new Palo Alto PA-200!
The other day I read that Comcast is launching a new plan to turn home internet users into unwilling participants in their new global wifi strategy. I'm sure that they will soon be touting how insanely awesome it will be to get "full strength" internet access virtually anywhere just by subscribing to this service. Other than the issues with taking a service that the consumer already pays for and carving out their bandwidth for other people, the security practitioner in me can't help but wonder what the security ramifications of sharing an internet connection like this actually means. Combine this with the default access to your cable modem that your service provider already has, and it paints a very scary picture of network security for the home user. It is no longer sufficient (if it ever was) to rely on your cable modem for network access controls. Thus, I am advocating in favor of placing a personal firewall between your cable modem and your network for all home internet setups.
Now, it's not as bad as you may think. It doesn't have to be some crazy expensive piece of equipment like you'd purchase for a business. Even the basic home gateways come with the ability to do Network Address Translation (NAT), which effectively turns your internet connection into a one-way pipe. All I'm saying is that instead of plugging your network devices directly into the cable modem for Internet access, you should use your own hardware and draw a clear "line in the sand" between your equipment and theirs. In addition, I would advocate that you no longer consider the wifi access provided by the cable modem device as safe and instead use your own equipment for this access. In other words, treat anything on the WAN side of your home gateway/personal firewall as untrusted and protect against it accordingly.
The 2014 Verizon Data Breach Investigation Report (DBIR) is out and it paints quite the gloomy picture of the world we live in today where cyber security is concerned. With over 63,000 security incidents and 1,367 confirmed data breaches, the question is no longer if you get popped, but rather, when. According to the report, data export is second only to credit card theft on the list of threat actions as a result of a breach. And with the time to compromise typically measured in days and time to discovery measured in weeks or months, Houston, we have a problem.
I've written in the past about all of the cool tricks we've been doing to find malware and other security issues by performing NetFlow analysis using the 21CT LYNXeon tool and this time I've found another trick around data loss detection that I thought was worth writing about. Before I get into the trick, let's quickly recap NetFlow for those who aren't familiar with it.
Think of NetFlow as the CliffsNotes of all of the network traffic that your systems handle on a daily basis. Instead of seeing WHAT data was transmitted (a task for deep packet inspection/DPI), we see a summary of HOW the data was transmitted: things like source and destination IP, source and destination port, protocol, and bytes sent and received. Because many network devices are capable of giving you this information for free, it only makes sense to capture it and start using it for security analytics.
So, now that we have our NetFlow and we know that we're going to be breached eventually, the real question becomes how to detect it quickly and remediate before a significant data loss occurs. Our LYNXeon tool allows us to create patterns of what to look for within NetFlow and other data sources. So, to help detect data loss, I've designed the following analytic:
This analytic searches our NetFlow for any time an internal IP address is talking to an external IP address. Then, it adds up the bytes sent for each unique set of connections (same source, destination, and port) and presents me with a top 25 list. Something like this:
So, now we have a list of the top 25 source and destination pairs that are sending data outside of our organization. There are also some interesting ports in this list like 12547, 22 (SSH), 443 (HTTPS), and 29234. A system with 38.48 GB worth of data sent to a remote server seems like a bad sign and something that should be investigated. You get the idea. It's just a matter of analyzing the data and separating out what is typical vs what isn't and then digging deeper into those.
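For anyone who wants to reproduce this analytic without LYNXeon, the same aggregation is straightforward over raw flow records. Here's a minimal Python sketch; the field names, the use of RFC 1918 private ranges to mean "internal", and the sample flows are all my own assumptions for illustration:

```python
from collections import defaultdict
from ipaddress import ip_address

def top_talkers(flows, n=25):
    """Sum bytes sent per unique (src, dst, port) where an internal
    (private) IP talks to an external one; return the top n by bytes."""
    totals = defaultdict(int)
    for flow in flows:
        src, dst = ip_address(flow["src"]), ip_address(flow["dst"])
        if src.is_private and not dst.is_private:
            totals[(flow["src"], flow["dst"], flow["port"])] += flow["bytes"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Hypothetical flow records
flows = [
    {"src": "10.0.0.5", "dst": "8.8.8.8", "port": 22, "bytes": 41_000_000_000},
    {"src": "10.0.0.5", "dst": "8.8.8.8", "port": 22, "bytes": 300_000},
    {"src": "10.0.0.7", "dst": "93.184.216.34", "port": 443, "bytes": 12_000},
    {"src": "10.0.0.7", "dst": "10.0.0.9", "port": 445, "bytes": 9_000_000},  # internal-to-internal, ignored
]
for (src, dst, port), total in top_talkers(flows):
    print(f"{src} -> {dst}:{port}  {total / 1e9:.2f} GB")
```

The 41 GB SSH transfer would float straight to the top of the list, which is exactly the kind of outlier you'd want to investigate.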
My advice is to run this report on an automated schedule at least daily so that you can quickly detect when data loss has begun in order to squash it at the source. You could probably argue that an attacker might take a low and slow approach to remain undetected by my report, and you'd probably be right, but I'd also argue that if this were the case, then I've hopefully slowed them enough to catch them another way within a reasonable timespan. Remember, security is all about defense in depth and with the many significant issues that are highlighted by the Verizon DBIR, we could use all of the defense we can muster.