Last year I gave a talk at a number of different conferences called "The Magic of Symbiotic Security: Creating an Ecosystem of Security Systems" in which I argued that if we can break our security tools out of their silos, they become far more useful. Lately, I've been doing a lot of work at my company identifying systems infected by malware and getting rid of the infections because, as you are hopefully aware, the presence of malware on your systems is equivalent to hackers on your network. Malware can give the controller backdoor access to the system, allow them to scan the network for other devices to compromise, give them a platform from which to launch additional attacks, and enable them to exfiltrate data from the network. I have a few different tools, which I'll highlight later, that do some really cool things on their own, but when you combine their functionality, you open up a whole new world of possibilities.
The first tool that I wanted to talk about is for malware analysis. In our case this is FireEye, but this could just as easily be Damballa, Bit9, or any other technology that will allow you to identify IP addresses of hosts infected by malware, servers hosting malware objects, and command and control servers. Alone, this tool identifies a single client-to-server relationship, but it does provide a pattern that we can use as a template to find similar issues in our environment where perhaps we do not have coverage with this device. Now that we have identified the patterns that we are looking for, we need to find a way to discover additional instances of those patterns. This brings me to our second tool.
The second tool is for NetFlow analysis. In case you are unfamiliar with NetFlow, it is a feature of most network devices that creates summary information about the network activity that is running through them. It includes the source and destination IP addresses, source and destination ports, protocols, and bytes transferred. Specifically, we need a NetFlow analysis tool that is capable of showing us connections between our internal systems and systems on the Internet. In our case, we use a product called LYNXeon to do this. Alone, LYNXeon does a good job of allowing us to visualize connections from one system to another, but finding the systems related to malware issues can often be a needle in a haystack because of the NetFlow limitations mentioned above. So while our malware connections (downloads and command-and-control) are buried in the NetFlow data, we really have no way to identify them in the NetFlow tool silo.
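To make the internal-versus-Internet distinction concrete, here is a minimal Python sketch of filtering flow records down to connections that cross the network perimeter. The dictionary field names and the internal address ranges are my own assumptions, not anything specific to NetFlow exports or LYNXeon.

```python
import ipaddress

# Hypothetical internal address space; adjust for your own network.
INTERNAL_NETS = [ipaddress.ip_network("10.0.0.0/8"),
                 ipaddress.ip_network("172.16.0.0/12"),
                 ipaddress.ip_network("192.168.0.0/16")]

def is_internal(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL_NETS)

def crosses_perimeter(flow: dict) -> bool:
    """True when exactly one endpoint is internal, i.e. an internal
    host is talking to a system on the Internet (or vice versa)."""
    return is_internal(flow["src_ip"]) != is_internal(flow["dst_ip"])

flows = [
    {"src_ip": "10.1.2.3", "dst_ip": "203.0.113.7", "dst_port": 443},
    {"src_ip": "10.1.2.3", "dst_ip": "10.9.8.7",    "dst_port": 445},
]
external = [f for f in flows if crosses_perimeter(f)]
```

In practice you would feed this from your flow collector's export rather than hand-built dictionaries, but the filtering idea is the same.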
Now comes the fun part. One of the cool things about the FireEye system is that it provides us with the ability to export data and one of the cool things about the LYNXeon system is that it provides us with the ability to import data and tag it. So what we do is, in FireEye, we export the list of all systems that we have detected as having been infected by malware. We also export the list of all of the command and control servers and malware hosting servers that we have seen connections to. Next, we go into LYNXeon and tell it to import these two lists of IP addresses and tag them with a custom tag that we created called "FireEye". We have now successfully combined these two tools and the payoff is huge.
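The export-and-tag workflow can be sketched in a few lines of Python. The IP addresses and field names below are invented for illustration; the real FireEye export and LYNXeon import formats will differ.

```python
# Hypothetical IOC list standing in for the FireEye export: infected
# hosts plus known command-and-control / malware-hosting servers.
fireeye_iocs = {"198.51.100.10", "203.0.113.55"}

flows = [
    {"src_ip": "10.1.2.3", "dst_ip": "198.51.100.10"},  # talks to a C2 server
    {"src_ip": "10.4.5.6", "dst_ip": "93.184.216.34"},  # ordinary traffic
]

# Tag any flow with an endpoint on the imported list, much like the
# custom "FireEye" tag applied to imported IP addresses in LYNXeon.
for flow in flows:
    hit = {flow["src_ip"], flow["dst_ip"]} & fireeye_iocs
    flow["tags"] = ["FireEye"] if hit else []
```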
Success #1: Detecting the Spread of Malware on Your Network
Our FireEye system works by executing downloads inside of a virtual machine and analyzing the effect they have on the system. Because the virtual machine doesn't always match the target system, in many cases we are only able to tell that the download was malware, not whether it actually infected the target system. Using LYNXeon, however, we can create special queries that will show us all connectivity from the potentially infected system after the time of the malware download. Did the system immediately make connections to other foreign systems on the Internet? Did it start scanning our internal network looking for other hosts to compromise? All this and more is possible now that we have identified a potentially infected system on our network. Here is a pattern file which I created in LYNXeon to do this:
And here is the pattern diagram which this query accomplishes:
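The shape of that query can also be sketched in plain Python: given the suspect host and the time of the download, pull every connection it sourced afterwards. The field names and timestamps are assumptions for illustration, not LYNXeon's schema.

```python
from datetime import datetime

def activity_after(flows, infected_ip, download_time):
    """All connections sourced from the suspect host after the download."""
    return [f for f in flows
            if f["src_ip"] == infected_ip and f["ts"] > download_time]

t0 = datetime(2013, 2, 1, 14, 30)
flows = [
    {"src_ip": "10.1.2.3", "dst_ip": "203.0.113.9",
     "ts": datetime(2013, 2, 1, 14, 35)},   # after the download: suspect
    {"src_ip": "10.1.2.3", "dst_ip": "10.0.0.40",
     "ts": datetime(2013, 2, 1, 14, 20)},   # before the download: ignored
]
suspect = activity_after(flows, "10.1.2.3", t0)
```

From the result set you can then ask the follow-up questions: how many distinct Internet hosts did it reach out to, and did it start sweeping internal addresses?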
Success #2: Finding Other Infected Systems
FireEye appliances aren't free, and with offices in over 40 countries around the world, getting full coverage can get expensive. But if we can use a handful of appliances to learn where our systems talk to when compromised, then we have data which we can turn around and use in places where we do not have those appliances. Because we are sending NetFlow data from our devices around the world into LYNXeon, we can search for any connections to these common malware servers. No more needle in a haystack. The data is all there; we just needed to know how to look for it. Here is a pattern file which I created in LYNXeon to do this:
And here is the pattern diagram which this query accomplishes:
Success #3: Discovering Other Types of Attacks
Oftentimes our adversaries aren't just trying one type of attack and giving up when it fails. They are trying every trick in their arsenal to gain and maintain a foothold on your network with whatever method they can. Once we've identified an attacker's IP address, we can use our NetFlow data to see all other traffic coming from that IP address. Expanding these types of relationships can shed light on other activities they are performing on your network. Perhaps they are performing reconnaissance on your servers? Maybe they are trying to DoS one of your systems? The fact is that once they've been uncovered as a bad guy on your network, you should be wary of all activities performed by them. Maybe even ban their IP address altogether. Here is a pattern file which I created in LYNXeon to do this:
And here is the pattern diagram which this query accomplishes:
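In Python terms the pivot is just a symmetric filter on the attacker's address, after which the spread of destination ports gives a first hint at what they were doing (many distinct ports suggests reconnaissance; heavy volume to a single port suggests a DoS attempt). Field names are assumptions.

```python
from collections import Counter

def pivot_on_attacker(flows, attacker_ip):
    """Every flow that touches the known-bad address, in either direction."""
    return [f for f in flows if attacker_ip in (f["src_ip"], f["dst_ip"])]

flows = [
    {"src_ip": "198.51.100.7", "dst_ip": "10.1.2.3", "dst_port": 80},
    {"src_ip": "198.51.100.7", "dst_ip": "10.1.2.4", "dst_port": 22},
    {"src_ip": "10.5.5.5",     "dst_ip": "10.1.2.3", "dst_port": 443},
]
related = pivot_on_attacker(flows, "198.51.100.7")
targeted_ports = Counter(f["dst_port"] for f in related)
```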
So there you have it. By combining our malware analysis using FireEye and our NetFlow analysis using LYNXeon, we have created a hybrid system capable of far more than either of these tools by themselves. This is the magic of symbiotic security in action. Our tools become infinitely more powerful when we are able to share the data between them. Hopefully you will take that into consideration the next time you are looking at purchasing a security tool.
Yesterday I finished competing in my first ever Capture The Flag (CTF) tournament. It was called Kommand and Kontroll Revenge of the Carders and was run by Rod Soto of Prolexic. I'm going to caveat this post by saying that this was my first ever CTF competition, so I have absolutely no baseline of comparison. It was also pretty thin on competition (only one other team actively pursued flags for any length of time). That said, it is what it is, and in the end my team came away with the win, which I'm quite proud of. We got system-level privileges on 15 different systems to score the most points as well as accomplish the primary objective, which I'll describe more about below.
The competition consisted of somewhere around 30 virtual machines running as servers and workstations on a completely isolated network. Each system played a role in the scenario that Rod created around a carding ring that got pwn3d by one of its members using a Zeus botnet. The primary objective for the CTF was to take over command and control of the botnet. The secondary objective was to capture flags of various levels of difficulty (and points) in order to score more points than the competing teams. It took the better part of two days to do it, but once we finally got system privileges on the CnC server, it was only a matter of time before we figured out a way to take ownership of it and win the game. What follows are some lessons (in no particular order) that I learned throughout the competition that will hopefully serve to better myself and others as they compete in future competitions.
- Participate with a team: These CTF competitions are most definitely a team sport. It's a series of challenges, different types of systems, and different applications. There's no way that any one person can be an expert in all of them. Working with one or more partners means that you have a fresh perspective when you need it. It also helps in situations where time is of the essence. For example, at one point I had system-level privileges on a box and found the flag, but needed a way to get it onto my system. We had a running FTP server to make the transfer, but leaving the file there risked the other team seeing it. With the help of a partner, we had the file on the server, downloaded, and removed in under 5 seconds.
- Keep important files on removable media: I constantly found myself transferring files between different environments. Some were flags, some were exploit code, others were just files with notes on them. At one point I had my attack VM lock up on me and die to the point where I had to restore it from a snapshot. Had I not been keeping my important files on removable media, it would have cost us several flags and many points. Thank goodness for being prepared.
- Don't submit all of your flags at once: Believe it or not, there's quite a bit of strategy involved in how you present your team's score. Show too few points and people will think you're a chump; it'll encourage others to join the game because they feel they can make up those points quickly. Show too many points and now the competition feels the need to work harder and faster to catch up. My partner and I decided that it was best to start off with a low number of points. We posted a few flags just to show some progress, but kept a large number in our back pocket for later. At the end of day one I posted some more, but not all, of the remaining flags. In hindsight, this was a bad move on my part as it seemed to get the other team moving faster. At the beginning of day two they posted enough points to overtake us on the scoreboard, but we still had enough flags in waiting at that point to more than make up the difference. We decided to hold them until the end to make the other team think they had it in the bag. I think this proved to be a far smarter strategy.
- Have a variety of different environments available: Since the CTF machines were running a wide variety of host operating systems, we ran into a number of challenges where we needed to be able to mimic a similar environment. Fortunately, I had a fairly diverse system that I was running which had OSX, Windows, and Linux. I found myself constantly switching between them during the game. I know that other players were definitely hindered by their lack of diverse environments.
- Take snapshots of your environments: As I mentioned in #2 above, at one point I had my attack VM lock up on me. I tried restarting, but no matter what I did, I couldn't get back into the GUI interface to resume my attacking. This would have killed my game. Snapshots to the rescue. Fortunately, before I started, I took a snapshot of my VMs and was able to quickly and easily roll back to a known good state.
- Have Internet access available: Maybe it's via your phone or via another computer attached to a different network, but there were a number of times where we had to query things on the Internet. Sometimes it was for scripts (like a PHP C99 shell) and sometimes it was for knowledge, but without Internet access, things would have been far more difficult.
- Know how to query an exploit database: Assuming that you found a way to get Internet access, you should know how to use an exploit database like the one at http://www.exploit-db.com. After you do your discovery, you have a list of running applications, sometimes even version numbers, and need to know if they are affected by any vulnerabilities with known exploits. That's where these guys come in.
- Update in advance: In several cases, the needed exploit was provided in the latest version of Metasploit. Unfortunately, my partner had a version that was a bit outdated, and in this CTF, Internet access was not available in the game environment. He ended up taking his system onto the conference wireless network to do the update, but it sidetracked him for a fairly significant amount of time. It's far easier to update your tools before you walk into the CTF environment so you can spend your time actually hacking all the things.
- Be well versed in exploitation tools: The time I spent listening to my friend Raphael Mudge talk about penetration testing with Armitage paid dividends here, as did the many months our study group spent going through David Kennedy's Metasploit book. I went in feeling like I had a pretty good grasp of the concepts but no practical application of the skills. Now, I feel like the CTF gave me the practical application and then some. If you don't have at least some knowledge of a tool like Metasploit or Armitage, you're going to struggle.
- Explore the system: The system that I mentioned earlier that we used to take over the botnet command and control was one that I had rooted several hours earlier. I browsed the system, got the flag, and moved on. It wasn't until I established a VNC connection to the system that I found the CnC console staring right back at me. It had been there all along and because I didn't give the system enough attention, I moved right on past what could have won us the game far sooner. Remember, there are many different ways to view the data on the system. Be somewhat thorough while at the same time remembering that time is of the essence.
- Know how to use a directory brute forcer: I think that many of the people who came in, played for an hour, and then left got stuck here. They ran their scan, found some HTTP servers, and connected to them but saw nothing but a "Hello world!" message. They knew that something was running, but couldn't figure out what. Fortunately, I'm familiar with the OWASP ZAP tool and was able to tell it to brute force common directories on the web server. We found a number of different applications this way that there was really no other way to find. Your Metasploit exploits will never work if you can't tell it the proper URI to target.
So, there you have it. My list of lessons learned from participating in (and winning) my first Capture the Flag (CTF) challenge. Big thanks to my partners Alek and Nate for pwning systems alongside me. As I said in #1 above, CTF is a team sport and I couldn't have won it without you guys.
About a week ago I turned on a new rule on our IPS system that is designed to detect (and block) users who are using TOR to make their activities on our network anonymous. You can say that TOR is about protecting a user's privacy all you want, but I'd argue that while using corporate assets you should have no expectation of privacy (at least in that sense) and that the use of anonymizers on a corporate network can typically be viewed as a sign that you are up to no good. Almost immediately after I turned on this new rule, I began seeing associated events in the IPS console. I decided that the best approach was to contact the user directly, as they might be wondering why their Internet connection was no longer working. I reached out to this particular user and explained that if his connection had stopped working, it was because of the new IPS rule. The solution was simple: just reconfigure his browser to no longer use TOR as the proxy. But as I began this process, things started getting weird.
I began by telling the user to look for names like "TOR", "The Onion Router", and "Privoxy" in his Add & Remove Programs. Strange....there was nothing there. Then I asked him to check his Task Manager to look for a running process called "tor.exe" or similar. Again, nothing. I was at a loss. I decided that this was something I needed to get my hands on to figure out so I scheduled some time with the user.
This morning when I sat with the user, I noticed little wrong with his system. He had a few standard applications running, but nothing unusual. I checked his process listing and saw nothing out of the ordinary. I ran Hijack This! and that, too, looked pretty normal. All this, yet in the meantime I continued to see alerts on the IPS system that his computer was using TOR, even when I was sitting at the console with NO browser activity. So, to make a long story short, here's how I finally figured out what was happening. I checked the IPS system and came up with the source ports for the requests that I was seeing alerts on. I then went on the system and ran a netstat -nao. This listed all network connections on the user's system along with the associated process IDs. I checked the list and found the entry that matched the port number I was seeing the alerts on. I then ran the command tasklist /svc /FI "PID eq <process_num>". This provided me with the name of the process running with that process ID, which it turns out was "iexplore.exe". Wait. Internet Explorer isn't even running on this computer. Or is it? Since the default process viewer in the Task Manager is pretty lame, I downloaded the Microsoft Sysinternals Process Monitor. It's a free tool available from Microsoft that provides a ton more information about running processes and allows you to see what they are doing in real time. I used Process Monitor to view these processes and focused particularly on the flags that were used when they started. What I found was actually pretty startling.
Both of the Internet Explorer processes were started with a special flag that told them to start silently (i.e., without the UI) in the background. They also specified a flag similar to this:
--HiddenServiceDir "C:\Documents and Settings\<User_Name>\Application Data\tor\hidden_service" --HiddenServicePort "55080 127.0.0.1:55080"
Aha! We found our culprit! TOR was running as a hidden service out of the Application Data directory. Once I found this, it was all over. Scanning through the Application Data directory, I also found a file under "Enemvy\ugbie.exe" that was extremely suspect. A later scan via Malwarebytes identified it as a variant of Trojan.ZbotR. I deleted these directories and Malwarebytes found one registry key associated with the ugbie.exe file and deleted it. All is good now and the system is no longer alerting about use of TOR.
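For reference, the port-to-PID lookup walked through above can be scripted. This sketch parses a canned sample of netstat -nao output in the Windows column layout (protocol, local address, remote address, state, owning PID); real output varies by OS and protocol, and the addresses and PIDs here are made up.

```python
# Sample "netstat -nao" output; last column is the owning process ID.
sample = """\
  TCP    192.168.1.5:55080    127.0.0.1:9050     ESTABLISHED    1337
  TCP    192.168.1.5:49201    203.0.113.9:443    ESTABLISHED    2044
"""

def pid_for_local_port(netstat_text: str, port: int):
    """Return the PID owning the given local port, or None if absent."""
    for line in netstat_text.splitlines():
        parts = line.split()
        # parts[1] is the local address column, e.g. "192.168.1.5:55080"
        if len(parts) >= 5 and parts[1].endswith(f":{port}"):
            return int(parts[-1])
    return None
```

With the PID in hand, tasklist /svc /FI "PID eq <process_num>" gives you the owning process name, exactly as described above.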
So, what's our lesson here? The malware writers are getting sneaky. They've realized that we've created blacklists of their servers and they need to be able to adapt around that. Now they are using anonymizers, like TOR, to get around these blacklists. Apparently this isn't the first use of TOR in malware either, as I read about something called SkyNet that did something similar. In any case, they would have gotten away with it if it weren't for my IPS rule to detect TOR and a fair amount of persistence in finding the root cause. If you're not already detecting this on your network, it's high time you did. You can thank me later.
This post is going to be short and sweet as it's something I meant to put up here when I found it sometime back in mid-2011. I'm not even sure if Time Warner is still using these Ubee cable modems for their RoadRunner offering, but I'm sure that there are at least a few people out there who still have them. When you get the modem installed initially, they give you some default credentials. Something like user/user or admin/admin. Using these credentials, you are able to access the device and many of the features that it has to offer. What you are not able to do is access the menus where you can change how the router is actually configured for internet access, change the master password, or prevent Time Warner from accessing your modem, and subsequently, your network. To fix this, you just need to know the following secret...
The real administrator username that comes configured on these modems when you get them from Time Warner is the last eight digits of the unit's MAC address. This is unique to your device, but can be found pretty easily by looking at the user interface that you do have access to. The password for this user is "c0nf1gur3m3". Use that and you should be in. Feel free to change the password while you're in there to keep the Time Warner folks out.
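If you'd rather script the username derivation than eyeball it, it's just string slicing. The uppercase formatting here is an assumption on my part; I don't know exactly how the firmware cases the digits, so try both if one fails.

```python
def ubee_admin_user(mac: str) -> str:
    """Last eight hex digits of the MAC address, separators stripped."""
    digits = mac.replace(":", "").replace("-", "")
    return digits.upper()[-8:]

ubee_admin_user("00:26:f3:5a:1b:2c")  # -> "F35A1B2C"
```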
One other kinda secret thing to note is that if you do want to change how the router is configured for internet access, you will need to go to http://192.168.0.1/TlModeChange.asp on your router to do so. Once there, you can change it to Bridge mode, NAT mode, Router mode, or NAT Router mode depending on what you are looking to do with it. Hope you enjoyed this simple solution for getting the real administrator access to Time Warner RoadRunner's Ubee cable modem.
A couple of weeks back, HD Moore posted a blog entry entitled "Security Flaws in Universal Plug and Play: Unplug, Don't Play" supporting a Rapid7 whitepaper in which he discusses the 81 million unique IP addresses that respond to UPnP discovery requests on the Internet and the 23 million fingerprints that match a version of libupnp that exposes the systems to remote code execution. His research on the subject is fascinating and I highly recommend reading it over, but that's not the reason why I'm writing this. The first question this research had me asking myself is whether or not my organization utilizes UPnP for anything. As far as I can tell, the answer to this question is, thankfully, no. Next, out of curiosity I began to wonder how many people were out there actively trying to find these exploits. A perfect opportunity to fire up our new LYNXeon tool.
Our LYNXeon tool is configured to consume NetFlow data provided by literally hundreds of routers and switches in our global environment. One of the most interesting things about it is that it can be used to see the traffic that comes in from our edge routers before it gets squashed by our firewall. Utilizing this tool in this way, we can visualize the so-called "Barbarians" at our gates. These are the hackers that are out there trying to find the weak spots in our security in order to get in. And since I know that UPnP is not a service that we offer up to the Internet at large, it makes finding the guys who are looking to exploit it that much easier.
I fire up LYNXeon and my first step is to generate what is known as "PQL" or "Pattern Query Language". While their Cyber Analytics Catalog offers up a ton of templates to use to find potential threats, PQL is the base of all those queries and writing your own allows you to define your own catalog of things to look for. The language is pretty easy to understand. First you define the characteristics of the connections that you are looking to find. After doing some research, I found out that HD was looking for openings on UPnP's Simple Service Discovery Protocol (SSDP) service which typically runs on UDP/1900. So, my query is for connections from external source IPs to internal source IPs using the UDP protocol on port 1900. Once the connections have been defined, all that is left to do is define the data that you want to see in the results. In total, my PQL code is 15 lines of code:
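The PQL itself is LYNXeon-specific, but the pattern it encodes, external sources hitting internal destinations on UDP/1900, is easy to express in plain Python for comparison. The field names and the single internal range below are assumptions, not the actual query.

```python
import ipaddress

INTERNAL = ipaddress.ip_network("10.0.0.0/8")  # stand-in for your address space

def ssdp_probes(flows):
    """External -> internal connections on UDP/1900 (SSDP discovery)."""
    return [f for f in flows
            if f["proto"] == "udp"
            and f["dst_port"] == 1900
            and ipaddress.ip_address(f["src_ip"]) not in INTERNAL
            and ipaddress.ip_address(f["dst_ip"]) in INTERNAL]

flows = [
    {"src_ip": "203.0.113.9", "dst_ip": "10.2.3.4", "proto": "udp", "dst_port": 1900},
    {"src_ip": "10.2.3.4",    "dst_ip": "10.2.3.5", "proto": "udp", "dst_port": 1900},
]
probes = ssdp_probes(flows)
```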
Now it's officially time to make these invisible Barbarians visible. I tell LYNXeon to only show me results over the last day (to reduce the amount of time the search takes) and then tell it to "Execute Pattern Search" using the pattern file that I just created. Searches will vary in time based upon the timeframe searched, the number of forwarding devices, and how complicated your search criteria are. For me, this search returned 539 results in one minute and 38 seconds.
Now that I have results, I just need to select how to view them. My personal favorite is viewing the results in the Link Explorer. This will show my data as nodes on a pictorial graph. I make one quick adjustment using an organizational feature called "Force Directed Layout" to make the pictures look pretty and voila!
OK, so zoomed out it looks like a bunch of spider webs. Now the fun begins as we begin zooming in on each cluster to see what is going on.
I've blacked out the IP address of the system these guys are connecting to as it is irrelevant for the purposes of this post, but you can clearly see that in the past day this one system has had eight unique IP addresses attempt to connect to it on UDP port 1900. I've got dozens more just like these on that big graph above with varying degrees of complexity. From here, LYNXeon allows me to resolve DNS and/or ARIN names for the associated IP addresses. I can also expand upon those sources to see what else of mine they've been talking to. Is that cool or what? It's taken me minutes to find these potential threats and with little more than a few clicks of the mouse. The Barbarians are most definitely at my gates silently pounding away and chances are pretty good that they are doing the same to you. The question is....can you find them?
I recently had the opportunity to play with a data analytics platform called LYNXeon by a local company (Austin, TX) called 21CT. The LYNXeon tool is billed as a "Big Data Analytics" tool that can assist you in finding answers among the flood of data that comes from your network and security devices and it does a fantastic job of doing just that. What follows are some of my experiences in using this platform and some of the reasons that I think companies can benefit from the visualizations which it provides.
Where I work, data on security events is in silos all over the place. First, there's the various security event notification systems that my team owns. This consists primarily of our IPS system and our malware prevention system. Next, there are our anti-virus and end-point management systems, which are owned by our desktop security team. There's also event and application logs from our various data center systems, which are owned by various teams. Lastly, there's our network team who owns the firewalls, the routers, the switches, and the wireless access points. As you can imagine, when trying to reconstruct what happened as part of a security event, the data from each of these systems can play a significant role. Even more important is your ability to correlate the data across these siloed systems to get the complete picture. This is where log management typically comes into play.
Don't get me wrong. I think that log management is great when it comes to correlating the siloed data, but what if you don't know what you're looking for? How do you find a problem that you don't know exists? Enter the LYNXeon platform.
The base of the LYNXeon platform is flow data obtained from your various network devices. Regardless of whether you use Juniper JFlow, Cisco NetFlow, or one of the other many flow data options, knowing the data that is going from one place to another is crucial to understanding your network and any events that take place on it. Flow data consists of the following:
- Source IP address
- Destination IP address
- IP protocol
- Source port
- Destination port
- IP type of service
Flow data also can contain information about the size of the data on your network.
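One way to model such a record in Python is shown below; the field names are my own, and exporters differ on which optional counters they include.

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    proto: int      # IP protocol number (6 = TCP, 17 = UDP)
    src_port: int
    dst_port: int
    tos: int        # IP type of service
    bytes: int = 0  # optional size information, when the exporter sends it

# Example: a DNS lookup to an external resolver.
dns_query = FlowRecord("10.0.0.5", "8.8.8.8", 17, 53412, 53, 0, 96)
```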
The default configuration of LYNXeon basically allows you to visually (and textually) analyze this flow data for issues, which is immediately useful. LYNXeon Analyst Studio comes with a bunch of pre-canned reporting which allows you to quickly sort through your flow data for interesting patterns. For example, once a system has been compromised, the next step for the attacker is oftentimes data exfiltration. They want to get as much information out of the company as possible before they are identified and their access is squashed. LYNXeon provides you with a report to identify the top destinations in terms of data size for outbound connections. Some other extremely useful reporting that you can do with basic flow data in LYNXeon:
- Identify DNS queries to non-corporate DNS servers.
- Identify the use of protocols that are explicitly banned by corporate policy (P2P? IM?).
- Find inbound connection attempts from hostile countries.
- Find outbound connections via internal protocols (SNMP?).
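As a sketch, the exfiltration report mentioned above, top destinations ranked by total outbound bytes, reduces to a simple aggregation over flow records. Field names are assumptions for illustration.

```python
from collections import defaultdict

def top_outbound_destinations(flows, n=10):
    """Rank destination IPs by total bytes sent to them."""
    totals = defaultdict(int)
    for f in flows:
        totals[f["dst_ip"]] += f["bytes"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

flows = [
    {"dst_ip": "203.0.113.50", "bytes": 900_000},
    {"dst_ip": "198.51.100.2", "bytes": 1_200_000},
    {"dst_ip": "203.0.113.50", "bytes": 400_000},
]
report = top_outbound_destinations(flows)
```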
It's not currently part of the default configuration of LYNXeon, but they have some very smart guys working there who can provide services around importing pretty much any data type you can think of into the visualizations as well. Think about the power of combining the data of what is talking to what along with information about anti-virus alerts, malware alerts, intrusion alerts, and so on. Now, not only do you know that there was an alert in your IPS system, but you can track every system that target talked with after the fact. Did it begin scanning the network for other hosts to compromise? Did it make a call back out to China? These questions and more can be answered with the visual correlation of events through the LYNXeon platform. This is something that I have never seen a SIEM or other log management company be able to accomplish.
LYNXeon probably isn't for everybody. While the interface itself is quite easy to use, it still requires a skilled security professional at the console to be able to analyze the data that is rendered. And while the built-in analytics help tremendously in finding the proverbial "needle in the haystack", it still takes a trained person to be able to interpret the results. But if your company has the expertise and the time to go about proactively finding problems, it is definitely worth looking into both from a network troubleshooting (something I really didn't cover) and security event management perspective.
I was having lunch with Charles Henderson from Trustwave Spider Labs the other day and he mentioned that he had just gotten signed up with the new Roadrunner Extreme Broadband Beta from Time Warner Cable. He mentioned insane download and upload speeds as well as the new DOCSIS 3.0 compliant modem. It was enough to pique my interest and get me to call Time Warner.
I have been on the older Roadrunner Turbo-charged plan since basically when it first came out and have been generally happy with the service up until recently, when I've started having to reboot the modem daily. I'm also kind of an internet speed addict, so the idea of moving up to 20 Mbps downloads and 5 Mbps uploads was pretty sweet to me. That's just to start with, as eventually the service will have 30 Mbps downloads. I called up Time Warner and asked what it would take to move onto the Extreme Broadband Beta and they told me that it was only an extra $5/mo over my Turbo-charged plan. Even better was that they were offering free installation as part of the Beta. They were able to get the install scheduled just over a week out. Not too bad.
The service technicians came out on the designated day and time and got everything hooked up for me. They even replaced a bunch of the wiring on the box on the side of the house where the service connects. They did some line tests and within minutes I was up and running on the new service. While the upload didn't hit the 5 Mbps that was advertised to me, the download speed is quite impressive. Check it out:
The other cool thing is that, while not necessarily intended, it is very easy to get into the new Ubee modem's configuration interface. By default, the device comes up as 192.168.0.1 on your network and has a username and password of user/user. Get in there and it's got all of the configuration options of a wireless internet gateway. The first thing that you should do is change the username and password. After that, enable the wireless network, configure port forwarding, etc.
Not only does the new modem have built-in wireless N, but it also has four additional network ports so you can use it with multiple computers on your network. I remember the days when Time Warner used to charge you if you had more than one computer, but not anymore.
Granted, I've only had the new service for a few hours now, but I'm already pretty impressed. If you're an internet speed demon like me, and you live in the Austin area, I'd recommend that you give Time Warner a call and ask about switching over to the new Roadrunner Extreme Broadband Beta. Enjoy!
Like many other companies these days, National Instruments hires many of its developers straight out of school. When engaging with these new-hire developers, I will often ask them what kind of security they learned at their university. In almost all cases, I've found that the answer hasn't changed since I graduated back in 2002. Occasionally I'll get a developer who mentions one particular professor or class where they discussed secure coding practices, but most of the time the answer is "I didn't learn security in school". This absolutely kills me. It's like asking an architect to design a building without knowing anything about support structures and load distribution. The end result may look awesome on the outside, but the slightest breeze will knock it over. With computers being embedded into literally every aspect of our society, do you really want code that crumbles the moment a user does something other than what was explicitly intended?
This leads me to the conclusion that security should be considered a fundamental part of code development and not an afterthought. We should be teaching security to students at the university level so that when they graduate, corporations don't have to spend valuable time re-training them on proper development techniques. I've heard rumors of large companies like Oracle actually being able to influence college curricula by telling universities they simply won't hire developers without security training. Unfortunately, most companies aren't in a position to make demands like that, but it certainly wouldn't hurt to develop relationships with faculty at your local university and tell them what you'd like to see out of their students. I did some poking around on the internet, and it seems like some professors are already starting to get the memo. For example, I found a great paper written by three professors at the USAF Academy Dept. of Computer Science called Incorporating Security Issues Throughout The Computer Science Curriculum where they say:
While the general public is becoming more aware of security issues, what are our universities doing to produce graduates ready to address our security needs? Computer science as a discipline has matured to the point that students are regularly instructed in software engineering principles--they learn the importance of life cycle issues in the development and maintenance of software. Where are they receiving similar instruction on security concerns in the software life cycle? The authors propose that security should be taught throughout every computer science curriculum--that security should always be a concern and should be considered in the development of all software just as structured programming and documentation are.
Gentlemen, I couldn't agree more. Security needs to be a foundational piece of every Computer Science program in the country. Not one class. Not one professor. Secure programming techniques need to be a consideration in every CS class at every university. Universities teach students how to write functions, create object-oriented code, and write proper documentation, but when graduates don't know the basic tenets of input validation, we have a real problem. If you agree with me, then I challenge you to write to the Dean of your local CS program and ask what they are doing to ensure graduates are familiar with secure coding practices. I'd be very interested in hearing what their response was.
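To make the input-validation point concrete, here's a toy example of my own (not from any particular curriculum): a function that refuses to trust that user input is a sane quantity until it has proven otherwise.

```python
# Toy illustration of basic input validation: check format and range
# before using a value, and fail loudly on hostile or malformed input
# rather than letting garbage flow downstream into the rest of the code.
def parse_quantity(raw: str) -> int:
    """Parse a user-supplied order quantity, rejecting anything suspect."""
    value = raw.strip()
    if not value.isdigit():                  # rejects "", "-3", "abc", "1e9"
        raise ValueError(f"expected a positive integer, got {raw!r}")
    qty = int(value)
    if not 1 <= qty <= 1000:                 # enforce a sane business range
        raise ValueError(f"quantity {qty} is outside the allowed range 1-1000")
    return qty

print(parse_quantity(" 42 "))                # prints 42
```

It's three lines of checking, and it's the difference between code that holds up and code that crumbles the moment a user types something unexpected.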
Notice anything wrong with this picture?
I was walking by one of the Iron Mountain Secure Shredding bins at work one day several months ago and noticed that the lock wasn't actually locked. Being the security-conscious individual that I am, I tried to latch the lock again, but it was so rusted that it wouldn't close no matter how hard I tried. I couldn't just leave it like that, so I called the number on the bin's label, where an automated message told me that they weren't taking local calls anymore and gave me a different number to try. I called that number, and they asked me for my company ID number, which I didn't know. The woman informed me that without that ID number I couldn't submit a support request. I informed her that this bin contained sensitive personal and financial information and that the issue couldn't wait for some random company ID to be found. Fortunately, she gave in and created the support ticket for me, saying that I should hear back from someone within four hours.
One week later, on Friday, Iron Mountain finally calls me back and says that they will come to replace the lock the following Monday before 5 PM. When the lock hadn't been replaced yet on Monday evening, I called Iron Mountain back up. Looking at their records, they showed that a new lock had been delivered, but they had no idea where and the signature was illegible. I work on a three-building campus with 14 floors between them and almost 3,000 people. If they can't tell me where the lock is, then there's no way for me to track it down. They said that they would investigate and call me back.
After not hearing back from them for a couple of days, I called them again. The woman I spoke with had no real update on the investigation. She said that she would send another message "downstairs" and escalate to her supervisor. At this point, it had been almost three weeks with sensitive documents sitting in a bin with a malfunctioning lock. The next day they called me back and said they were never able to track down who the new lock was left with, so they would bring us a new one at no charge. Finally, after a total of 24 days with an unlocked Secure Shredding bin, Iron Mountain was able to replace the lock. Iron Mountain......FAIL.
I had a meeting yesterday with a vendor who sells a SaaS solution for binary application vulnerability testing. They tell a very interesting story of a world where dynamic testing ("black box") takes place alongside static testing ("white box") to give you a full picture of your application security posture. They even combine the results with some e-Learning aspects so that developers can research vulnerabilities in the same place they go to find them. In concept, this sounds fantastic, but I quickly turned skeptical, and as I dug deeper into the details, I didn't much like what I found.
I wanted to make sure I fully understood what was going on under the hood here so I started asking questions about the static testing and how it works. They've got a nice looking portal where you name your application, give it a version, assign it to a group of developers, and point it to your compiled code (WAR, EAR, JAR, etc). Once you upload your binaries, their system basically runs a disassembler on it to get it into assembly code. It's then at this level that they start looking for vulnerabilities. They said that this process takes about 3 days initially and then maybe 2 days after the first time because they are able to re-use some data about your application. Once complete, they say they are able to provide you a report detailing your vulnerabilities and how to fix them.
The thing that immediately struck me as worth noting here was the 2-3 day turnaround. This means that our developers would need to wait a fairly substantial amount of time before getting any feedback on the vulnerability status of their code. In a world full of Agile development, 2-3 days is a lifetime. Compare that to static source code testing where you get actionable results at compile time. The edge here definitely goes to source code testing as I believe most people would prefer the near-instant gratification.
The next thing worth noting was that they are taking binary files and disassembling them in order to find vulnerabilities. This raises one major issue: how can you determine with any accuracy the line number of a particular vulnerability written in, say, Java from assembly code generated by disassembling the binaries? By default, it's simply not possible. This vendor claimed that they can by adding some debug strings at compile time, but even then I'd contend that you're not going to get much. I'm guessing they have some heuristics that can tell which function generated a given stretch of assembly, but I'm extremely skeptical that they can do anything with variable names, custom code functions, etc. I've seen some source code scanners, on the other hand, that not only tell you what line of code is affected, but are able to give you an entire list of parameters that are consequently affected by that vulnerability. The edge here definitely goes to source code testing.
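As a rough analogy (in Python rather than Java, and my own example, not the vendor's process), you can see why line numbers depend entirely on compiler-preserved metadata. Python bytecode happens to carry a line-number table alongside the instructions, and that table is the only reason a disassembler can map instructions back to source lines at all:

```python
# Compiled code can only be mapped back to source lines if the compiler
# kept a debug table around. Python's `dis` module reads exactly such a
# table from the code object; strip it, and the mapping is pure guesswork,
# which is the position a binary disassembler is usually in.
import dis

source = """\
def greet(name):
    message = "Hello, " + name
    return message
"""
module_code = compile(source, "example.py", "exec")

# Fish the compiled code object for greet() out of the module constants.
func_code = next(c for c in module_code.co_consts if hasattr(c, "co_code"))

# findlinestarts() walks the stored offset -> line-number table.
lines = sorted({line for _, line in dis.findlinestarts(func_code) if line})
print(lines)    # the source lines the bytecode maps back to
```

Native binaries built without debug flags carry no such table, which is why I doubt the vendor can do much better than function-level guesses on code that wasn't compiled with their cooperation.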
The main benefit that I can see with binary testing vs. source code testing is that we can test code that we didn't write. Things like APIs, third-party applications, open source, etc. are all things that we now have visibility into. The only problem here is that while we can now see the vulnerabilities in this software, they are unfortunately all things that we can't directly influence change in, unless we want to send our developers off to work on somebody else's software. I'd argue that scanning for vulnerabilities in that type of code is their responsibility, not ours. Granted, it'd be nice to have validation that there aren't vulnerabilities there that we're exposing ourselves to by adopting it, but in all honesty, are we really going to take the time to scan somebody else's work? Probably not. The edge here goes to binary testing, with the caveat that it's in something I frankly don't care as much about.
This isn't a complete list of pros and cons by any means; it's just me putting in writing some concerns I had about the technology while talking to this particular vendor. In my opinion, the benefits of doing source code testing far outweigh any benefits we could get from testing compiled binary files. What do you think about the benefits of one versus the other? I'd certainly love for someone to try to change my mind here and show me where the real value lies in binary testing.