The other day I read that Comcast is launching a new plan to turn home internet users into unwilling participants in their new global wifi strategy. I'm sure that they will soon be touting how insanely awesome it will be to get "full strength" internet access virtually anywhere just by subscribing to this service. Beyond the issues with taking a service that the consumer already pays for and carving out their bandwidth for other people, the security practitioner in me can't help but wonder what the security ramifications of sharing an internet connection like this actually are. Combine this with the default access to your cable modem that your service provider already has, and it paints a very scary picture of network security for the home user. It is no longer sufficient (if it ever was) to rely on your cable modem for network access controls. Thus, I am advocating placing a personal firewall between your cable modem and your network for all home internet setups.
Now, it's not as bad as you may think. It doesn't have to be some crazy expensive piece of equipment like you'd purchase for a business. Even the basic home gateways come with the ability to do Network Address Translation (NAT) which effectively turns your internet connection into a one-way pipe. All I'm saying is that instead of plugging your network devices directly into the cable modem for Internet access, you should use your own hardware and draw a clear "line in the sand" between your equipment and theirs. In addition, I would advocate that you should no longer consider the wifi access provided by the cable modem device as safe and should use your own equipment for this access. In other words, treat anything on the WAN side of your home gateway/personal firewall as untrusted and protect against it accordingly.
The 2014 Verizon Data Breach Investigation Report (DBIR) is out and it paints quite the gloomy picture of the world we live in today where cyber security is concerned. With over 63,000 security incidents and 1,367 confirmed data breaches, the question is no longer if you get popped, but rather, when. According to the report, data export is second only to credit card theft on the list of threat actions as a result of a breach. And with the time to compromise typically measured in days and time to discovery measured in weeks or months, Houston, we have a problem.
I've written in the past about all of the cool tricks we've been doing to find malware and other security issues by performing NetFlow analysis using the 21CT LYNXeon tool and this time I've found another trick around data loss detection that I thought was worth writing about. Before I get into the trick, let's quickly recap NetFlow for those who aren't familiar with it.
Think of NetFlow as the CliffsNotes of all of the network traffic that your systems handle on a daily basis. Instead of seeing WHAT data was transmitted (a task for deep packet inspection/DPI), we see a summary of HOW the data was transmitted: things like source and destination IP, source and destination port, protocol, and bytes sent and received. Because many network devices are capable of giving you this information for free, it only makes sense to capture it and start using it for security analytics.
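For illustration only, here's how one such summary record might be modeled. The field names and addresses are my own invention, not any particular exporter's schema:

```python
from collections import namedtuple

# A minimal model of a NetFlow-style record: the "how" of a
# conversation, with none of the payload ("what").
FlowRecord = namedtuple("FlowRecord", [
    "src_ip", "dst_ip",       # who talked to whom
    "src_port", "dst_port",   # on which ports
    "protocol",               # e.g. "TCP", "UDP", "ICMP"
    "bytes_sent", "bytes_received",
])

# One HTTPS conversation, summarized: no page content, just the shape of it.
flow = FlowRecord("10.0.0.5", "203.0.113.9", 51833, 443, "TCP", 1_200, 48_000)
```

A day of traffic becomes a big pile of these tiny records, which is exactly what makes them cheap to keep and fast to query.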
So, now that we have our NetFlow, and we know that we're going to be breached eventually, the real question becomes how to detect a breach quickly and remediate before significant data loss occurs. Our LYNXeon tool allows us to create patterns of what to look for within NetFlow and other data sources. So, to help detect data loss, I've designed the following analytic:
This analytic searches our NetFlow for any time an internal IP address is talking to an external IP address. Then, it adds up the bytes sent for each unique set of connections (same source, destination, and port) and presents me with a top 25 list. Something like this:
So, now we have a list of the top 25 source and destination pairs that are sending data outside of our organization. There are also some interesting ports in this list like 12547, 22 (SSH), 443 (HTTPS), and 29234. A system with 38.48 GB worth of data sent to a remote server seems like a bad sign and something that should be investigated. You get the idea. It's just a matter of analyzing the data and separating out what is typical vs what isn't and then digging deeper into those.
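In rough terms, the aggregation behind this analytic looks something like the sketch below. The flow tuples and the internal address range are made-up stand-ins; the real query runs in LYNXeon's pattern language against live NetFlow:

```python
from collections import defaultdict
from ipaddress import ip_address, ip_network

# Assumption for this sketch: all internal hosts live in 10.0.0.0/8.
INTERNAL = ip_network("10.0.0.0/8")

def top_talkers(flows, n=25):
    """Sum bytes sent for each internal->external (src, dst, port) tuple
    and return the n biggest senders, largest first."""
    totals = defaultdict(int)
    for src, dst, dst_port, bytes_sent in flows:
        if ip_address(src) in INTERNAL and ip_address(dst) not in INTERNAL:
            totals[(src, dst, dst_port)] += bytes_sent
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Invented sample flows: (src, dst, dst_port, bytes_sent)
flows = [
    ("10.0.0.5", "198.51.100.7", 22, 40_000_000_000),   # big SSH upload
    ("10.0.0.5", "198.51.100.7", 22, 1_000_000_000),    # same pair, summed
    ("10.0.0.9", "203.0.113.4", 443, 5_000),
    ("198.51.100.7", "10.0.0.5", 22, 9_999),            # inbound, ignored
]
for (src, dst, port), total in top_talkers(flows):
    print(f"{src} -> {dst}:{port}  {total:,} bytes")
```

The real value is in the sort: an internal host quietly pushing tens of gigabytes over SSH floats straight to the top of the list.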
My advice is to run this report on an automated schedule at least daily so that you can quickly detect when data loss has begun in order to squash it at the source. You could probably argue that an attacker might take a low and slow approach to remain undetected by my report, and you'd probably be right, but I'd also argue that if this were the case, then I've hopefully slowed them enough to catch them another way within a reasonable timespan. Remember, security is all about defense in depth and with the many significant issues that are highlighted by the Verizon DBIR, we could use all of the defense we can muster.
I am going to start out here by saying that I do not now, nor have I ever, held the title of Chief Information Security Officer (CISO). That having been said, I do effectively fill this role as the Information Security Program Owner for a large, $1B+ per year, public company. Some of what follows will be a diatribe on my current role and what I would change if given the opportunity. Some of it will be based on general observations of how I've seen other companies handle internal security. What follows are six reasons why your company needs a Chief Information Security Officer (CISO).
Let's start out with how I got my current title. Early in my career I was a *nix Administrator working for a number of different companies. I did everything from working as support at a website hosting company to building systems as a military contractor. Even though my official title never had anything to do with security, I have always had a passion for it, so I always found a way to make it part of my job. Fast forward to about seven years ago, when I got a job as a Web Systems Engineer at my current employer. I quickly realized that there was nobody handling security for our systems, so I decided to shoulder that responsibility. I began by running Qualys scans, analyzing the results, and fixing the vulnerabilities. Since this was not my primary role, all of this work was performed in about 5% of my overall time, but I was able to keep good metrics and show fantastic results over time. After several years of working like this, I finally made the decision to dedicate myself to security full-time. I got a job offer from another company to be a Security Engineer, but decided to see if my current company was interested in allowing me a similar move. After some discussions and a few presentations on what the job would entail, I was officially allowed to spend 100% of my time on security. The only catch was that I was now responsible for our IT SOX testing as well. Now came the question of what to call my new role. A "C"-anything title is reserved for our executives, so that took CISO off the table immediately. Sad. I was also not allowed to use the term "Manager" in my title since that indicated having people underneath me. Even sadder. I finally settled on "Information Security Program Owner" as it indicated an ownership role in security (as close to Manager as I could get) while staying away from those other non-sanctioned titles.
Alright, so what does any of that have to do with needing a CISO you ask? To start with, I was the only security professional in the entire company of roughly 5,000 employees at the time. While officially my purview was in the area of IT security for the enterprise, the lack of any other experts quickly made me a hot commodity. I was asked to participate on various architecture teams, several teams having to do with regulatory compliance, and even to consult with our R&D teams on product security from time to time. I'd like to believe that it was because I am so awesome that people couldn't get enough of me, but the God's honest truth is that the entire company had a need and desire for security and there wasn't anyone else to assist. Which leads me to the first reason why your company needs a Chief Information Security Officer:
Reason #1: By definition, the CISO is where the buck stops as far as security is concerned for your organization. It is the CISO's job to make sure that security is a concerted effort and that your efforts are not inefficiently duplicated in multiple business units. Without a CISO, you may have operational security, but you likely lack direction or a long-term plan for an actual security program.
Now, while my title says that I "own" the security program, the fact is that I am not officially a manager or executive. Thus, on an official level, I pull about as much weight as any other individual contributor in the organization. It's a precarious position to be in. On one hand, I'm charged with ensuring the security of everyone and everything in the company. Sometimes this can require being the bad guy and telling people their stuff is broken. On the other hand, I don't hold enough power to actually force any action that others don't want to take. Perhaps I'll write a future post about how I've managed to still get things done despite this dilemma, but for now this leads me to the second reason why your company needs a Chief Information Security Officer:
Reason #2: Designating one of your senior security resources as the CISO is a form of empowerment. You are making a statement that they are the person you trust to make informed security decisions for the organization. It helps if you can have them report to another C-level executive, like the CFO, but the most important thing here is the title itself: "Chief Information Security Officer" says that they are in charge of everything security (everything Information Security, if you want to get technical) for your organization. This helps tremendously in ensuring that security is still a priority when business turns political.
When you hear the title Chief Information Security Officer, what do you think of? Maybe the IT guy who handles the IPS system? The guy who goes running around when a system is infected with malware? Maybe even the guy who wrote the Information Security Policy, if you're lucky? Your CISO should be all these things and so much more. This leads me to the third reason why your company needs a Chief Information Security Officer:
Reason #3: Your CISO is all things security. Wikipedia does a great job listing some of the many roles of the Chief Information Security Officer so I'm just going to steal them and list them here:
- Information Security and Information Assurance
- Information Regulatory Compliance (PCI, SOX, HIPAA, etc)
- Information Risk Management
- Supply Chain Risk Management
- Information Technology Controls
- Information Privacy
- Computer Emergency Response Team
- Identity and Access Management
- Security Architecture
- IT Investigations, Digital Forensics, and eDiscovery
- Disaster Recovery and Business Continuity Management
- Information Security Operations Center
Obviously, one person cannot handle all of these things, which is why most companies have a team of security professionals (i.e., Information Security Officers) who report up to the CISO, but this should give you an idea of the wide scope of what the CISO is responsible for. Chances are that if you don't have a CISO, then many of these activities aren't happening. Even worse, the ones that are happening likely aren't aligned with your business objectives. It's tough to justify spending any money on a program when it performs activities ad hoc and completely separate from your business. Which leads me to the fourth reason why your company needs a Chief Information Security Officer:
Reason #4: Your CISO is a business executive that spans into the technical world of security as well. They should be involved in the business decisions of the company so that they can ensure that the company's security activities are well-aligned with the projects that the business is undertaking.
Hopefully, your senior security professional is an extremely valued member of your team. If you are holding off on giving them an official Chief Information Security Officer title, then you are doing both them and your company a disservice. Security companies are constantly organizing events targeted at the executives who control the security purse strings. Sometimes they call them CISO Roundtables, Summits, or otherwise, but the gist of it is that they are a form of education for the CISO and provide them with the opportunity to network with other security professionals in the area, all on somebody else's dime. The catch is that you're only invited if you're a CISO. This leads me to the fifth reason why your company needs a Chief Information Security Officer:
Reason #5: The title of CISO is synonymous with "the person in charge of security" for your company and worlds of opportunity open up for them when you bestow upon them that title. It means free lunches, free trainings, and a host of other perks that unfortunately aren't available with a title like "Security Manager" or "Senior Security Engineer". Think of it as a job perk that doesn't cost your company a thing.
Before I wrap this up, I have one final reason why your company needs a Chief Information Security Officer, but it's certainly not for everyone. Occasionally, you'll find a person who is both technically talented and has an affinity and desire for public speaking. If this is your senior security person, then it's time to lock them down, as they have the ability to do more positive marketing for your company than your entire marketing department. This leads me to the sixth and final reason why your company needs a Chief Information Security Officer:
Reason #6: If your CISO is willing and able to give engaging talks about security-related topics, then that person, with that title, can make a world of difference for your organization from a marketing perspective. Conferences are always looking for new and interesting talks and attendees often consider the speakers as industry luminaries. No marketing whitepaper will ever come close to the exposure potential of having your own industry expert, presenting on a fantastic topic, using a company branded slide deck, in front of hundreds of security professionals.
There you have my six reasons why your company needs a Chief Information Security Officer. I hope that this was helpful on your path to becoming, or designating, your company's ultimate CISO. Feel free to add your own thoughts in the comments below.
Last year I gave a talk at a number of different conferences called "The Magic of Symbiotic Security: Creating an Ecosystem of Security Systems" in which I spoke about how, if we can break our security tools out of their silos, they become far more useful. Lately, I've been doing a lot of work at my company in identifying systems infected by malware and getting rid of the infections because, as you are hopefully aware, the presence of malware on your systems is equivalent to hackers on your network. Malware can give the controller backdoor access to the system, allow them to scan the network for other devices to compromise, give them a platform to launch additional attacks from, and enable them to exfiltrate data out of the network. I have a few different tools, which I'll highlight later, that do some really cool things on their own, but when you combine their functionality together, you open up a whole new world of possibilities.
The first tool that I wanted to talk about is for malware analysis. In our case this is FireEye, but this could just as easily be Damballa, Bit9, or any other technology that will allow you to identify IP addresses of hosts infected by malware, servers hosting malware objects, and command and control servers. Alone, this tool identifies a single client-to-server relationship, but it does provide a pattern that we can use as a template to find similar issues in our environment where perhaps we do not have coverage with this device. Now that we have identified the patterns that we are looking for, we need to find a way to discover additional instances of those patterns. This brings me to our second tool.
The second tool is for NetFlow analysis. In case you are unfamiliar with NetFlow, it is a feature of most network devices that creates summary information about the network activity that is running through them. It includes the source and destination IP addresses, source and destination ports, protocols, and bytes transferred. Specifically, we need a NetFlow analysis tool that is capable of showing us connections between our internal systems and systems on the Internet. In our case, we use a product called LYNXeon to do this. Alone, LYNXeon does a good job of allowing us to visualize connections from one system to another, but finding the systems related to malware issues can often be a needle in a haystack because of the NetFlow limitations mentioned above. So while our malware connections (downloads and command-and-control) are buried in the NetFlow data, we really have no way to identify them in the NetFlow tool silo.
Now comes the fun part. One of the cool things about the FireEye system is that it provides us with the ability to export data and one of the cool things about the LYNXeon system is that it provides us with the ability to import data and tag it. So what we do is, in FireEye, we export the list of all systems that we have detected as having been infected by malware. We also export the list of all of the command and control servers and malware hosting servers that we have seen connections to. Next, we go into LYNXeon and tell it to import these two lists of IP addresses and tag them with a custom tag that we created called "FireEye". We have now successfully combined these two tools and the payoff is huge.
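Conceptually, the import-and-tag step is just a join between the exported IOC addresses and the flow data. A toy sketch of that join, with invented addresses and a simplified flow format (the real work happens inside FireEye's export and LYNXeon's import/tagging features):

```python
# Stand-in for the two FireEye exports: infected hosts plus
# command-and-control / malware-hosting servers. Addresses are invented.
FIREEYE_IOCS = {
    "203.0.113.66",   # known command-and-control server
    "10.0.0.23",      # host FireEye flagged as infected
}

def tag_flows(flows, iocs, tag="FireEye"):
    """Return (flow, tags) pairs; a flow gets the tag if either
    endpoint appears in the imported IOC list."""
    tagged = []
    for flow in flows:
        src, dst = flow[0], flow[1]
        tags = [tag] if (src in iocs or dst in iocs) else []
        tagged.append((flow, tags))
    return tagged

flows = [
    ("10.0.0.23", "203.0.113.66", 443),   # infected host calling its C2
    ("10.0.0.50", "198.51.100.1", 80),    # ordinary web traffic
]
results = tag_flows(flows, FIREEYE_IOCS)
```

Once the tag is attached, every downstream query can pivot on it instead of on a raw list of addresses, which is what makes the combination pay off.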
Success #1: Detecting the Spread of Malware on Your Network
Our FireEye system works by executing downloads inside of a virtual machine and analyzing the effect they have on the system. Because the virtual machine doesn't always match the target system, in many cases we are only able to tell that it was malware and not that the malware actually infected the system. Using LYNXeon, however, we can create special queries that will show us all connectivity from the potentially infected system after the time of the malware download. Did the system immediately make connections to other foreign systems on the Internet? Did it start scanning our internal network looking for other hosts to compromise? All this and more is possible now that we have identified a potentially infected system on our network. Here is a pattern file which I created in LYNXeon to do this:
And here is the pattern diagram which this query accomplishes:
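In rough Python terms, the query amounts to a time-bounded filter on the suspect host's outbound flows. This is a sketch of the idea only; the addresses, timestamps, and 24-hour window are illustrative assumptions, and the real query runs in LYNXeon's pattern language:

```python
from datetime import datetime, timedelta

def activity_after(flows, suspect_ip, infection_time, window_hours=24):
    """All flows originating from the suspect host in the window
    after the malware download was observed."""
    cutoff = infection_time + timedelta(hours=window_hours)
    return [
        (ts, dst, dst_port)
        for ts, src, dst, dst_port in flows
        if src == suspect_ip and infection_time <= ts <= cutoff
    ]

t0 = datetime(2014, 5, 1, 12, 0)  # time FireEye saw the download
flows = [
    (t0 + timedelta(minutes=5), "10.0.0.23", "203.0.113.66", 443),  # callback?
    (t0 + timedelta(minutes=9), "10.0.0.23", "10.0.0.24", 445),     # internal scan?
    (t0 - timedelta(hours=1),   "10.0.0.23", "198.51.100.9", 80),   # before infection
]
hits = activity_after(flows, "10.0.0.23", t0)
```

An immediate callback to a foreign server followed by SMB probes of a neighbor is exactly the post-infection behavior this query is meant to surface.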
Success #2: Finding Other Infected Systems
FireEye appliances aren't free, and with offices in over 40 countries around the world, getting full coverage can get expensive. But, if we can use a handful of appliances to get an idea of where our systems talk to when compromised, then we have data which we can turn around and use in places where we do not have those appliances. Because we are sending NetFlow data from our devices around the world into LYNXeon, we can search for any connections to these common malware servers. No more needle in a haystack. The data is all there; we just needed to know how to look for it. Here is a pattern file which I created in LYNXeon to do this:
And here is the pattern diagram which this query accomplishes:
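The idea reduces to a simple set lookup: any internal host seen talking to a known malware server is a candidate infection, even at an office with no appliance. A sketch with invented addresses, standing in for the real LYNXeon pattern:

```python
# Servers tagged from the FireEye exports (addresses invented for illustration).
MALWARE_SERVERS = {"203.0.113.66", "198.51.100.200"}

def likely_infected_hosts(flows, malware_servers):
    """Internal hosts that contacted any known malware/C2 server --
    candidate infections at sites with no FireEye coverage."""
    return {src for src, dst in flows if dst in malware_servers}

flows = [
    ("10.1.4.8", "203.0.113.66"),    # office with no appliance: caught anyway
    ("10.2.9.3", "198.51.100.200"),
    ("10.0.0.5", "192.0.2.10"),      # destination not on the list
]
infected = likely_infected_hosts(flows, MALWARE_SERVERS)
```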
Success #3: Discovering Other Types of Attacks
Oftentimes our adversaries aren't just trying one type of attack and giving up when it fails. They are trying every trick in their arsenal to gain and maintain a foothold on your network with whatever method they can. Once we've identified an attacker's IP address, we can now use our NetFlow data to see all other traffic coming from that IP address. Expanding these types of relationships can often shed light on other activities they are performing on your network. Perhaps they are performing reconnaissance on your servers? Maybe they are trying to DoS one of your systems? The fact is that once they've been uncovered as a bad guy on your network, you should be wary of all activities performed by them. Maybe even ban their IP address altogether. Here is a pattern file which I created in LYNXeon to do this:
And here is the pattern diagram which this query accomplishes:
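Pivoting on the attacker's address can be sketched as grouping every flow that touches it, here by destination port. The addresses and flow format are invented for illustration; the real pattern runs in LYNXeon:

```python
def pivot_on_attacker(flows, attacker_ip):
    """Group every flow touching the attacker's IP by destination port,
    to surface recon sweeps, DoS attempts, or other activity from the
    same source."""
    by_port = {}
    for src, dst, dst_port in flows:
        if attacker_ip in (src, dst):
            by_port.setdefault(dst_port, []).append((src, dst))
    return by_port

flows = [
    ("203.0.113.66", "10.0.0.5", 22),    # SSH attempts across several hosts
    ("203.0.113.66", "10.0.0.6", 22),    # look a lot like reconnaissance
    ("203.0.113.66", "10.0.0.7", 80),
    ("198.51.100.1", "10.0.0.5", 443),   # unrelated source, ignored
]
profile = pivot_on_attacker(flows, "203.0.113.66")
```

A burst of port-22 attempts against sequential internal hosts, all from the address we already flagged, tells a very different story than a single malware download.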
So there you have it. By combining our malware analysis using FireEye and our NetFlow analysis using LYNXeon, we have created a hybrid system capable of far more than either of these tools by themselves. This is the magic of symbiotic security in action. Our tools become infinitely more powerful when we are able to share the data between them. Hopefully you will take that into consideration the next time you are looking at purchasing a security tool.
I recently had the opportunity to play with a data analytics platform called LYNXeon by a local company (Austin, TX) called 21CT. The LYNXeon tool is billed as a "Big Data Analytics" tool that can assist you in finding answers among the flood of data that comes from your network and security devices and it does a fantastic job of doing just that. What follows are some of my experiences in using this platform and some of the reasons that I think companies can benefit from the visualizations which it provides.
Where I work, data on security events is in silos all over the place. First, there's the various security event notification systems that my team owns. This consists primarily of our IPS system and our malware prevention system. Next, there are our anti-virus and end-point management systems which are owned by our desktop security team. There's also event and application logs from our various data center systems which are owned by various teams. Lastly, there's our network team who owns the firewalls, the routers, the switches, and the wireless access points. As you can imagine, when trying to reconstruct what happened as part of a security event, the data from each of these systems can play a significant role. Even more important is your ability to correlate the data across these siloed systems to get the complete picture. This is where log management typically comes to play.
Don't get me wrong. I think that log management is great when it comes to correlating the siloed data, but what if you don't know what you're looking for? How do you find a problem that you don't know exists? Enter the LYNXeon platform.
The base of the LYNXeon platform is flow data obtained from your various network devices. Regardless of whether you use Juniper JFlow, Cisco NetFlow, or one of the many other flow data options, knowing the data that is going from one place to another is crucial to understanding your network and any events that take place on it. Flow data consists of the following:
- Source IP address
- Destination IP address
- IP protocol
- Source port
- Destination port
- IP type of service
Flow data also can contain information about the size of the data on your network.
The default configuration of LYNXeon allows you to visually (and textually) analyze this flow data for issues, which is immediately useful. LYNXeon Analyst Studio comes with a bunch of pre-canned reports which allow you to quickly sort through your flow data for interesting patterns. For example, once a system has been compromised, the next step for the attacker is oftentimes data exfiltration. They want to get as much information out of the company as possible before they are identified and their access is squashed. LYNXeon provides you with a report to identify the top destinations in terms of data size for outbound connections. Some other extremely useful reporting that you can do with basic flow data in LYNXeon:
- Identify DNS queries to non-corporate DNS servers.
- Identify the use of protocols that are explicitly banned by corporate policy (P2P? IM?).
- Find inbound connection attempts from hostile countries.
- Find outbound connections via internal protocols (SNMP?).
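The first report in that list is easy to express over flow data. A minimal sketch, assuming (purely for illustration) that 10.0.0.53 is the only sanctioned corporate resolver:

```python
# Assumption for this sketch: the only sanctioned internal resolver.
CORPORATE_DNS = {"10.0.0.53"}

def rogue_dns_queries(flows):
    """Flag UDP/53 traffic to any server that isn't a corporate resolver:
    a classic sign of misconfiguration or DNS-tunneling exfiltration."""
    return [
        (src, dst)
        for src, dst, dst_port, proto in flows
        if dst_port == 53 and proto == "UDP" and dst not in CORPORATE_DNS
    ]

flows = [
    ("10.0.0.8", "10.0.0.53", 53, "UDP"),     # fine: corporate resolver
    ("10.0.0.8", "8.8.8.8", 53, "UDP"),       # flagged: external resolver
    ("10.0.0.9", "203.0.113.1", 443, "TCP"),  # not DNS, ignored
]
suspects = rogue_dns_queries(flows)
```

The banned-protocol and hostile-country reports follow the same shape: a predicate over the five basic flow fields, applied across the whole data set.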
It's not currently part of the default configuration of LYNXeon, but they have some very smart guys working there who can provide services around importing pretty much any data type you can think of into the visualizations as well. Think about the power of combining the data of what is talking to what along with information about anti-virus alerts, malware alerts, intrusion alerts, and so on. Now, not only do you know that there was an alert in your IPS system, but you can track every system that target talked with after the fact. Did it begin scanning the network for other hosts to compromise? Did it make a call back out to China? These questions and more can be answered with the visual correlation of events through the LYNXeon platform. This is something that I have never seen a SIEM or other log management company be able to accomplish.
LYNXeon probably isn't for everybody. While the interface itself is quite easy to use, it still requires a skilled security professional at the console to be able to analyze the data that is rendered. And while the built-in analytics help tremendously in finding the proverbial "needle in the haystack", it still takes a trained person to be able to interpret the results. But if your company has the expertise and the time to go about proactively finding problems, it is definitely worth looking into both from a network troubleshooting (something I really didn't cover) and security event management perspective.
Much like many other companies these days, National Instruments hires many of our developers straight out of school. Many times when engaging with these new hire developers, I will ask them what kind of security they learned at their university. In almost all cases I've found that the answer hasn't changed since I graduated back in 2002. Occasionally I'll get a developer who mentions one particular professor or class where they discussed secure coding practices, but most of the time the answer is "I didn't learn security in school". This absolutely kills me. It's like asking an architect to design a building without them knowing anything about support structures and load distribution. The end result may look awesome on the outside, but the slightest breeze will knock it over. With computers being embedded into literally every aspect of our society, do you really want code that crumbles the moment a user does something other than what was explicitly intended?
This leads me to the conclusion that security should be considered a fundamental part of code development and not an afterthought. We should be teaching security to students at the university level so that when they graduate, corporations don't spend valuable time re-training them on proper development techniques. I've heard rumors of large companies like Oracle actually being able to impact college curricula by telling universities they simply won't hire developers without security training. Unfortunately, most companies aren't in a position to make demands like that, but it certainly wouldn't hurt to develop relationships with faculty at your local university and tell them what you'd like to see out of their students. I did some poking around on the internet and it seems like some professors are already starting to get the memo. For example, I found a great paper written by three professors at the USAF Academy Dept. of Computer Science called Incorporating Security Issues Throughout The Computer Science Curriculum where they say:
While the general public is becoming more aware of security issues, what are our universities doing to produce graduates ready to address our security needs? Computer science as a discipline has matured to the point that students are regularly instructed in software engineering principles--they learn the importance of life cycle issues in the development and maintenance of software. Where are they receiving similar instruction on security concerns in the software life cycle? The authors propose that security should be taught throughout every computer science curriculum--that security should always be a concern and should be considered in the development of all software just as structured programming and documentation are.
Gentlemen, I couldn't agree more. Security needs to be a foundational piece of every Computer Science program in the country. Not one class. Not one professor. Secure programming techniques need to be a consideration in every CS class in every university. Universities teach students how to write functions, create object-oriented code, and do proper documentation, but when graduates don't know the basic tenets of input validation, then we have a real problem. If you agree with me, then I challenge you to write to the Dean of your local CS program and ask them what they are doing to ensure graduates are familiar with secure coding practices. I'd be very interested in hearing back from you as to what their response was.
Notice anything wrong with this picture?
I was walking by one of the Iron Mountain Secure Shredding bins at work one day several months ago and noticed that the lock wasn't actually locked. Being the security conscious individual that I am, I tried to latch the lock again, but it was so rusted that it wouldn't close no matter how hard I tried. I couldn't just leave it like that, so I called the number on the bin's label and got an automated message telling me that they weren't taking local calls anymore and giving me a different number to try. I called that number and they asked me for my company ID number, which I had no idea what it was. The woman informed me that without that ID number I couldn't submit a support request. I informed her that this bin contained sensitive personal and financial information and that the issue couldn't wait for some random company ID to be found. Fortunately, she gave in and created the support ticket for me, saying that I should hear back from someone within four hours.
One week later, on Friday, Iron Mountain finally calls me back and says that they will come to replace the lock the following Monday before 5 PM. When the lock hadn't been replaced yet on Monday evening, I called Iron Mountain back up. Looking at their records, they showed that a new lock had been delivered, but they had no idea where and the signature was illegible. I work on a three-building campus with 14 floors between them and almost 3,000 people. If they can't tell me where the lock is, then there's no way for me to track it down. They said that they would investigate and call me back.
After not hearing back from them again for a couple of days, I called them back. The woman I spoke with had no real update on the investigation. She said that she would send another message "downstairs" and escalate to her supervisor. At this point it had been almost three weeks with sensitive documents sitting in a bin with a malfunctioning lock. The next day they called me back and said they were never able to track down who the new lock was left with, so they would bring us a new one at no charge. Finally, after a total of 24 days with an unlocked Secure Shredding bin, Iron Mountain was able to replace the lock. Iron Mountain......FAIL.
Part of my new role as the Information Security Program Owner at NI is taking care of our regulatory compliance concerns, which means I spend quite a bit of time dealing with auditors. Now, auditors are nice people, and I want to preface what I'll say next by saying that I think auditors do perform a great service to companies. I'm sure that most of them are hard workers and understand compliance requirements probably better than I do, but they just don't understand security.
As a case in point, we're in the middle of our annual audit by one of those "Big Four" audit firms, which I won't name here to protect the innocent. I sent an email checking in with our auditors to make sure that they had everything they needed before we went into our four-day holiday weekend. They said that they had received everything except for documentation on "privileged users from the current OS and Database environments" as well as "evidence of current password settings from the application servers, OS, and Database". We went through a round of translation from Auditorese to Techie and figured out that they wanted exports of some specific user, profile, role, and privilege tables from the database and copies of /etc/passwd, /etc/shadow, and /etc/group from the servers.
So we obtained the requested documentation, and I shot them back an email to find out their proposed method for transferring the files. Secure FTP? No. PGP encryption? Nope. Their response was astonishing:
How large do you think they'll be? Email should be fine.
Seriously? These are the people we're paying to verify that we're properly protecting our systems, and they're suggesting that sending our usernames and password hashes via cleartext email is an appropriate method of file transfer. I responded:
I'm not really concerned about the size of the files, but rather, the data that they contain. Sending files containing the users, groups, and password hashes for our financial systems via cleartext is probably not a good plan considering the point of this process is protecting that data.
And they respond with:
Whatever you'd like Josh. As long as you have the files as of today, we're good.
So now I'm convinced that auditors (or at least these auditors) view security as nothing more than a checklist. The people telling me what I need to do in order to protect my systems really have no clue about the fundamentals of security. If it's not on their checklist, then it must not be of importance. In this particular situation it may be easier or more convenient to send the documents via email, but any security professional worth their salt would tell you that's neither secure nor appropriate for that data. Either our auditors hold themselves to a very different standard than the rest of us security professionals, or they just don't understand security unless it's on a checklist.
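For what it's worth, doing this right takes all of a minute. Here's a minimal sketch using GnuPG with a shared passphrase for illustration; in practice you'd encrypt to the auditor's public key instead, and every filename and the passphrase below are hypothetical stand-ins:

```shell
# Hypothetical filenames; a real export would be the actual
# /etc/passwd, /etc/shadow, and database table dumps.
printf 'stand-in for the real export\n' > users-export.txt
tar czf audit-export.tar.gz users-export.txt

# Encrypt the archive with AES256 before it ever touches email.
# A public-key recipient (--recipient) is preferable to a passphrase.
gpg --batch --yes --pinentry-mode loopback --passphrase 'example-passphrase' \
    --symmetric --cipher-algo AES256 \
    --output audit-export.tar.gz.gpg audit-export.tar.gz

# Remove the cleartext copies; only the ciphertext gets attached.
rm audit-export.tar.gz users-export.txt
```

The specific tool doesn't matter much; SFTP or even a properly encrypted archive would have been fine. The point is that user lists and password hashes should never travel in the clear.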
This presentation was by Boaz Belboard, the Executive Director of Information Security for Wireless Generation and the Project Leader for the OWASP Security Spending Benchmarks Project. My notes are below:
It does cost more to produce a secure product than an insecure product.
Most people will still shop somewhere, go to a hospital, or enroll in a university after they have had a data breach.
Why do we spend on security? How much should we be spending?
- Security imposes extra costs on organizations
- The "security tax" is relatively well known for network and IT security - 5 to 10% (years of Gartner, Forrester, and other studies)
- No comparable data for development or web apps
- Regulations and contracts usually require "reasonable measures". What does that mean?
OWASP Security Spending Benchmarks Project
- 20 partner organizations, many contributors
- Open process and participation
- Raw data available to community
Reasons For Investing in Security
- Contractual and Regulatory Compliance
- Incident Prevention, Risk Mitigation
- Cost of Entry
- Competitive Advantage
Technical and Procedural Principles
- Managed and Documented Systems
- Business-need access
- Minimization of sensitive data use
- Security in Design and Development
- Auditing and Monitoring
- Defense in Depth
Specific Activities and Projects
- Security Policy and Training
- DLP-Type Systems
- Internal Configuration Management
- Credential Management
- Security in Development
- Locking down internal permissions
- Secure Data Exchange
- Network Security
- Application Security Programs