In a recent Amazon AWS newsletter, Amazon asked the following:
Some customers have asked us about ways to easily convert virtual machines from VMware vSphere, Citrix Xen Server, and Microsoft Hyper-V to Amazon EC2 instances - and vice versa. If this is something that you're interested in, we would like to hear from you. Please send an email to firstname.lastname@example.org describing your needs and use case.
I'll share my reply here for comment!
This is a killer feature that allows a number of important activities.
1. Product VMs. Many suppliers are starting to provide third-party products in the form of VMs instead of installable software, either to ease install complexity or in an attempt to move from a hardware-appliance approach to a more software-based one. Right now that pretty much prevents their use in EC2. <cue sad music> As opposed to "Hey, if you can VM-ize your stuff, then you're pretty close to being able to offer it as an Amazon AMI or even a SaaS offering." <schwing!>
2. Leveraging VM Investments. Any organization that already has a VM infrastructure can reduce cost and complexity by managing images the same way in both places. It also enables the much-promised but under-delivered "cloud bursting" model, where you run the same systems locally and use Amazon for excess capacity. In the current scheme I could make some AMIs "mostly" like my local VMs - but "close" is not good enough to use in production.
3. Local testing. I'd love to be able to bring my AMIs "down to me" for rapid redeploy. I often find myself having to transfer 2.5 gigs of software up to the cloud, install it, find a problem, have our devs fix it and cut another release, transfer it up again (2 hour wait time again, plus paying $$ for the transfer)...
4. Local troubleshooting. We get an app installed up in the cloud and it's not acting quite right and we need to instrument it somehow to debug. This process is much easier on a local LAN with the developers' PCs with all their stuff installed.
5. Local development. A lot of our development exercises the Amazon APIs. This is one area where Azure has a distinct advantage and can be a threat; in Visual Studio there is a "local Azure fabric" and a dev can write their app and have it running "in Azure" but on their machine, and then when they're ready deploy it up. This is slightly more than VM consumption, it's VMs plus Eucalyptus or similar porting of the Amazon API to the client side, but it's a killer feature.
Xen or VMware would be fine - frankly, this would be big enough for us that I'd change virtualization solutions to whichever one worked with EC2.
I just asked one of our developers for his take on value for being able to transition between VMs and EC2 to include in this email, and his response is "Well, it's just a no-brainer, right?" Right.
What Is Microsoft Azure?
I'm going to attempt to explain Microsoft Azure in "normal Web person" language. Like many of you, I am more familiar with Linux/open source type solutions, and like many of you, my first forays into cloud computing have been with Amazon Web Services. It can often be hard for people not steeped in Redmondese to understand exactly what the heck they're talking about when Microsoft people try to explain their offerings. (I remember a time some years ago I was trying to get a guy to explain some new Microsoft data access thing with the usual three letter acronym name. I asked, "Is it a library? A language? A protocol? A daemon? Branding? What exactly is this thing you're trying to get me to uptake?" The reply was invariably "It's an innovative new way to access data!" Sigh. I never did get an answer and concluded "Never mind.")
Microsoft has released their new cloud offering, Azure. Our company is a close Microsoft partner since we use a lot of their technologies in developing our company's desktop software products, so as "cloud guy" I've gotten some in-depth briefings and even went to PDC this year to learn more (some of my friends who have known me over the course of my 15 years of UNIX administration were horrified). "Cloud computing" is an overloaded enough term that it's not highly descriptive, and it took a while to cut through the explanations to understand what Azure really is. Let me break it down for you and explain the deal.
Point of Comparison: Amazon (IaaS)
In Amazon EC2, as hopefully everyone knows by now, you are basically given entire dynamically-provisioned, hourly-billed virtual machines that you load OSes on and install software and all that. "Like servers, but somewhere out in the ether." Those kinds of cloud offerings (e.g. Amazon, Rackspace, most of them really) are called Infrastructure as a Service (IaaS). You're responsible for everything you normally would be, except for the data center work. Azure is not an IaaS offering but still bears a lot of similarities to Amazon; I'll get into details later.
Point of Comparison: Google App Engine (PaaS)
Take Google's App Engine as another point of comparison. There, you just upload your Python or Java application to their portal and "it runs on the Web." You don't have access to the server or OS or disk or anything. And it "magically" scales for you. This approach is called Platform as a Service (PaaS). They provide the full platform stack, you only provide the end application. On the one hand, you don't have to mess with OS level stuff - if you are just a Java programmer, you don't have to know a single UNIX (or Windows) command to transition your app from "But it works in Eclipse!" to running on a Web server on the Internet. On the other hand, that comes with a lot of limitations that the PaaS providers have to establish to make everything play together nicely. One of our early App Engine experiences was sad - one of our developers wrote a Java app that used a free XML library to parse some XML. Well, that library had functionality in it (that we weren't using) that could write XML to disk. You can't write to disk in App Engine, so its response was to disallow the entire library. The app didn't work and had to be heavily rewritten. So it's pretty good for code that you are writing EVERY SINGLE LINE OF YOURSELF. Azure isn't quite as restrictive as App Engine, but it has some of that flavor.
Windows Azure falls between the two. First of all, Azure is a real "hosted cloud" like Amazon Web Services, like most of us really think about when we think cloud computing; it's not one of these on-premises things that companies are branding as "cloud" just for kicks. That's important to say because it seems like nowadays the larger the company, the more they are deliberately diluting the term "cloud" to stick their products under its aegis. Microsoft isn't doing that; this is a "cloud offering" in the classical (where classical means 2008, I guess) sense.
However, in a number of important ways it's not like Amazon. I'd definitely classify it as a PaaS offering. You upload your code to "Roles" which are basically containers that run your application in a Windows 2008(ish) environment. (There are two types - a "Web role" has a stripped down IIS provided on it, a "Worker role" doesn't - the only real difference between the two.) You do not have raw OS access, and cannot do things like write to the registry. But, it is less restrictive than App Engine. You can bundle up other stuff to run in Azure - even run Java apps using Apache Tomcat. You have to be able to install whatever you want to run "xcopy only" - in other words, no fancy installers, it needs to be something you could just copy the files to a Windows PC, without administrative privilege, and run a command from the command line and have it work. Luckily, Tomcat/Java fits that description. They have helper packs to facilitate doing this with Tomcat, memcached, and Apache/PHP/MediaWiki. At PDC they demoed Domino's Pizza running their Java order app on it and a WordPress blog running on it. So it's not only for .NET programmers. Managed code is easier to deploy, but you can deploy and run about anything that fits the "copy and run command line" model.
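The "xcopy only" constraint above boils down to: a deployable is just a directory you copy plus a command you run, with no installer and no admin rights. Here's a minimal Python sketch of that idea - the package layout and entry command are made up for illustration, not anything Azure-specific:

```python
import shutil
import subprocess
import sys
import tempfile
from pathlib import Path

def xcopy_deploy(package_dir: str, entry_cmd: list[str]) -> str:
    """Deploy in the 'xcopy only' style: copy files to a fresh
    directory (no installer, no registry writes, no elevation)
    and run the entry command from there."""
    target = Path(tempfile.mkdtemp(prefix="role-"))
    # A plain file copy stands in for xcopy.
    shutil.copytree(package_dir, target / "app")
    result = subprocess.run(
        entry_cmd, cwd=target / "app",
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Build a toy "package": one script launched from a plain command
# line, the way Tomcat's startup script would be in a real role.
pkg = Path(tempfile.mkdtemp(prefix="pkg-"))
(pkg / "run.py").write_text("print('service up')")
print(xcopy_deploy(str(pkg), [sys.executable, "run.py"]))
```

Tomcat and memcached pass this test; most Windows commercial software, shipped as an installer, does not.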
I find this approach a little ironic, actually. It's been a lot easier for us to get the Java and open source (well, the ones with Windows ports) parts of our infrastructure running on Azure than the Windows parts! Everybody provides Windows stuff with an installer, of course, and you can't run installers on Azure. Anyway, in its core computing model it's like Google App Engine - it's more flexible than that (good) but it doesn't do automatic scaling (bad). If it did autoscaling, I'd be willing to say "It's better than App Engine in every way."
In other ways, it's a lot like Amazon. They offer a variety of storage options - blobs (like S3), tables (like SimpleDB), queues (like SQS), drives (like EBS). They have an integral CDN. They do hourly billing. Pricing is pretty similar to Amazon - it's hard to compare totally apples to apples, but Azure compute is $0.12/hr and an Amazon small Windows instance is $0.12/hr (coincidence? I think not). And you have to figure out scaling and provisioning yourself on Amazon too - or pay a lot of scratch to one of the provisioning companies like RightScale.
What's Unique and Different
Well, the largest thing that I've already mentioned is the PaaS approach. If you need OS level access, you're out of luck; if you don't want to have to mess with OS management, you're in luck! So to the first order of magnitude, you can think of Azure as "like Amazon Web Services, but the compute uses more of a Google App Engine model."
But wait, there's more!
One of the biggest things that Azure brings to the table is that, using Visual Studio, you can run a local Azure "fabric" on your PC, which means you can develop, test, and run cloud apps locally without having to upload to the cloud and incur usage charges. This is HUGE. One of the biggest pains about programming for Amazon, for instance, is that if you want to exercise any of their APIs, you have to do it "up there." Also, you can't move images back and forth between Amazon and on-premises. Now, there are efforts like Eucalyptus that try to overcome some of this problem, but in the end you pretty much just have to throw in the towel and do all dev and test up in the cloud. Amazon and Eclipse (and maybe Xen) - get together and make it happen!
Here's something else interesting. In a move that seems more like a decision from a typical cranky cult-of-personality open source project, they have decided that proper Web apps need to be asynchronous and message-driven, and by God that's what you're going to do. Their load balancers won't do sticky sessions (only round robin) and time out all connections between all tiers after 60 seconds without exception. If you need more than that, tough - rewrite your app to use a multi-tier message queue/event listener model. Now on the one hand, it's hard for me to disagree with that - I've been sweating our developers, telling them that's the correct best-practice model for scalability on the Web. But again you're faced with the "Well what if I'm using some preexisting software and that's not how it's architected?" problem. This is the typical PaaS pattern of "it's great, if you're writing every line of code yourself."
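The multi-tier model they're mandating can be sketched simply: the front tier drops a message on a queue and returns immediately, and a worker tier picks the message up whenever it can, so no connection ever has to outlive the 60-second timeout. Here's a toy in-process Python sketch of that pattern (a real Azure app would use Azure Queues rather than `queue.Queue`, and the role names are just illustrative):

```python
import queue
import threading

work = queue.Queue()   # stands in for an Azure Queue
results = {}

def web_role(order_id: str) -> str:
    """Front tier: enqueue and return immediately - the HTTP
    connection is long gone before the work finishes."""
    work.put(order_id)
    return f"accepted {order_id}"

def worker_role() -> None:
    """Back tier: pull messages whenever, with no live
    connection to the front tier required."""
    while True:
        order_id = work.get()
        if order_id is None:   # shutdown sentinel
            break
        results[order_id] = f"processed {order_id}"

t = threading.Thread(target=worker_role)
t.start()
print(web_role("pizza-42"))   # returns instantly
work.put(None)                # tell the worker to stop
t.join()
print(results["pizza-42"])
```

The point of the structure is that neither tier ever blocks waiting on the other, which is exactly what a hard connection timeout forces you into.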
In many ways, Azure is meant to be very developer friendly. In a lot of ways that's good. As a system admin, however, I wince every time they go on about "You can deploy your app to Azure just by right clicking in Visual Studio!!!" Of course, that's not how anyone with a responsibly controlled production environment would do it, but it certainly does make for fast easy adoption in development. The curve for a developer who is "just" a C++/Java/.NET/whatever wrangler to get up and going on an IaaS solution like Amazon is pretty large comparatively; here, it's "go sign up for an account and then click to deploy from your IDE, and voila it's running on the Intertubes." So it's a qualified good - it puts more pressure on you as an ops person to go get the developers to understand why they need to utilize your services. (In a traditional server environment, they have to go through you to get their code deployed.) Often, for good or ill, we use the release process as a touchstone to also engage developers on other aspects of their code that need to be systems engineered better.
Now, that's my view of the major differences. I think the usual Azure sales pitch would say something different - I've forgotten two of their huge differentiators, their service bus and access control components. They are branded under the name "AppFabric," which as usual is a name Microsoft is also using for something completely different (a new true app server for Windows Server, including projects formerly code-named Dublin and Velocity - think of it as a real WebLogic/WebSphere type app server plus memcache).
Their service bus is an ESB. As alluded to above, you're going to want to use it to do messaging. You can also use Azure Queues, which is a little confusing because the ESB is also a message queue - I'm not clear on their intended differentiation really. You can of course just load up an ESB yourself in any other IaaS cloud solution too, so if you really want one you could do e.g. Apache ServiceMix hosted on Amazon. But, they are managing this one for you which is a plus. You will need to use it to do many of the common things you'd want to do.
Their access control - is a mess. Sorry, Microsoft guys. The whole rest of the thing, I've managed to cut through the "Microsoft acronyms versus the rest of the world's terms and definitions" factor, but not here. "You see, you use ACS's WIF STS to generate a SWT," says our Microsoft rep with a straight face. They seem to be excited that it will use people's Microsoft Live IDs, so if you want people to have logins to your site and you don't want to manage any of that, it is probably nice. It takes SAML tokens too, I think, though I'm not sure if the caveats around that end up equating to "Well, not really." Anyway, their explanations have been incoherent so far and I'm not smelling anything I'm really interested in behind it. But there's nothing to prevent you from just using LDAP and your own Internet SSO/federation solution. I don't count this against Microsoft because no one else provides anything like this, so even if I ignore the Azure one it doesn't put it behind any other solution.
Microsoft has said they plan to add on some kind of VM/IaaS offering eventually because of the demand. For us, the PaaS approach is a bit of a drawback - we want to do all kinds of things like "virus scan uploaded files," "run a good load balancer," "run an LDAP server", and other things that basically require more full OS access. I think we may have an LDAP direction with the all-Java OpenDS, but it's a pain point in general.
I think a lot of their decisions that are a short term pain in the ass (no installers, no synchronous connections) are actually good in the long term. If all developers knew how to develop async and did it by default, and if all software vendors, even Windows-based ones, provided their product in a form that could just be "copy and run without admin privs" to install, the world would be a better place. That's interesting in that "Sure it's hard to use now, but it'll make the world better eventually" is usually heard from the other side of the aisle.
Azure's a pretty legit offering! And I'm very impressed by their velocity. I think it's fair to say that overall Azure isn't quite as good as Amazon except for specific use cases (you're writing it all in .NET by hand in Visual Studio) - but no one else is as good as Amazon either (believe me, I evaluated them) and Amazon has years of head start; Azure is brand new but already at about 80%! That puts them into the top 5 out of the gate.
Without an IaaS component, you still can't do everything under the sun in Azure. But if you're not depending on much in the way of big third party software chunks, it's feasible; if you're doing .NET programming, it's very compelling.
Do note that I haven't focused too much on the attributes and limitations of cloud computing in general here - that's another topic - this article is meant to compare and contrast Azure to other cloud offerings so that people can understand its architecture.
I hope that was clear. Feel free to ask questions in the comments and I'll try to clarify!
After speaking with Luke Kanies at OpsCamp, and reading his good and oft-quoted article "Golden Image or Foil Ball?", I was thinking pretty hard about the use of images in our new automated infrastructure. He's pretty against them. After careful consideration, however, I think judicious use of images is the right thing to do.
My top-level thoughts on why to use images:
- Speed - Starting a prebuilt image is faster than reinstalling everything on an empty one. In the world of dynamic scaling, there's a meaningful difference between a "couple minute spinup" and a "fifteen minute spinup."
- Reliability - The more work you are doing at runtime, the more there is to go wrong. I bet I'm not the only person who has run the same compile and install on three allegedly identical Linux boxen and had it go wrong somehow on one of 'em. And the more stuff you're pulling to build your image, the more failure points you have.
- Flexibility - Dynamically building from a stem cell kinda makes sense if you're using 100% free open source and have everything automated. What if, however, you have something you need to install that just hasn't been scripted - or is very hard to script? Like an install of some half-baked Windows software that doesn't have a command line installer, and you don't have a tool that can do it? In that case, you really need to do the manual install in non-realtime as part of an image build. And of course many suppliers are providing software as images themselves nowadays.
- Traceability - What happens if you need to replicate a past environment? Having the image is going to be a 100% effective solution to that, even likely to be sufficient for legal reasons. "I keep a bunch of old software repo versions so I can mostly build a machine like it" - somewhat less so.
In the end, it's a question of using intermediate deliverables. Do you recompile all the code and every third party package every time you build a server? No, you often use binaries - it's faster and more reliable. Binaries are the app guys' equivalent of "images."
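The intermediate-deliverable argument works the same way binary caching does: hash the inputs, and rebuild only when they change. A toy Python sketch of that reasoning - the `build` step here is a hypothetical stand-in for "compile the code" or "bake the image":

```python
import hashlib

cache = {}        # artifact store: input hash -> built artifact
build_count = 0   # how many times we paid the expensive step

def build(source: str) -> str:
    """The expensive step - compiling code, or baking a machine image."""
    global build_count
    build_count += 1
    return f"artifact({source})"

def get_artifact(source: str) -> str:
    """Reuse the intermediate deliverable when inputs are unchanged -
    the same reasoning as reusing an image instead of reinstalling
    everything on every boot."""
    key = hashlib.sha256(source.encode()).hexdigest()
    if key not in cache:
        cache[key] = build(source)
    return cache[key]

get_artifact("base-os + app v1.0")   # slow path: builds
get_artifact("base-os + app v1.0")   # fast path: cache hit
get_artifact("base-os + app v1.1")   # inputs changed: rebuilds
print(build_count)  # paid the expensive step twice for three requests
```

Nobody calls binary caching a failure of build automation; images deserve the same treatment.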
To address Luke's three concerns from his article specifically:
- Image sprawl - if you use images, you eventually have a large library of images you have to manage. This is very true - but you have to manage a lot of artifacts all up and down the chain anyway. Given the "manual install" and "vendor supplied image" scenarios noted above, if you can't manage images as part of your CM system, then it's just not a complete CM system.
- Updating your images - Here, I think Luke makes some not entirely valid assumptions. He notes that once you're done building your images, you're still going to have to make changes in the operational environment ("bootstrapping"). True. But he thinks you're not going to use the same tool to do it. I'm not sure why not - our approach is to use automated tooling to build the images - you don't *want* to do it manually for sure - and Puppet/Chef/etc. works just fine to do that. So if you have to update something at the OS level, you do that and let your CM system blow everything on top - and then burn the image. Image creation and automated CM aren't mutually exclusive - the only reason people don't use automation to build their images is the same reason they don't always use automation on their live servers, which is "it takes work." But to me, since you DO have to have some amount of dynamic CM for the runtime bootstrap as well, it's a good conservation of work to use the same package for both. (Besides bootstrapping, there's other stuff like moving content that shouldn't go on images.)
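The build flow described above - boot a base box, let the CM tool converge it, then burn the image - is just an ordered pipeline. Here's a dry-run Python sketch of it; the step names are illustrative, standing in for a real Puppet/Chef run and a real EC2/VM snapshot call:

```python
def build_image(base_image: str, cm_manifests: list[str]) -> list[str]:
    """Dry run of the image bake: steps are recorded rather than
    executed, but the ordering is the point - the same CM code that
    bootstraps live servers also builds the image."""
    steps = [f"boot {base_image}"]
    # Let the CM system blow everything on top of the base OS.
    steps += [f"apply {m}" for m in cm_manifests]
    # Then burn the result as a new intermediate deliverable.
    steps.append("snapshot -> golden-image-v2")
    return steps

plan = build_image("base-os-v1", ["puppet: webserver.pp", "puppet: monitoring.pp"])
for step in plan:
    print(step)
```

At runtime you run the same manifests again for the bootstrap-only bits (hostnames, content, credentials), which is the conservation of work the paragraph argues for.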
- Image state vs running state - This one puzzles me. With images, you do need to do restarts to pull in image-based changes. But with virtually all software and app changes you have to as well - maybe not a "reboot," but a "service restart," which is virtually as disruptive. Whether you "reboot your database server" or "stop and start your database server, which still takes a couple minutes", you are planning for downtime or have redundancy in place. And in general you need to orchestrate the changes (rolling restarts, etc.) in a manner that "oh, pull that change whenever you want to Mr. Application Server" doesn't really work for.
In closing, I think images are useful. You shouldn't treat them as a replacement for automated CM - they should be interim deliverables usually generated by, and always managed by, your automated CM. If you just use images in an uncoordinated way, you do end up with a foil ball. With sufficient automation, however, they're more like Russian nesting dolls, and have advantages over starting from scratch with every box.
I went to OpsCamp this last weekend here in Austin, a get-together for Web operations folks specifically focusing on the cloud, and it was a great time! Here's my after action report.
The event invite said it was in the Spider House, a cool local coffee bar/normal bar. I hadn't been there before, but other people who had been there said, "That's insane! They'll never fit that many people! There's outside seating, but it's freezing out!" That gave me some degree of trepidation, but I still racked out in time to get downtown by 8 AM on a Saturday (sigh!). Happily, it turned out that the event was really in the adjacent music/whatnot venue also owned by Spider House, the United States Art Authority, which they kindly allowed us to use for free! There were a lot of people there; we weren't overfilling the place, but it was definitely at capacity, with nearly 100 people there.
I had just heard of OpsCamp through word of mouth, and figured it was just going to be a gathering of local Austin Web ops types. Which would be entertaining enough, certainly. But as I looked around the room I started recognizing a lot of guys from Velocity and other major shows; CEOs and other high-ranking guys from various Web ops related tool companies. Sponsors included John Willis and Adam Jacob (creator of Chef) from Opscode, Luke Kanies from Reductive Labs (creator of Puppet), Damon Edwards and Alex Honor from DTO Solutions (formerly ControlTier), Mark Hinkle and Matt Ray from Zenoss, Dave Nielsen (CloudCamp), Michael Coté (Redmonk), Bitnami, Spiceworks, and Rackspace Cloud. Other than that, there were a lot of random Austinites and some guys from big local outfits (Dell, IBM).
You can read all the tweets about the event if you swing that way.
OpsCamp kinda grew out of an earlier thing, BarCampESM, also in Austin two years ago. I never heard about that, wish I had.
How It Went
I had never been to an "unconference" before. Basically there's no set agenda, it's self-emergent. It worked pretty well. I'll describe the process a bit for other noobs.
First, there was a round of lightning talks. Brett from Rackspace noted that "size matters," Bill from Zenoss said "monitoring is important," and Luke from Reductive claimed that "in 2-4 years 'cloud' won't be a big deal, it'll just be how people are doing things - unless you're a jackass."
Then it was time for sessions. People got up, wrote a proposed session name on a piece of paper, went in front of the group, and pitched it; a hand count of "how many people find this interesting" was taken. Proposed sessions included:
- service level to resolution
- physical access to your cloud assets
- autodiscovery of systems
- decompose monitoring into tool chain
- tool chain for automatic provisioning
- monitoring from the cloud
- monitoring in the cloud - widely dispersed components
- agent based monitoring evolution
- devops is the debil - change to the role of sysadmins
- And more
We decided that so many of these touched on two major topics that we should do group discussions on them before going to sessions. They were:
- monitoring in the cloud
- config mgmt in the cloud
This seemed like a good idea; these are indeed the two major areas of concern when trying to move to the cloud.
Sadly, the whole-group discussions, especially the monitoring one, were unfruitful. For a long-ass time people threw out brilliant quips about "Why would you bother monitoring a server anyway" and other such high-theory wonkery. I got zero value out of these, which was sad because the topics were crucially interesting - they were just too unfocused; you had people coming at the problem 100 different ways in sound bites. The only note I bothered to write down was that "monitoring porn" (too many metrics) makes it hard to do correlation. We had that problem here, and invested in a (horrors) non-open-source tool, Opnet Panorama, that has an advanced analytics and correlation engine that can make some sense of tens of thousands of metrics for exactly that reason.
There were three sessions. I didn't take many notes in the first one because, being a Web ops guy, I was having to work a release simultaneously with attending OpsCamp 😛
Next weekend, Jan 30 2009, there's a Web Ops get-together here in Austin called OpsCamp! It'll be a Web ops "unconference" with a cloud focus. Right up our alley! We hope to see you there.
The last presentation of the day was by Rich Mogull on "Everything you need to know about cloud security in 30 minutes or less." It all started with all of the presentations and diagrams having pictures of clouds, so some guy decided to sell that. That makes security practitioners sad.
Why the cloud is a problem for security
- Poor understanding of cloud taxonomies and definitions
- A generic term, frequently misused to refer to anything on the Internet
- Lack of visibility into cloud deployments
- Organic consumption
Couldn't have talked about this stuff 6 months ago because nobody knew about it and it wasn't discussed.
- Variable control
- Variable visibility
- Variable simplicity/complexity
- Variable resources
Control, visibility, and resources go down as simplicity and management go up.
Is the cloud more or less secure than we are now? It depends. Some things are more secure and some things are less secure because of all of the variability.
- SaaS: most constrained; least flexible; most security managed by your provider
- PaaS: less constrained; security varies tremendously based on provider and application (shared security responsibility)
- IaaS: most flexible; most security managed by your developers
- Spillage and data security
- Capability to apply traditional security controls in a dynamic environment
- Lack of visibility into cloud usage
- Changing development patterns/cycles
How do you use your static and dynamic analysis testing tools in the cloud?
Where do you roll your cloud when it fails?
Your Top 2 Cloud Security Defenses
Understand Your SLAs
- Are there security-specific SLAs?
- Can you audit against those SLAs?
- Are there contractual penalties for non-compliance?
- Do your SLAs meet your risk tolerance requirements?
- Security audits - including third party
- Data security/encryption
- Personnel security
- Security controls (depends on the service)
- User account management
- Infrastructure changes
Understand Your Cloud
- What security controls are in your cloud?
- How can you manage and integrate with the controls?
- What security documentation is available?
- What contingency plans are available?
Cloud Security Controls to Look For
- Data encryption/security (key management)
- Perimeter defenses
Cloud Security Macro Layers
- SAS70 Audits
- Documentation without verification
- Non-contractual SLAs
What to Do
- Educate yourself
- Engage with developers
- Develop cloud security requirements
Next up at the Cloud Computing and Virtualization Security half-day seminar was a Cloud Computing Panel moderated by Rich Mogull (Analyst/CEO at Securosis) with Josh Zachary (Rackspace), Jim Rymarczyk (IBM), and Phil Agcaoili (Dell) participating in the panel. My notes from the panel discussion are below:
Phil: Little difference between the outsourcing of the past and today's cloud computing. All of that stuff is sitting outside of your environment, and we've been evolving toward that for a long time.
Rich: My impression is that there are benefits to outsourced hosting, but there are clearly areas that make sense and areas that don't. This is fundamentally different from shared computing resources. Very different applications for this. Complexity goes up very quickly for security controls. Where do you see the most value today? Where do people need to be most cautious?
Jim: Internal virtualization is almost necessary, but it impacts almost every IT process. Technology is still evolving and is far from advanced state. Be pragmatic and find particular applications with a good ROI.
Josh: Understand what you are putting into a cloud environment. Have a good understanding of what a provider can offer you in terms of sensitive data. Otherwise you're putting yourself in a very bad situation. A lot of promise. Great for social networking and web development. Not appropriate with enterprises with large amounts of IP and sensitive data.
Jim: We'll get there in 4-5 years.
Phil: Let supply chain experts do it for you and then interact with them. Access their environment from anywhere. Use a secure URL with a federated identity. Your business will come back to you and say "We need to do this," and IT will be unable to assist them. Use it as an opportunity to mobilize compliance and InfoSec and get involved. It's going to come to us, and we're just going to have to deal with it. There's a long line of people with a "right to audit." Don't assume that someone is doing the right thing in this space - you have to ask.
Audience: What is the most likely channel for standards?
Phil: Cloud Security Alliance is a step in the right direction. Want to come up with PCI DSS-like checklists. CSA is working alongside IEEE and NIST. Goal is to be able to feed the standards process, not become a standards body.
Rich: The market is anti-standards based. If we get standardized, then all of the providers are only competing based on cost.
Jim: I think it'll happen. We will see ISO groups for standards on cloud quality.
Audience: Moving data between multiple clouds. How do you determine who gets paid?
Jim: There are proposals for doing that. All of the resource parameters.
Phil: Should see standards based on federated identity. Who is doing what and where. That's where I've seen the most movement. There is no ISO for SaaS. Remapping how 27001 and 27002 apply to us as a software provider.
Audience: Two things drive standards: the market, or a monopoly (Betamax).
Rich: We will have monopolistic ones and then 3rd parties that say they use those standards.
Audience: How can you really have an objective body create standards without being completely embedded in the technology?
Jim: You create a reference standard and the market drives that.
Phil: Gravity pulls us to things that work. Uses SAML as an example. It's the way the internet has always worked. The strongest will survive and the right standards will manifest themselves.
Rich: What are some of things that you're dealing with internally (as consumers and providers) and the top suggestions for people stuck in this situation?
Jim: People who don't have all of the requirements do public clouds. If what you want is available (salesforce.com), it may be irresistible.
Josh: Solution needs to be appropriate to the need. Consult with your attorney to make sure you contract is in line with what you're leveraging the provider for. It's really about what you agree to with that provider and their responsibilities.
Phil: The hurricane is coming. You can't scream into the wind, you gotta learn to run for cover. Find the safe spot.
Audience: What industries do you see using this? I don't see it with healthcare.
Phil: Mostly providers for us. Outsourcing service desks. Government. Large states/local.
Josh: Small and medium retail businesses. Get products out there at a significantly reduced cost.
Jim: Lots of financial institutions looking for ways to cut costs. Healthcare industry as well (Mayo Clinic). Broad interest across the whole market, but especially anywhere they're under extreme cost measures.
Rich: I run a small business that couldn't pay for a full virtual hosting provider, so it picked an elastic provider. Doing shared hosting right now, but capable of growing to a virtual private server. Has redundancy. Able to go full colocation if needed. Able to support growth, starting from the same instance.
Audience: How does 3rd party transparency factor into financial uses?
Jim: Almost exclusively private clouds. There are use cases playing out right now that will become repeatable patterns.
Phil: When the volume isn't there, offload to someone like Rackspace and they'll help you to grow.
Audience: Are there guidelines to contracts to make sure information doesn't just get outsourced to yet another party?
Phil: Steal the contracts of your largest partners/vendors. Use them as templates.
Audience: What recourse do you have to ensure an audit is used to verify that security is not an issue?
Phil: Third-party assessment (i.e., the right to audit). It's in our interest to verify they are secure. It's a trend, and we now have a long list of people looking to audit us as a provider. Hoping for an ISO standard that truly addresses the cloud.
Audience: Is cloud computing just outsourcing?
Rich: It's more than that. For example, companies have internal clouds that aren't outsourced at all.
Josh: Most of the time it's leveraging resources more efficiently at hopefully a reduced cost.
Audience: How do I know you're telling me the truth about the resources I'm using? What if I'm a bad guy who wants to exploit a competitor using the cloud?
Josh: We've seen guys create botnets using stolen credit cards. What you're billed for is in your contract.
Jim: We've had this solved for decades on mainframes: precious resources metered out amongst users. There's no technical reason we're not doing it today.
Rich: It depends what type of cloud you're using. Some will tell you.
Josh: If you're worried about someone abusing you, why are you there in the first place?
Phil: For our service desk we meter this by how many calls, by location. Monitor servers that were accessed/patched/etc. Different service providers will have different levels.
Audience: Seeing some core issues at the heart of this. For businesses, an assessment of core competencies. Can you build a better data center with the cloud? Second issue involves risk assessment. Can you do a technical audit? Can you pay for it legally? How much market presence does the vendor have? Who has responsibility for what? Notion of transparency of control. Seems like it distills down to those core basics.
Jim: I agree.
Rich: Well said.
Phil: Yes, yes, yes.
Audience: How do you write a contract for failed nation states, volatility, etc? Do we say you can't put our stuff in these countries?
Phil: This is the elephant in the room. How can you ensure that my data is being protected the way I'd protect it myself? It's amazing what other people do when they get hold of that stuff. This is the underlying problem that we have to solve. "We're moving from a single-family home to a multi-tenant condo. How do we build that now?"
Rich: You need to be comfortable with what you're putting out there.
Audience: To what extent is the military or federal government using cloud computing?
Jim: They're interested in finding ways, but they don't talk about how they're using it.
Audience - Vern: They're doing cloud computing using an internal private cloud already. They bill back to the appropriate agency based on use.
Phil: Government is very wary of what's going on.
The next presentation at the ISSA half-day seminar was on the "Cloud Security Alliance" and Security Guidance for Critical Areas of Focus in Cloud Computing by Jeff Reich. Here are my notes from this presentation:
- About the Cloud Security Alliance
- Getting Involved
- Guidance 1.0
- Call to Action
About the Cloud Security Alliance
- Not-for-profit organization
- Inclusive membership, supporting broad spectrum of subject matter expertise: cloud experts, security, legal, compliance, virtualization, etc
- We believe in Cloud Computing, we want to make it better
- Individual membership (free)
- Subject matter experts for research
- Interested in learning about the topic
- Administrative & organizational help
- Corporate Sponsorship
- Help fund outreach, events
- Affiliated Organizations (free)
- Joint projects in the community interest
- Contact information on website
Download version 1.0 of the Security Guidance at http://www.cloudsecurityalliance.org/guidance
Overview of Guidance
- 15 domains
- #1 is Architecture & Framework
- Covers Governing in the Cloud (2-7) and Operating in the Cloud (8-15) as well
Assumptions & Objectives
- Trying to bridge gap between cloud adopters and security practitioners
- Broad "security program" view of the problem
- Not "One Cloud": Nuanced definition critical to understanding risks & mitigation
- 5 principal characteristics (abstraction, sharing, SOA, elasticity, consumption/allocation)
- 3 delivery models
- Infrastructure as a Service
- Platform as a Service
- Software as a Service
- 4 deployment models: Public, Private, Managed, Hybrid
Governance & ERM
- A portion of cloud cost savings must be invested into increased scrutiny of provider security
- Third party transparency of cloud provider
- Financial viability of cloud provider
- Alignment of key performance indicators
- PII is best suited to a private/hybrid cloud, absent significant due diligence of the public cloud provider
- Increased frequency of 3rd party risk assessments
An important thing to consider is the financial viability of your provider. You never want to have your data held hostage in a court battle.
- Contracts must have flexible structure for dynamic cloud relationships
- Plan for both an expected and unexpected termination of the relationship and an orderly return of your assets
- Find conflicts between the laws the cloud provider must comply with and those governing the cloud customer
Compliance & Audit
- Classify data and systems to understand compliance requirements
- Understand data locations, copies
Information Lifecycle Management
- Understand the logical segregation of information and protective controls implemented in storage, transfers, backups
Conclusions
- Cloud Computing is real and transformational
- Cloud Computing can and will be secured
- Broad governance approach needed
- Tactical fixes needed
- Combination of updating existing best practices and creating completely new best practices
- Common sense is not optional
Call to Action
- Join us, help make our work better
- Twitter: @cloudsa, #csaguide
Today the Austin ISSA and ISACA chapters held a half-day seminar on Cloud Computing and Virtualization Security. The introduction on cloud computing was given by Vern Williams. My notes on this topic are below:
5 Key Cloud Characteristics
- On-demand self-service
- Ubiquitous network access
- Location independent resource pooling
- Rapid elasticity
- Pay per use
3 Cloud Delivery Models
- Software as a Service (SaaS): Provider's applications over a network
- Platform as a Service (PaaS): Deploy customer-created apps to a cloud
- Infrastructure as a Service (IaaS): Rent processing, storage, etc
4 Cloud Deployment Models
- Private cloud: Enterprise owned or leased
- Community cloud: Shared infrastructure for a specific community
- Public cloud: Sold to the public, Mega-scale infrastructure
- Hybrid cloud: Composition of two or more clouds
- Two types: internal and external
Common Cloud Characteristics
- Massive scale
- Free software
- Autonomic computing
- Geographically distributed systems
- Advanced security technologies
- Service oriented software
- Lower central processing unit (CPU) density
- Flexible use of resources
- Rapid deployment of new servers
- Simplified recovery
- Virtual network connections
Virtualization Security Concerns
- Potential impact of a single component failure
- Hypervisor security issues
- Keeping virtual machine (VM) images current
- Virtual network connections
- Protecting the virtual fabric
- Patching off-line VM images
- Configuration Management
- Firewall configurations
- Complicating Audit and Forensics
The industry is abuzz with people who are freaked out about the outages that Amazon and other cloud vendors have had. "Amazon S3 Crash Raises Doubts Among Cloud Customers," says InformationWeek!
This is because people are going into cloud computing with absurdly high expectations. This year at Velocity, Interop, etc., I've seen people just totally in love with cloud computing - Amazon's specifically, but in general as well. And it's a good concept for certain applications. However, it is a computing system just like every other computing system devised previously by man. And it has, and will have, problems.
Whether you are using in house systems, or a SaaS vendor, or building "in the cloud," you have the same general concerns. Am I monitoring my systems? What is my SLA? What is my recourse if my system is not hitting it? What's my DR plan?
Cloud computing is also being called "PaaS," or Platform as a Service. It's a special case of SaaS. And if you're a company relying on it, when you contract with a SaaS vendor you get SLAs established and figure out what the remedy is if they breach it. If you are going into a relationship where you are just paying money for a cloud VM, storage, etc. and there is no enforceable SLA in the relationship, then you need to build the risk of likely and unremediable outages into your business plan.
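When you're weighing an SLA against the risk of outages, it helps to translate the uptime percentage into the downtime it actually permits. A minimal sketch of that arithmetic (the helper name and the 30-day month are my own assumptions, not anything from a specific vendor's contract):

```python
# Sketch: convert an SLA uptime percentage into the downtime budget it
# allows, so you can judge whether the SLA (and its remedy) really
# covers your business risk. `downtime_budget_minutes` is a
# hypothetical helper, assuming a 30-day billing month.

def downtime_budget_minutes(sla_percent: float, days: float = 30.0) -> float:
    """Minutes of permitted downtime per period at the given uptime SLA."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100.0)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime allows {downtime_budget_minutes(sla):.1f} min/month down")
```

Even a "three nines" SLA permits over 43 minutes of downtime a month with no breach at all, which is exactly the kind of number to build into your business plan.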
I hate to break it to you, but the IT people working at Amazon, Google, etc. are not all that much smarter than the IT people working with you. So an unjustified faith in a SaaS or cloud vendor - "Oh, it's Amazon, I'm sure they'll never have an outage of any sort - across their entire system or localized to my part - and if they do, I'm sure the $100/month I'm paying them will make them give a damn about me" - is unreasonable on its face.
Clouds and cloud vendors are a good innovation. But they're like every other computing innovation and vendor selling it to you. They'll have bugs and failures. But treating them differently is a failure on your part, not theirs.