Web Admin Blog: Real Web Admins. Real World Experience.

24Feb/10

A Case For Images

After speaking with Luke Kanies at OpsCamp, and reading his good and oft-quoted article "Golden Image or Foil Ball?", I was thinking pretty hard about the use of images in our new automated infrastructure.  He's pretty against them.  After careful consideration, however, I think judicious use of images is the right thing to do.

My top-level thoughts on why to use images:

  1. Speed - Starting a prebuilt image is faster than reinstalling everything on an empty one.  In the world of dynamic scaling, there's a meaningful difference between a "couple minute spinup" and a "fifteen minute spinup."
  2. Reliability - The more work you are doing at runtime, the more there is to go wrong.  I bet I'm not the only person who has run the same compile and install on three allegedly identical Linux boxen and had it go wrong somehow on one of 'em.  And the more stuff you're pulling to build your image, the more failure points you have.
  3. Flexibility - Dynamically building from a stem cell kinda makes sense if you're using 100% free open source and have everything automated.  What if, however, you have something that you need to install that just hasn't been scripted - or is very hard to script?  Like an install of some half-baked Windows software that doesn't have a command line installer and you don't have a tool that can do it?  In that case, you really need to do the manual install in non-realtime as part of an image build.  And of course many suppliers are providing software as images themselves nowadays.
  4. Traceability - What happens if you need to replicate a past environment?  Having the image is going to be a 100% effective solution to that, even likely to be sufficient for legal reasons.  "I keep a bunch of old software repo versions so I can mostly build a machine like it" - somewhat less so.

In the end, it's a question of using intermediate deliverables.  Do you recompile all the code and every third party package every time you build a server?  No, you often use binaries - it's faster and more reliable.  Binaries are the app guys' equivalent of "images."

To address Luke's three concerns from his article specifically:

  1. Image sprawl - if you use images, you eventually have a large library of images you have to manage.  This is very true - but you have to manage a lot of artifacts all up and down the chain anyway.  Given the "manual install" and "vendor-supplied image" scenarios noted above, if you can't manage images as part of your CM system then it's just not a complete CM system.
  2. Updating your images - Here, I think Luke makes some not entirely valid assumptions.  He notes that once you're done building your images, you're still going to have to make changes in the operational environment ("bootstrapping").  True.  But he assumes you're not going to use the same tool to do it.  I'm not sure why not - our approach is to use automated tooling to build the images - you don't *want* to do it manually, for sure - and Puppet/Chef/etc. work just fine to do that.  So if you have to update something at the OS level, you do that and let your CM system blow everything on top - and then burn the image (see the sketch after this list).  Image creation and automated CM aren't mutually exclusive - the only reason people don't use automation to build their images is the same reason they don't always use automation on their live servers, which is "it takes work."  But to me, since you DO have to have some amount of dynamic CM for the runtime bootstrap as well, it's a good conservation of work to use the same package for both.  (Besides bootstrapping, there's other stuff, like moving content, that shouldn't go on images.)
  3. Image state vs running state - This one puzzles me.  With images, you do need to do restarts to pull in image-based changes.  But with virtually all software and app changes you have to as well - maybe not a "reboot," but a "service restart," which is virtually as disruptive.  Whether you "reboot your database server" or "stop and start your database server, which still takes a couple of minutes," you are planning for downtime or have redundancy in place.  And in general you need to orchestrate changes (rolling restarts, etc.) anyway - "oh, pull that change whenever you want to, Mr. Application Server" doesn't really work.
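
To make that concrete, here's a minimal sketch of "same tool for bake and bootstrap" on EC2.  This is illustrative, not our actual tooling: the server name, instance ID, and image name are placeholders, and it assumes the classic puppetd client plus the old ec2-api-tools.

# Bake: converge a pristine instance with the same Puppet manifests used at
# runtime, then snapshot the result as the new golden image.
puppetd --test --server puppet.example.com
ec2-create-image i-1234abcd -n "web-golden-20100224"

# Boot: instances launched from that image only pull the small runtime delta.
puppetd --test --server puppet.example.com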

In closing, I think images are useful.  You shouldn't treat them as a replacement for automated CM - they should be interim deliverables usually generated by, and always managed by, your automated CM.  If you just use images in an uncoordinated way, you do end up with a foil ball.  With sufficient automation, however, they're more like Russian nesting dolls, and have advantages over starting from scratch with every box.

17Feb/10

Agile Operations

It's funny.  When we recently started working on an upgrade of our Intranet social media platform, and we were trying to figure out how to meld the infrastructure-change-heavy operation with the need for devs, designers, and testers to be able to start working on the system before "three months from now," we broached the idea of "maybe we should do that in iterations!"  First, get the new wiki up and working.  Then, worry about tuning, switching the back end database, etc.  Very basic, but it got me thinking about the problem in terms of "hey, Infrastructure still operates in terms of waterfall, doesn't it."

Then when Peco and I moved over to NI R&D and started working on cloud-based systems, we quickly realized the need for our infrastructure to be completely programmable - that is, not manually tweaked and controlled, but run in a completely automated fashion.  Also, since we were two systems guys embedded in a large development org that's using agile, we were heavily pressured to work in iterations along with them.  This was initially a shock - my default project plan has, in traditional fashion, months' worth of evaluating, installing, and configuring various technology components before anything's up and running.  But as we began to execute that way, I started to see that no, really, agile is possible for infrastructure work - at least "mostly."  Technologies like cloud computing help, and while there's still a little more up-front work required than with programming, you can get most of the way to an agile methodology (and mindset!).

Then at OpsCamp last month, we discovered that there's been this whole Agile Operations/Automated Infrastructure/devops movement already in progress that we hadn't heard about.  I don't keep in touch with The Blogosphere (tm) enough, I guess.  Anyway, it turns out a bunch of other folks have come to the exact same conclusion, and there's exciting work going on re: how to make operations agile, automate infrastructure, and meld development and ops work.

So if you also haven't been up on this, here's a roundup of some of the good core thinking on these topics for your reading pleasure!

24Jun/09

Velocity 2009 – Introduction to Managed Infrastructure with Puppet

Introduction to Managed Infrastructure with Puppet
by Luke Kanies, Reductive Labs

You can get the work files from git://github.com/reductivelabs/velocity_puppet_workshop_2009.git, and the presentation's available here.

I saw Luke's Puppet talk last year at Velocity 2008, but am now more ready to start taking up some config management back home.  Our UNIX admins use cfengine, and puppet is supposed to be a better, newer cfengine.  Now there's also an (allegedly) better, newer one called chef that I've read about lately.  So this should be interesting in helping to orient me to the space.  At lunch, we sat with Luke and found out that Reductive just got their second round of funding and were quite happy, though they got nervous and prickly when there was too much discussion of whether they were all buying Teslas now.  Congrats, Reductive!

Now, to work along, you git the bundle and use it with puppet.  Luke assumes we all have laptops, all have git installed on our laptops, and know how to sync his bundle of goodness down.  And have puppet or can quickly install it.  Bah.  I reckon I'll just follow along.

You can get puppet support via IRC, or the puppet-users google group.

First we exercise "ralsh", the resource abstraction layer shell, which can interact with resources like packages, hosts, and users.  Check em, add em, modify em.
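
For instance, the flavor of it looks like this - a sketch, with the user and package names just examples:

# dump the 'root' user as Puppet code
ralsh user root

# check the current state of a package
ralsh package openssh-server

# create or modify a resource by passing attribute=value pairs
ralsh user alice ensure=present shell=/bin/bash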

You can define abstractions across packages.  Like "ssh" meaning the ssh package on debian, openssh on solaris...  It requires less redundancy of config than cfengine.
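
Something like this, say - a minimal sketch using the old $operatingsystem fact, with the per-platform package names being illustrative:

class ssh {
  # map the generic notion of "ssh" to the platform-specific package name
  $ssh_package = $operatingsystem ? {
    'Debian'  => 'ssh',
    'Solaris' => 'openssh',
    default   => 'openssh-server',
  }
  package { $ssh_package:
    ensure => installed,
  }
}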

"puppet"  consists of several executables - puppet, ralsh, puppetd, puppetmasterd, and puppetca.

As an aside, cft is a neat config-file snapshotting tool from Red Hat.

Anyway, you should use puppet itself, not ralsh, directly; the syntax is similar.  Here's an example invocation:

puppet -e 'file { "/tmp/eh": ensure => present }'

There's a file backup, or "bucket," functionality for when you change or delete files.
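
Turning it on looks roughly like this (the server name is a placeholder):

# define a remote filebucket, then back up every file Puppet changes to it
filebucket { 'main':
  server => 'puppet.example.com',
}
File { backup => 'main' }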

You make a repository and can either distribute it or run it all from a server.

There is reporting.
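
As I understand it, you turn reports on client-side and pick report processors on the master - a sketch, settings from memory:

# in the client's puppet config:
report = true

# on the master - which report handlers to run:
reports = store, log, tagmail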

There's a gepetto addon that helps you build a central repo.

A repo has (or should have) modules, which are basically functional groupings.  Modules have "code."  The code can be a class definition; init.pp is the top/special one.  There's a modulepath setting for puppet.  Load the file, include the class, and it runs all the stuff in the class.
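
The conventional layout is something like this - paths illustrative, and the modulepath value is just a common default:

modules/
  ssh/
    manifests/
      init.pp      # contains "class ssh { ... }"
    files/         # static files the module serves
    templates/     # ERB templates

# in puppet.conf:
modulepath = /etc/puppet/modules

# then, in a manifest:
include ssh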

It has "nodes," but he scoffs at them.  Put them in manifests/site.pp - a default one, or hostname-specific ones (which can inherit from default).  But you should use a different application, not puppet, to do this.
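
For completeness, node definitions look about like this (hostname hypothetical, the apache module made up, and note the old-style inheritance):

node default {
  include ssh
}

node 'web01.example.com' inherits default {
  include apache
}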

You have to be able to completely and correctly describe a task for puppet to do it.  This is a feature not a bug.

Puppet uses a client-server pull architecture.  You start a puppetmasterd on a server.  Stick with the SSL defaults, because certificate management is complicated and will hose you eventually if you stray.  Then you start a puppetd on a client and it pulls changes from the server.
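
In command form, that's roughly the following - the hostname is a placeholder, and these are the old pre-2.6 executable names:

# on the server:
puppetmasterd

# on the client - fetch and apply the catalog once, with verbose output:
puppetd --test --server puppet.example.com

# back on the server, sign the client's certificate request:
puppetca --sign client1.example.com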

This is disjointed - sorry about that.  The session is really just reading the slide equivalent of man pages while flipping back and forth to a command prompt to run basic examples.  I don't feel like this session gave enough of an intro to puppet; it was just "launch into the man pages and then run individual commands, many of which he tells you to never do."  I don't feel like I'm a lot more informed on puppet than when I started, which makes me sad.  I'm not sure what the target audience for this is.  If it's people totally new to puppet, like me, it starts in the weeds too much.  If it's for someone who has used puppet, it didn't seem to have many pro tips or design considerations; it was basic command execution.  Anyway, he ran out of time and flipped through the last ten slides in as many seconds.  I'm out!

24Jun/08

The Velocity 2008 Conference Experience – Part VII

We've reached the last couple of sessions at Velocity 2008. Read me! Love me!

We hear about Capacity Planning with John Allspaw of Flickr.  He says: no benchmarks!  Use real production data.  (How?  They had to develop a program called WebReplay to do this because no one had anything; they're open sourcing it soon, stay tuned.)

Use "safety factors" (from traditional engineering) - aka a reserve, overhead, etc.  For example (numbers made up), if a server falls over at 40 requests/second, plan to a ceiling of 30 requests/second - a 1.33x safety factor.

They use squid a bunch.  At NI we've been looking at Oracle's WebCache - mainly because it supports ESIs (Edge Side Includes) and we're thinking that may be a good way to go.  There's a half-assed ESI plugin for squid, but we hear it doesn't work; apparently Zope paid for ESI support in squid 3.0, but there's been no traction on that in four years, as best we can tell.  But I'd be happy not to spend the money.