Web Admin Blog Real Web Admins. Real World Experience.


OpsCamp Debrief

I went to OpsCamp this last weekend here in Austin, a get-together for Web operations folks specifically focusing on the cloud, and it was a great time!  Here's my after action report.

The event invite said it was in the Spider House, a cool local coffee bar/normal bar.  I hadn't been there before, but other people who had said, "That's insane!  They'll never fit that many people!  There's outside seating, but it's freezing out!"  That gave me some degree of trepidation, but I still rolled out in time to get downtown by 8 AM on a Saturday (sigh!).  Happily, it turned out that the event was really in the adjacent music/whatnot venue also owned by Spider House, the United States Art Authority, which they kindly allowed us to use for free!  There were a lot of people there; we weren't overfilling the place, but it was definitely at capacity, with nearly 100 people attending.

I had just heard of OpsCamp through word of mouth, and figured it was just going to be a gathering of local Austin Web ops types.  Which would be entertaining enough, certainly.  But as I looked around the room I started recognizing a lot of guys from Velocity and other major shows; CEOs and other high-ranking guys from various Web ops related tool companies.  Sponsors included John Willis and Adam Jacob (creator of Chef) from Opscode, Luke Kanies (creator of Puppet) from Reductive Labs, Damon Edwards and Alex Honor from DTO Solutions (formerly ControlTier), Mark Hinkle and Matt Ray from Zenoss, Dave Nielsen (CloudCamp), Michael Coté (Redmonk), Bitnami, Spiceworks, and Rackspace Cloud.  Other than that, there were a lot of random Austinites and some guys from big local outfits (Dell, IBM).

You can read all the tweets about the event if you swing that way.

OpsCamp kinda grew out of an earlier event, BarCampESM, held in Austin two years ago.  I had never heard about that one; wish I had.

How It Went

I had never been to an "unconference" before.  Basically there's no set agenda, it's self-emergent.  It worked pretty well.  I'll describe the process a bit for other noobs.

First, there was a round of lightning talks.  Brett from Rackspace noted that "size matters," Bill from Zenoss said "monitoring is important," and Luke from Reductive claimed that "in 2-4 years 'cloud' won't be a big deal, it'll just be how people are doing things - unless you're a jackass."

Then it was time for sessions.  People got up, wrote a proposed session name on a piece of paper, and pitched it in front of the group, and a hand count of "how many people find this interesting" was taken.

Candidates included:

  • service level to resolution
  • physical access to your cloud assets
  • autodiscovery of systems
  • decompose monitoring into tool chain
  • tool chain for automatic provisioning
  • monitoring from the cloud
  • monitoring in the cloud - widely dispersed components
  • agent based monitoring evolution
  • devops is the debil - change to the role of sysadmins
  • And more

We decided that so many of these touched on two major topics that we should do group discussions on them before going to sessions.  They were:

  • monitoring in the cloud
  • config mgmt in the cloud

This seemed like a good idea; these are indeed the two major areas of concern when trying to move to the cloud.

Sadly, the whole-group discussions, especially the monitoring one, were unfruitful.  For a long-ass time people threw out brilliant quips about "Why would you bother monitoring a server anyway" and other such high-theory wonkery.  I got zero value out of these, which was sad because the topics were crucially interesting - just too unfocused; you had people coming at the problem 100 different ways in sound bites.  The only note I bothered to write down was that "monitoring porn" (too many metrics) makes it hard to do correlation.  We had that problem here, and invested in a (horrors) non open-source tool, OPNET Panorama, which has an advanced analytics and correlation engine that can make some sense of tens of thousands of metrics for exactly that reason.


There were three sessions.  I didn't take many notes in the first one because, being a Web ops guy, I was having to work a release simultaneously with attending OpsCamp 😛


Introduction to Managed Infrastructure with Puppet
by Luke Kanies, Reductive Labs (slides from Velocity 2009)

You can get the work files from git://github.com/reductivelabs/velocity_puppet_workshop_2009.git, and the presentation's available here.

I saw Luke's Puppet talk last year at Velocity 2008, but am now more ready to start adopting some config management back home.  Our UNIX admins use cfengine, and Puppet is supposed to be a better, newer cfengine.  Now there's also an (allegedly) better, newer one called Chef that I read about lately.  So this should be interesting in helping to orient me to the space.  At lunch, we sat with Luke and found out that Reductive just got their second round of funding and were quite happy, though they got nervous and prickly when there was too much discussion of whether they were all buying Teslas now.  Congrats, Reductive!

Now, to work along, you git-clone the bundle and use it with puppet.  Luke assumes we all have laptops, all have git installed on them, and know how to sync his bundle of goodness down.  And have puppet, or can quickly install it.  Bah.  I reckon I'll just follow along.

You can get puppet support via IRC or the puppet-users Google group.

First we exercise "ralsh", the resource abstraction layer shell, which can interact with resources like packages, hosts, and users.  Check 'em, add 'em, modify 'em.
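Some ralsh invocations of the sort we ran (the resource names here are just examples I picked, and output will vary by system):

```shell
# Query the current state of a resource; ralsh prints it back as puppet code
ralsh user root
ralsh package openssh-server

# Modify a resource directly from the command line
ralsh user luke ensure=present shell=/bin/bash
```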

You define package abstractions.  Like "ssh means ssh on Debian, openssh on Solaris..."  It requires less redundancy of config than cfengine.
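A minimal sketch of what that kind of abstraction looks like in the Puppet language (the variable and package names are mine, not from the session; the $operatingsystem fact comes from Facter):

```puppet
# Pick the platform-specific package name once...
case $operatingsystem {
  debian, ubuntu: { $ssh_package = "ssh" }
  solaris:        { $ssh_package = "openssh" }
  default:        { $ssh_package = "openssh-server" }
}

# ...then manage one logical "ssh" package everywhere.
package { $ssh_package:
  ensure => installed,
}
```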

"puppet"  consists of several executables - puppet, ralsh, puppetd, puppetmasterd, and puppetca.

As an aside, cft is a neat config file snapshot thing in red hat.

Anyway, you should use puppet, not ralsh, directly; the syntax is similar.  Here's an example invocation:

puppet -e 'file { "/tmp/eh": ensure => present }'

There's file backup, or "bucket", functionality for when you change or delete files.

You make a repository and can either distribute it or run it all from a server.

There is reporting.

There's a gepetto addon that helps you build a central repo.

A repo has (or should have) modules, which are basically functional groupings.  Modules contain "code," which can be a class definition; init.pp is the top-level/special one.  There's a modulepath setting for puppet.  Load the file, include the class, and it runs all the stuff in the class.
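To make that concrete, here's a hypothetical module layout and init.pp (the paths and the ssh module are illustrative, not from the talk):

```puppet
# /etc/puppet/modules/ssh/manifests/init.pp
# (assuming /etc/puppet/modules is on your modulepath)
class ssh {
  package { "openssh-server":
    ensure => installed,
  }
  service { "sshd":
    ensure  => running,
    require => Package["openssh-server"],
  }
}
```

Then `include ssh` anywhere in a manifest pulls in everything the class declares.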

It has "nodes", but he scoffs at them.  Put them in manifests/site.pp; they can be default or hostname-specific (which can inherit from default).  But you should use a different application, not puppet, to do this.
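For illustration, a sketch of what those node definitions look like in manifests/site.pp (the hostname and the apache module are made up):

```puppet
node default {
  include ssh
}

# A specific host, inheriting everything from default
node "web01.example.com" inherits default {
  include apache
}
```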

You have to be able to completely and correctly describe a task for puppet to do it.  This is a feature not a bug.

Puppet uses a client-server pull architecture.  You start a puppetmasterd on a server.  Use the SSL defaults, because that stuff is complicated and will hose you eventually.  Then start a puppetd on a client and it'll pull changes from the server.
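The basic startup sequence, roughly (hostnames are placeholders; flags are from the puppet 0.2x-era tools):

```shell
# On the server:
puppetmasterd --verbose

# On each client, point puppetd at the master:
puppetd --server puppet.example.com --verbose

# Back on the master, sign the client's certificate request
# the first time it connects:
puppetca --sign client.example.com
```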

This is disjointed.  Sorry about that.  The session was really just reading the slide equivalent of man pages while flipping back and forth to a command prompt to run basic examples.  I don't feel like this session gave enough of an intro to puppet; it was just "launch into the man pages and then run individual commands, many of which he tells you to never do."  I don't feel like I'm a lot more informed on puppet than when I started, which makes me sad.  I'm not sure what the target audience for this is.  If it's people totally new to puppet, like me, it starts in the weeds too much.  If it's for someone who has used puppet, it didn't seem to have many pro tips or design considerations; it was basic command execution.  Anyway, he ran out of time and flipped through the last ten slides in as many seconds.  I'm out!