Web Admin Blog
Real Web Admins. Real World Experience.

23 Jun 2010

Looking for DevOps Stuff?

If you heard about us at Velocity 2010 and are coming here for sweet sweet DevOps and agile info, we've moved to a different blog - come see us at the agile admin!

28 Apr 2010

dev2ops Interview

Want to hear me spout off more about DevOps?  Well, here's your chance; I did an interview with Damon Edwards of DTO and they've posted it on the dev2ops blog!

Killer quote:

"I say this as somebody who about 15 years ago chose system administration over development.  But system administration and system administrators have allowed themselves to lag in maturity behind what the state of the art is. These new technologies are finally causing us to be held to account to modernize the way we do things.  And I think that’s a welcome and healthy challenge."

12 Mar 2010

Before DevOps, Don’t You Need OpsOps?

From the "sad but true" files comes an extremely insightful point apparently discussed over beer by the UK devops crew recently - that we are talking about dev and ops collaboration but the current state of collaboration among ops teams is pretty crappy.

This resonates deeply with me.  I've seen that problem in spades.  I think a lot of the discussion about the agile ops space is too simplistic, in that it seems tuned to organizations of "five guys, three of whom are coders and two of whom are operations" with no differentiation.  In real life, there are often larger orgs and a lot of differentiation that causes various collaboration challenges.  Some people refer to this as Web vs Enterprise, but I don't think that's strictly true; once your Web shop grows from 5 guys to 200 it runs afoul of this too - it's a simple scalability and organizational engineering problem.

As an aside, I don't even like the "Ops" term - a sysadmin team can split into subgroups that do systems engineering, release management, and operational support...  Just saying "Ops" seems to me to create implications of not being a partner in the initial design and development of the overall system/app/service/site/whatever you want to call it.

Ops Verticals

Here, we have a large Infrastructure department.  Originally, it was completely siloed by technology verticals, and there's a lot of subgroups.  Network, UNIX, Windows, DBA, Lotus Notes, Telecom, Storage, Data Center...  Some ten plus years ago when the company launched their Web site in earnest, they quickly realized that wasn't going to work out.  You had the buck-passing behavior described in the blog posts above that made issues impossible to solve in a timely fashion, plus it made collaboration with devs/business nearly impossible.  Not only did you need like 8 admins to come involve themselves in your project, but they did not speak similar enough languages - you'd have some crusty UNIX admin yelling "WHAT ABOUT THE INODES" until the business analyst started to cry.

Dev Silos

But are our developers here better off?  They are siloed by business unit.  Just among the Web developers there's the eCommerce developers, eCRM, Product Advisors, Community, Support, Content Management...  On the one hand, they are able to be very agile in creating solutions inside their specific niche.  On the other hand, they are all working within the same system environment, and they don't always stay on the same page in terms of what technologies they are using. "Well, I'm sure THAT team bought a lovely million dollar CMS, but we're going to buy our own different million dollar CMS.   No, you don't get more admin resource."  Over time, they created architecture groups and other cross-team initiatives to try to rein in the craziness, with mixed but overall positive results.

Plugging the Dike

What we did was create a Web Administration group (Web Ops, whatever you want to call it) that was holistically responsible for Web site uptime, performance, and security.  Running that team was my previous gig; I did it for five years.  That group was more horizontally focused and served as an interface to the various technology verticals; it worked closely with developers on system design during development, coordinated the release process, and involved devs in troubleshooting during the production phase.

BizOps?

In fact, we didn't just partner with the developers - we partnered with the business owners of our Web site too, instead of tolerating the old model of "Business collaborates with the developers, who then come and tell ops what to do."  This was a remarkably easy sell really.  The company lost money every minute the Web site was down, and it was clear that the dev silos weren't going to be able to fix that any more than the ops silos were.  So we quickly got a seat at the same table.

Results

This was a huge success.  To this day, our director of Web Marketing is one of the biggest advocates of the Web operations team.  Since then, other application administration (our word for this cross-disciplinary ops) teams have formed along the same model.  The DevOps collaboration has been good overall - with certain stresses coming from the Web Ops team's role as gatekeeper and process enforcer.  Ironically, the biggest issues and worst relationships were within Infrastructure, between the ops teams!

OpsOps - The Fly In The Ointment

The ops team silos haven't gone down quietly.  To this day the head DBA still says "I don't see a good reason for you guys [WebOps] to exist."  I think there's a common "a thing is just the sum of its parts" mindset among admins for whatever reason.  There are also turf wars arising from the technology silo division and the blurring of technology lines by modern tech.  I tried again and again to pitch "collaborative system administration."  But the default sysadmin behavior is to say "these systems are mine and I have root on them.  Those are your systems and you have root on them.  Stay on your side of the line and I'll stay on mine."

Fun specific Catch-22 situations we found ourselves in:

  • Buying a monitoring tool that correlates events across all the different tiers to help root-cause production problems - but the DBAs refusing to allow it on "their" databases.
  • Buying a hardware load balancer - we were going to manage it, not the network team, and it wasn't a UNIX or Windows server, so we couldn't get anyone to rack and jack it (and of course we weren't allowed to because "Why would a webops person need server room access, that's what the other teams are for").

Some of the problem is just attitude, pure and simple.  We had problems even with collaboration inside the various ops teams!  We'd work with one DBA to design a system and then later need to get support from another DBA, who would gripe that "no one told/consulted them!"  Part of the value of the agile principles that "DevOps" tries to distill is just a generic "get it into your damn head you need to be communicating and working together and that needs to be your default mode of operation." I think it's great to harp on that message because it's little understood among ops.  For every dev group that deliberately ostracizes their ops team, there are two ops teams who don't think they need to talk to the devs - in the end, it's mostly our fault.

Part of the problem is organizational.  I also believe (and ITIL, I think, agrees with me) that the technology-silo model has outlived its usefulness.  I'd like to see admin teams organized by service area with integral DBAs, OS admins, etc.  But people are scared of this for a couple of reasons.  One is that those admins might do things differently from area to area (the same problem we have with our devs) - this could be mitigated by "same tech" cross-org standards/discussions.  The other is that this model is not the cheapest.  You can squeeze every last penny out if you only have 4 Windows admins and they're shared by 8 functional areas.  Of course, you are cutting off your nose to spite your face because you lose far more in forgone agility, but frankly corporate finance rules (minimize G&A spending) are a powerful driver here.

If nothing else, there's not "one right organization" - I'd be tempted to reorg everyone from verticals into horizontals, let that run for 5 years, and then reorg back the other way, just to keep the stratification from setting in.

Specialist vs Generalist

One other issue.  The Web Ops team we created required us to hire generalists - but generalists that knew their stuff in a lot of different areas.  It became very hard to hire for that position and training took months before someone was at all effective.  Being a generalist doesn't scale well.  Specialization is inevitable and, indeed, desirable (as I think pretty much anything in the history of anything demonstrates).  You can mitigate that with some cross-training and having people be generalists in some areas, but in the end, once you get past that "three devs, two ops, that's the company" model, specialization is needed.

That's why I think one of the common definitions of DevOps - all ops folks learning to be developers or vice versa - is fundamentally flawed.  It's not sustainable.  You either need to hire all expensive superstars that can be good at both, or you hire people that suck at both.

What you do is have people with varying mixes.  In my current team we have a continuum of pure ops people, ops folks doing light dev, devs doing light ops, and pure devs.  It's good to have some folks who are generalizing and some who are specializing.  It's not specializing that is bad, it's specialists who don't collaborate that are bad.

Conclusion

So I've shared a lot of experiences and opinions above but I'm not sure I have a brilliant solution to the problem.  I do think we need to recognize that Ops/Ops collaboration is an issue that arises with scale and one potentially even harder to overcome than Dev/Ops collaboration.  I do think stressing collaboration as a value and trying to break down organizational silos may help.  I'd be happy to hear other folks' experiences and thoughts!

11 Mar 2010

Defining Agile Operations and DevOps

I recently read a great blog post by Scott Wilson that was talking about the definitions of Agile Operations, DevOps, and related terms.  (Read the comments too, there's some good discussion.)  From what I've heard so far, there are a bunch of semi-related terms people are using around this whole "new thing of ours."

The first is DevOps, which has two totally different frequently used definitions.

1.  Developers and Ops working closely together - the "hugs and collaboration" definition

2.  Operations folks uptaking development best practices and writing code for system automation

The second is Agile Operations, which also has different meanings.

1.  Same as DevOps, whichever definition of that I'm using

2.  Using agile principles to run operations - process techniques, like iterative development or even kanban/TPS kinds of process stuff.  Often with a goal of "faster!"

3.  Using automation - version control, automatic provisioning/control/monitoring.  Sometimes called "Infrastructure Automation" or similar.

This leads to some confusion, as most of these specific elements can be implemented in isolation.  For example, I think the discussion at OpsCamp about "Is DevOps an antipattern" was predicated on an assumption that DevOps meant only DevOps definition #2, "ops guys trying to be developers," and made the discussion somewhat odd to people with other assumed definitions.

I have a proposed set of definitions.  To explain it, let's look at Agile Development and see how it's defined.

Agile development, according to Wikipedia and the agile manifesto, consists of a couple of different "levels" of things.  To sum up the Wikipedia breakdown:

  • Agile Principles - like "business/users and developers working together."  These are the core values that inform agile, like collaboration, people over process, software over documentation, and responding to change over planning.
  • Agile Methods - specific process types.  Iterations, Lean, XP, Scrum.  "As opposed to waterfall."
  • Agile Practices - techniques often found in conjunction with agile development, not linked to a given method flavor, like test driven development, continuous integration, etc.

I believe the different parts of Agile Operations that people are talking about map directly to these three levels.

  • Agile Operations Principles includes things like dev/ops collaboration (DevOps definition 1 above); things like James Turnbull's 4-part model seem to be spot on examples of trying to define this arena.
  • Agile Operations Methods includes process you use to conduct operations - iterations, kanban, stuff you'd read in Visible Ops; Agile Operations definition #2 above.
  • Agile Operations Practices includes specific techniques like automated build/provisioning, monitoring, anything you'd have a "toolchain" for.  This contains DevOps definition #2 and Agile Operations definition #3 above.
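Purely as an illustration of that "practices" level (this is not from the original post): below is a tiny, hand-rolled, idempotent provisioning script of the sort you'd keep in version control and run on every box.  In practice you'd reach for Puppet, Chef, or a similar toolchain instead; the rpm/yum/chkconfig commands and the httpd package are just illustrative assumptions about a 2010-era RHEL-style host.

    # Illustrative only: an idempotent "make sure this box has what it needs"
    # script of the kind you'd keep in version control alongside your app code.
    # Real shops would use Puppet/Chef/etc.; package and service names are examples.
    import subprocess

    def ensure_package(name: str) -> None:
        """Install an RPM package only if it isn't already present."""
        installed = subprocess.run(["rpm", "-q", name],
                                   capture_output=True).returncode == 0
        if not installed:
            subprocess.run(["yum", "-y", "install", name], check=True)

    def ensure_service(name: str) -> None:
        """Make sure a service is enabled at boot and currently running."""
        subprocess.run(["chkconfig", name, "on"], check=True)
        running = subprocess.run(["service", name, "status"],
                                 capture_output=True).returncode == 0
        if not running:
            subprocess.run(["service", name, "start"], check=True)

    if __name__ == "__main__":
        ensure_package("httpd")   # example: Apache
        ensure_service("httpd")

The point isn't the particular commands; it's that the desired state lives in a versioned artifact you can rerun safely, rather than in someone's memory of what they typed on the box.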

I think it's helpful to break them up along the same lines as agile development, however, because in the end some of those levels should merge once developers understand that ops is part of system development too...  There shouldn't be a separate "user/dev collaboration" and "dev/ops collaboration"; in a properly mature model it should become a "user/dev/ops collaboration," for example.

I think the dev2ops guys' "People over Process over Tools" diagram mirrors this about exactly - the people being one of the important agile principles, process being a large part of the methods, and tools being used to empower the practices.

What I like about that diagram, and why I want to bring this all back to the Agile Manifesto discussion, is that having various sub-definitions increases the risk that people will implement the processes or tools without the principles in mind, which is definitely an antipattern.  The Agile guys would tell you that iterations without collaboration are not likely to work out real well.

And it happens in agile development too - there are some teams here at my company that have adopted the methods and/or tools of agile but not its principles, and the results are suboptimal.

Therefore I propose that "Agile Operations" is an umbrella term for all these things, and we keep in mind the principles/methods/practices differentiation.

If we want to call the principles "devops" for short and some of the practices "infrastructure automation" for short, I think that would be fine...  although dev/ops collaboration is ONE of the important principles, it's probably not the entirety; and infrastructure automation is one of the important practices, but there are probably others.

24 Feb 2010

A Case For Images

After speaking with Luke Kanies at OpsCamp, and reading his good and oft-quoted article "Golden Image or Foil Ball?", I was thinking pretty hard about the use of images in our new automated infrastructure.  He's pretty against them.  After careful consideration, however, I think judicious use of images is the right thing to do.

My top-level thoughts on why to use images:

  1. Speed - Starting a prebuilt image is faster than reinstalling everything on an empty one.  In the world of dynamic scaling, there's a meaningful difference between a "couple minute spinup" and a "fifteen minute spinup."
  2. Reliability - The more work you are doing at runtime, the more there is to go wrong.  I bet I'm not the only person who has run the same compile and install on three allegedly identical Linux boxen and had it go wrong somehow on one of 'em.  And the more stuff you're pulling to build your image, the more failure points you have.
  3. Flexibility - Dynamically building from stem cell kinda makes sense if you're using 100% free open source and have everything automated.  What if, however, you have something that you need to install that just hasn't been scripted - or is very hard to script?  Like an install of some half-baked Windows software that doesn't have a command line installer and you don't have a tool that can do it?  In that case, you really need to do the manual install in non-realtime as part of a image build.  And of course many suppliers are providing software as images themselves nowadays.
  4. Traceability - What happens if you need to replicate a past environment?  Having the image is going to be a 100% effective solution to that, even likely to be sufficient for legal reasons.  "I keep a bunch of old software repo versions so I can mostly build a machine like it" - somewhat less so.

In the end, it's a question of using intermediate deliverables.  Do you recompile all the code and every third party package every time you build a server?  No, you often use binaries - it's faster and more reliable.  Binaries are the app guys' equivalent of "images."

To address Luke's three concerns from his article specifically:

  1. Image sprawl - if you use images, you eventually have a large library of images you have to manage.  This is very true - but you have to manage a lot of artifacts all up and down the chain anyway.  Given the "manual install" and "vendor supplied image" scenarios noted above, if you can't manage images as part of your CM system then it's just not a complete CM system.
  2. Updating your images - Here, I think Luke makes some not entirely valid assumptions.  He notes that once you're done building your images, you're still going to have to make changes in the operational environment ("bootstrapping").  True.  But he thinks you're not going to use the same tool to do it.  I'm not sure why not - our approach is to use automated tooling to build the images - you don't *want* to do it manually for sure - and Puppet/Chef/etc. works just fine to do that (a rough sketch follows this list).  So if you have to update something at the OS level, you do that and let your CM system blow everything on top - and then burn the image.  Image creation and automated CM aren't mutually exclusive - the only reason people don't use automation to build their images is the same reason they don't always use automation on their live servers, which is "it takes work."  But to me, since you DO have to have some amount of dynamic CM for the runtime bootstrap as well, it's a good conservation of work to use the same package for both. (Besides bootstrapping, there's other stuff like moving content that shouldn't go on images.)
  3. Image state vs running state - This one puzzles me.  With images, you do need to do restarts to pull in image-based changes.  But with virtually all software and app changes you have to as well - maybe not a "reboot," but a "service restart," which is virtually as disruptive.  Whether you "reboot your database server" or "stop and start your database server, which still takes a couple minutes," you are planning for downtime or have redundancy in place.  And in general you need to orchestrate the changes (rolling restarts, etc.) in a manner that "oh, pull that change whenever you want to, Mr. Application Server" doesn't really work for.
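To make the "bake images with the same CM tooling" point above concrete, here's a minimal sketch of that flow against a modern cloud API (boto3/EC2 is used purely as a stand-in; the base AMI, instance type, and the user-data that kicks off Puppet are illustrative placeholders, not what we actually ran):

    # Minimal image-bake sketch: boot a stem-cell instance, let the CM tool
    # converge it via its bootstrap, then capture the result as a reusable image.
    # Everything specific here (AMI ID, instance type, Puppet server) is a placeholder.
    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical bootstrap: install and run the CM agent so the image is built
    # by the same automation that manages live servers.
    BOOTSTRAP = (
        "#!/bin/bash\n"
        "yum -y install puppet\n"
        "puppet agent --test --server puppet.example.com\n"
    )

    def bake_image(base_ami: str, role: str) -> str:
        """Launch a builder instance, let CM converge it, and burn an image."""
        resp = ec2.run_instances(
            ImageId=base_ami,          # stem-cell AMI (placeholder)
            InstanceType="t3.small",   # placeholder size
            MinCount=1,
            MaxCount=1,
            UserData=BOOTSTRAP,
        )
        instance_id = resp["Instances"][0]["InstanceId"]
        ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

        # In real life you'd also wait for the CM run itself to finish (e.g. poll
        # for a completion flag the bootstrap writes); omitted for brevity.

        image_id = ec2.create_image(
            InstanceId=instance_id,
            Name=role + "-golden",
            Description="Baked by CM for role " + role,
        )["ImageId"]
        ec2.get_waiter("image_available").wait(ImageIds=[image_id])

        # The builder instance is disposable; the image is the deliverable.
        ec2.terminate_instances(InstanceIds=[instance_id])
        return image_id

At runtime you then launch from the baked image and let the same CM tool handle only the last-mile bootstrap (credentials, content, cluster membership), which is the conservation-of-work argument above.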

In closing, I think images are useful.  You shouldn't treat them as a replacement for automated CM - they should be interim deliverables usually generated by, and always managed by, your automated CM.  If you just use images in an uncoordinated way, you do end up with a foil ball.  With sufficient automation, however, they're more like Russian nesting dolls, and have advantages over starting from scratch with every box.

17 Feb 2010

Agile Operations

It's funny.  When we recently started working on an upgrade of our Intranet social media platform, and we were trying to figure out how to meld the infrastructure-change-heavy operation with the need for devs, designers, and testers to be able to start working on the system before "three months from now," we broached the idea of "maybe we should do that in iterations!"  First, get the new wiki up and working.  Then, worry about tuning, switching the back end database, etc.  Very basic, but it got me thinking about the problem in terms of "hey, we in Infrastructure still operate in terms of waterfall, don't we."

Then when Peco and I moved over to NI R&D and started working on cloud-based systems, we quickly realized the need for our infrastructure to be completely programmable - that is, not manually tweaked and controlled, but run in a completely automated fashion.  Also, since we were two systems guys embedded in a large development org that's using agile, we were heavily pressured to work in iterations along with them.  This was initially a shock - my default project plan has, in traditional fashion, months' worth of evaluating, installing, and configuring various technology components before anything's up and running.   But as we began to execute in that way, I started to see that no, really, agile is possible for infrastructure work - at least "mostly."  Technologies like cloud computing help, and while there's still a little more up-front work required than with programming, you can get most of the way to an agile methodology (and mindset!).

Then at OpsCamp last month, we discovered that there's been this whole Agile Operations/Automated Infrastructure/devops movement already in progress that we hadn't heard about.  I don't keep in touch with The Blogosphere (tm) enough, I guess.  Anyway, it turns out a bunch of other folks have suddenly come to the exact same conclusion, and there's exciting work going on re: how to make operations agile, automate infrastructure, and meld development and ops work.

So if you also hadn't been up on this, here's a roundup of some good related core thoughts on these topics for your reading pleasure!

5 Feb 2010

OpsCamp Debrief

I went to OpsCamp this last weekend here in Austin, a get-together for Web operations folks specifically focusing on the cloud, and it was a great time!  Here's my after action report.

The event invite said it was in the Spider House, a cool local coffee bar/normal bar.  I hadn't been there before, but other people who had been there said "That's insane!  They'll never fit that many people!  There's outside seating but it's freezing out!"  That gave me some degree of trepidation, but I still rolled out in time to get downtown by 8 AM on a Saturday (sigh!).  Happily, it turned out that the event was really in the adjacent music/whatnot venue also owned by Spider House, the United States Art Authority, which they kindly allowed us to use for free!  There were a lot of people there; we weren't overfilling the place, but it was definitely at capacity - nearly 100 people showed up.

I had just heard of OpsCamp through word of mouth, and figured it was just going to be a gathering of local Austin Web ops types.  Which would be entertaining enough, certainly.  But as I looked around the room I started recognizing a lot of guys from Velocity and other major shows; CEOs and other high-ranking guys from various Web ops related tool companies.  Sponsors included John Willis and Adam Jacob (creator of Chef) from Opscode, Luke Kanies from Reductive Labs (creator of Puppet), Damon Edwards and Alex Honor from DTO Solutions (formerly ControlTier), Mark Hinkle and Matt Ray from Zenoss, Dave Nielsen (CloudCamp), Michael Coté (Redmonk), Bitnami, Spiceworks, and Rackspace Cloud.  Other than that, there were a lot of random Austinites and some guys from big local outfits (Dell, IBM).

You can read all the tweets about the event if you swing that way.

OpsCamp kinda grew out of an earlier thing, BarCampESM, also held in Austin two years ago.  I never heard about that one; wish I had.

How It Went

I had never been to an "unconference" before.  Basically there's no set agenda, it's self-emergent.  It worked pretty well.  I'll describe the process a bit for other noobs.

First, there was a round of lightning talks.  Brett from Rackspace noted that "size matters," Bill from Zenoss said "monitoring is important," and Luke from Reductive claimed that "in 2-4 years 'cloud' won't be a big deal, it'll just be how people are doing things - unless you're a jackass."

Then it was time for sessions.  People got up, wrote a proposed session name on a piece of paper, and pitched it in front of the group; a hand count of "how many people find this interesting" was taken.

Candidates included:

  • service level to resolution
  • physical access to your cloud assets
  • autodiscovery of systems
  • decompose monitoring into tool chain
  • tool chain for automatic provisioning
  • monitoring from the cloud
  • monitoring in the cloud - widely dispersed components
  • agent based monitoring evolution
  • devops is the debil - change to the role of sysadmins
  • And more

We decided that so many of these touched on two major topics that we should do group discussions on them before going to sessions.  They were:

  • monitoring in the cloud
  • config mgmt in the cloud

This seemed like a good idea; these are indeed the two major areas of concern when trying to move to the cloud.

Sadly, the whole-group discussions, especially the monitoring one, were unfruitful.  For a long-ass time people threw out brilliant quips about "Why would you bother monitoring a server anyway" and other such high-theory wonkery.  I got zero value out of these, which was sad because the topics were crucially interesting - they were just too unfocused; you had people coming at the problem 100 different ways in sound bites.  The only note I bothered to write down was that "monitoring porn" (too many metrics) makes it hard to do correlation.  We had that problem here, and invested in a (horrors) non-open-source tool, Opnet Panorama, that has an advanced analytics and correlation engine that can make some sense of tens of thousands of metrics for exactly that reason.

Sessions

There were three sessions.  I didn't take many notes in the first one because, being a Web ops guy, I was having to work a release simultaneously with attending OpsCamp 😛