Quest for the One True DevOps, Part II: The dawn of automation and the promise of a formula for business success

Quest for the One True DevOps, Part II: Where the ideal of automation itself seems capable of being automated, packaged, and marketed, and a debate ensues over whether such processed packages of processes can retain their innate humanity.
Written by Scott Fulton III, Contributor


"I don't think that Ops' job is to keep the site stable and fast," declared John Allspaw, at the time a head of technical operations at Yahoo's photo site Flickr. "It's not their job. This might be a news flash to some of you people. Ops' job is to enable the business."

The stage of O'Reilly's 2009 Velocity conference was not the first place anyone ever heard that particular declaration, from Allspaw, his Flickr colleague Paul Hammond, or anyone else. But many IT operations professionals in the audience heard this message as though it were completely fresh. It was, however, one of the first demonstrations of a clever piece of otherwise obvious math: If operations personnel were business enablers, and software developers were business enablers, then separating the two departments worked against that goal.

"What really helps in this regard is when you have operations people who think like developers," began Allspaw at one point. "And developers who think like operations people," completed Hammond, who at that time ran Flickr's engineering team and went on to become platform director at Slack.

"For me, it always comes back to that very first presentation at Velocity, with John Allspaw and Paul Hammond," said Adam Jacob, the chief technology officer of continuous automation platform provider Chef, speaking with ZDNet Scale, "where they were talking about how they did ten deploys a day at Flickr.

Read also: Agile plus DevOps is slowly but steadily reaching enterprise scale

"That change, that shift -- and it's profound, and it sounds simple -- is the idea that your operations people and the folks who develop the software are the same people," he continued. "Their job is to work together in order to keep the system working and moving forward. I think that is the common goal and objective of anybody who starts a DevOps thing. You can call it what you want, and we've since gone on to quantify the business benefits of doing that. . . But what you wind up selling to people, if what you're selling is DevOps or something that makes DevOps happen, is the software that makes that collaboration true."

Is DevOps something that can be installed, like software or a plug-in or an engine? More to the point, are the pipelined automation platforms to which a growing number of organizations are subscribing, Chef among them, guaranteed delivery vehicles for the ideal of DevOps -- a way to plug into the ideal and let it work its magic on its own, automatically? It does seem as though what Jacob is suggesting is that the marketable element of DevOps is not so much an ideal as a software product.

"There are multiple schools of thought around DevOps, and that's the thing I think gets people confused," said JP Morgenthal, CTO of Application Services for technology service provider DXC Technology (formed last year from the merger of HPE's enterprise services division with longtime IT channel provider CSC). A veteran of EMC, Morgenthal is on record as believing the DevOps concept is susceptible to being molded into just about anything that suits the purposes of a marketing message. As the title of his 2016 personal blog post professes, "If everything is DevOps, then nothing is DevOps."

Read also: Executives overestimate DevOps maturity

A journey in the dark


We're at the second of four legs of our journey through the Middle Ground of technology in the enterprise, on a Quest for the One True DevOps. Part I of our journey presented the original source of the division of labor between creators and operators that inspired the initiative to reunite the two departments -- perhaps as collaborators, though perhaps as the same unit.

We begin at "The Source," the place where the products of the "Farm of Cultivation" meet the head of the Automation Pipeline. We're headed down what appears to be a clear route through the valley, through the "Field of Staging." And we're about to approach our first great obstacle.

"For the vendors who are in the space and selling that stuff, I think what they're selling are the tools of that collaboration," said Jacob. "And that's why it tends to look like automation, like infrastructure-as-code, and like continuous delivery pipelines and visualization. That's why that stuff is the DevOps stuff: because it's the stuff that allows people to actually come together and do their jobs."

One of Chef's principal tools is called Habitat. First released in the summer of 2016, it's an infrastructure configuration system for individual applications. It involves a kind of manifest that accompanies the application throughout the cloud, instructing the platform it's running on -- whatever that may be -- about the resources it will likely require. It's then up to the platform to meet the manifest's requests and provision resources for the application as best it can. On the surface, that might not sound like the culmination of seven decades of socio-technical theory and human systems. But consider this: If a developer were just as capable as an operator of producing this manifest, and the system could automate its implementation, then much of the separation of labor that partitions Dev from Ops would be rendered unnecessary.
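To make the manifest idea concrete, here is a hypothetical sketch in Python -- not Habitat's actual format or API, just an illustration of an application declaring its likely resource needs and a platform provisioning against them on a best-effort basis. Every name here (`Manifest`, `Platform`, `provision`) is invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Manifest:
    """Invented stand-in for a resource manifest that travels with an app."""
    app: str
    memory_mb: int
    ports: list = field(default_factory=list)
    depends_on: list = field(default_factory=list)

class Platform:
    """Stand-in for whatever runtime the application lands on."""
    def __init__(self, free_memory_mb):
        self.free_memory_mb = free_memory_mb

    def provision(self, m: Manifest) -> bool:
        # Best-effort: grant the request only if capacity allows.
        if m.memory_mb <= self.free_memory_mb:
            self.free_memory_mb -= m.memory_mb
            return True
        return False

manifest = Manifest(app="photo-service", memory_mb=512,
                    ports=[8080], depends_on=["postgres"])
print(Platform(free_memory_mb=2048).provision(manifest))  # True
```

The point of the exercise: nothing in producing that manifest requires an operator's hands, which is what collapses the partition between the two jobs.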

Read also: DevOps accelerates, requiring new leadership styles

Put another way, one really good reason for merging the Dev and the Ops departments might be if technology renders much of Ops obsolete.

Competing with Chef in the continuous delivery and automation space is Puppet (formerly Puppet Labs). Its engineers and advocates have made the case that DevOps is largely about automation -- about identifying the repeatable steps involved in the creation, staging, and delivery of software, and codifying them as much as possible. In the recent past, Puppet has made the case that automation may apply to IT personnel associated with DevOps, or perhaps to an emerging class of professionals termed "AppOps," or people with operational authority within an organization over the production of its applications.

Puppet Chief Technical Strategist Nigel Kersten, speaking with us, defined DevOps as "a loose and evolving collection of engineering, behavioral, and organizational practices, that are focused on going rapidly, safely, and sustainably from idea to customer or business value."

It's a definition custom-fit for a series that uses maps as its metaphor, since it certainly does cover the whole map.


"I feel like we have pretty good agreement around that level, in terms of description of DevOps," Kersten continued. "But then what we actually see is people asking, 'Well, how do I start? What is it that I actually do to get going?'" Most definitions of DevOps that bear scrutiny, he believes, will incorporate automation to a high degree. However, he told us, he has encountered a growing number of customer organizations that have not yet begun to automate, even though they're certain they've embarked on their "DevOps journey."

Read also: 5 reasons DevOps will be a big deal in the year ahead

So there is a multitude of elements, all being portrayed as critical to the implementation of DevOps practices. There's "infrastructure-as-code," in which the resources applications require, not only to operate but to evolve, are specified as manifests or contracts, so that virtual machines may be composed around them. Then there are the tools various departments may require to establish similar contracts among themselves -- Kersten cites automated testing and continuous integration as examples.

"All of those things fit squarely in the middle of what I see as DevOps," he said. "But I think the really key aspect that we see a lot of people struggle with is, organizing your teams in such a way that operational pain is shared, and you can take a whole systems approach to identifying bottlenecks and unwinding them."

It sounds as though Kersten may be defining DevOps as whatever tools and practices serve the purpose of distributing job functions throughout departments and facilitating communication between them. From a purely holistic perspective, that doesn't sound like a concept exclusively involving Dev and Ops, but rather anything that accomplishes a general alignment between business units. But Kersten explicitly stated that alignment is not the goal in itself; rather, as in a classic Michael Porter value chain, the end goal is typically the nebulous concept of value as perceived by the customer.


"If you get enough people to draw their pipelines for you," said Chef CTO Adam Jacob, "over time they all start to look really similar. But they use very different words to describe them. One talks about a 'release train,' and another talks about 'release on-demand,' but their pipelines are identical. It's just a question of when they get triggered. But they use different names for different phases. And that, I think, is a very bad pattern of saying, 'Well, it's whatever you want, so it has no meaning. It's so complicated and there's so much variation that you'll never be able to solve it.' I don't think that's true. I think there actually is quite a bit of commonality, and a couple of common patterns that you can really see. You can push people in their direction, and they'll work."

If everything is DevOps

"To me, the reason you want to invest in DevOps is to mitigate risk," remarked DXC's JP Morgenthal. "Everything else, all the other stuff -- the culture, the measurement -- ultimately stems from that. What are you mitigating the risk of? The risk of failure after you put something into production. Along the way, you're simplifying, you're reducing the cost of that repeatability and automation that you've incorporated. You're taking human labor out of the puzzle, and the labor you're taking out typically is trying to fix the problem that they introduced."

One of the great achievements of the category of IT infrastructure services now referred to as continuous integration and continuous delivery (CI/CD, with the last part often interchanged with continuous deployment) is that it presents software developers with the means to build and test their work in an environment that behaves identically, or nearly identically, to the production environments where it will eventually be delivered or deployed. CI/CD is the software category most often equated in the public mind with DevOps itself, and the brand of software that may be responsible is the open source project Jenkins, whose mascot is a butler. I've often told the story of speaking with attendees during lunch at a 2016 tech conference, asking a group of them whether their organizations practice DevOps, and hearing one respond, "Yea, we have DevOps -- that's the one with the little butler guy, right? We also have SharePoint, Workday, and [Office] 365."

The hallmark of Jenkins actually isn't the butler but the pipeline. Although veteran software engineer Martin Fowler is properly credited with the introduction of the deployment pipeline metaphor to CI/CD, it is undoubtedly Jenkins that is responsible for the construction of such pipelines in businesses worldwide. Each segment of a pipeline represents a controlled stage of the software development process, from inception through testing, into staging and delivery to production.

"The piece that changes for most organizations is where they start," remarked Robert Stroud, principal analyst with Forrester.

A typical Forrester client, said Stroud, will approach the firm explaining that its developers have fully embraced the principles of Agile development (a topic for a whole 'nother journey), only to discover that those developers' processes were not aligned with those of its IT operators. "More often than not, there's just a set, defined strategy we'll take that customer through, to get them through to the end result, which is releasing with extreme velocity and quality. They actually want to deploy smaller pieces of change, new features, new ideations, net new ideas, on a really consistent and continual basis.

Read also: What is DevOps? An executive guide to agile development and IT operations

"This is where we see the true pressure coming," Stroud continued. "Most IT organizations are getting extreme pressure from their businesses, saying, 'We want to be able to change our products and services on a really regular basis, and take advantage of new capabilities that we've come up with in terms of our market analysis and new opportunities. And some of those opportunities won't work, so we probably won't continue down that path.'"

Client firms, the Forrester analyst told us, believed they were focused on reliability and quality. But what passed for "focus" consisted of manual checks and balances that rarely added value to the process. "They were so busy chasing these manual checks and balances," said Stroud, "that the operations organizations haven't really automated good practices for consistent environments and infrastructure, so it could be done on a repeatable, scalable basis. And they also haven't implemented widespread automation."

Barriers and guardrails

Splunk Chief Technology Advocate Andi Mann actually sees the reverse phenomenon, where companies seeking to embrace DevOps dive head-first into the automation part. "Automation is where a lot of people come into DevOps as an entry point," he told us, though he advises against it. A company can't just build pipelines and see results, he argued; rather, the company needs to make the internal cultural preparations needed to cope with the introduction of what many would consider a foreign agent.

Yet automation "is an important part of being able to get people to collaborate better, because if you automate more things, then a number of outputs happen," continued Mann, a veteran of CA Technologies. "One is that you have audit and control over your process. That means you can free up people to make their own decisions. They can do what they need to do within a bunch of guardrails, because automation has set up those guardrails."

At this point, Mann's description truly does have a very familiar ring to it. It's the idea that once science has identified the specific elements of a work process, that process may be codified for efficiency, and specialists may be cultivated to ensure reliability and reduce risk. It's Taylorism, which Enid Mumford first witnessed as early as 1949, and found to be a catalyst for inefficiency.

But here is where Mann saves himself: Once processes are automated, the amount of human effort required to manage those processes is reduced. "It lets people do things for themselves and go faster," he explained. "It improves quality, because you do automation of testing, of QA, of releasing. And if it doesn't meet certain quality barriers, the automation won't let you release it."
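Mann's "quality barrier" can be illustrated with a minimal release gate. The metric names and thresholds below are invented for the sketch, but the principle is the one he describes: if the measured quality doesn't clear the guardrails, the automation refuses to release.

```python
# Hypothetical release gate: the build ships only if every quality
# guardrail is cleared. Thresholds here are illustrative, not standard.
def release_gate(metrics, min_coverage=0.80, max_failures=0):
    return (metrics["test_coverage"] >= min_coverage
            and metrics["failing_tests"] <= max_failures)

print(release_gate({"test_coverage": 0.91, "failing_tests": 0}))  # True
print(release_gate({"test_coverage": 0.64, "failing_tests": 3}))  # False
```

The guardrail does the arguing that a release manager otherwise would, which is how automation frees people to make their own decisions inside it.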

"When you work inside one of those enterprise companies, you're told not to collaborate," remarked Chef founder and CTO Adam Jacob, "or you're told that the way to collaborate is through the org chart. So part of what you need to make [DevOps] work is a change in the tooling. You have an organization that is working just fine without that collaboration. You're a giant, global bank or insurance company, a huge retailer -- you became huge because, by definition, you're pretty good at the thing that you do. Getting past the bias that says, 'The way that we do it right now is the best way to do it,' and you know that it's going to work, requires a change not only in philosophy, and the soul of people wanting to collaborate, but also the way that people actually work. The day-to-day grind has to be different than it was before. That's where the tooling comes in."

Read also: Harnessing AI to make DevOps more effective

Put another way, enterprises may need better sensors and more relevant data, to bring them up to speed with the fact that more substantive, process-related changes are actually required. For software and service vendors, this argument has the danger of sounding a bit self-serving: Enterprises need new tools to help them understand not only the need for that first set of new tools, but subsequent sets as well. Meanwhile, DevOps vendors and researchers such as Chef, Puppet, and DORA are responding to enterprises that they acknowledge can't see the dangers in front of them, with the promise that the antidote may manifest itself in any number of ways. So it's easy to see why some firms may still be skeptical.


Jacob did offer this message of hope: "I would argue that the number of successful patterns is not infinite. If you think about how many stable, high-velocity shapes there are for a given business process, there are not an infinite number.

"Right this second, every industry all over the world is trying to understand how to incorporate this higher velocity of technology advancement and R&D, into their existing business processes. Over time, each of those industries is going to find the stable shapes. They might emerge with one, two, or three. I'd be shocked if there were half-a-dozen. So there is stability to be found here."

It's this reconciliation that has enabled us to cross the Culture Chasm, in our journey through Middle Ground.

Read also: Is DevOps sustainable after the consultants leave?



In the next stage of our journey, we'll investigate whether any technology that asserts itself to be about the science of simplification requires a philosophy in order for it to be useful. And finally in Part IV, I'll introduce you to a person who was a key influencer in the DevOps community, who challenged himself to push the boundaries of DevOps beyond the domain of the CIO. Until then, hold fast.


The map of Middle Ground for this series of Scale was drawn by Katerina Fulton.
