Three reasons why automation doesn't quite cut it

Not every process can, or should, be automated.

Julian Sammy provided a thoughtful response to my recent piece on when processes are beyond the reach of automation, laying out three reasons why automation may not be enough.

1) Impossible automation: Nothing can ever be completely, 100 percent automated to the point where it can handle every single scenario thrown at it. As Julian posits, "any computational device — computer, brain, anthill, whatever — can run into problems that it can never solve.... In practical terms, this means that every process, no matter how well automated, is likely to encounter conditions that it can't resolve by itself: errors and exceptions." So every process, no matter how brilliantly and elegantly executed, will break somewhere along the line and require human intervention.

2) Unaffordable automation: Some processes would take far more time and money to automate than the automation would ever be worth.

3) Irrational humans: The subtitle says it all. Julian drives the point home, noting that pilotless passenger planes and driverless cars are technically possible, but no one is going to sign over complete control to these vehicles. "Humans are awful at taking rational risks, and happily demand a sense of control in preference to actual control."

The third point may be the most significant of all. Automation is great up to a point, but then there is a chasm of trust in systems and machines that many managers and executives find difficult to leap. Just as employees and business partners need to earn trust over an extended period, so do the machines.

Talkback

  • Not everyone is expendable

    But my my my, they're working hard to change that. As it is, we're all mercenaries now. [And with no end in sight.]
    klumper
  • Weak arguments

    1. "Impossible automation". Clearly, the processes and humans designing them are getting better and better. What can not be automated today may be automated tomorrow. The limit is complete automation. Therefore, this is a pretty weak argument.

    2. "Unaffordable automation". The cost of automation keeps dropping. What is not affordable today may be affordable tomorrow. This is the history of automation. Again a weak argument.

    3. "Irrational humans". Humans are also extremely adaptable. The local mass transit system (elevated rail) is entirely automatic, not an operator in sight. It felt a bit funny at first, but now nobody pays any attention to it. The system is extremely reliable. Again a weak argument.

    "Nothing", "can't" and "never" are simply silly words to use when it comes to human endeavors.
    Economister
    • Re: Weak arguments

      @Economister, thanks for the comment. I'll post this rather long response on my site too. I'm interested in having these ideas tested--poke holes, please!

      I agree that
      1. many processes can be automated,
      2. automation gets less expensive over time, and
      3. humans are enormously adaptable.
      I didn't clearly articulate what I meant by 'some processes', which makes the arguments appear weak.

      Impossible Automation: Perhaps a better way to think about this is that some aspects of every process can never be automated.* Error and exception handling are obvious cases. This doesn't mean automation is impossible, or that the 'happy path' process can't be automated. It means that robust, reliable processes should be designed to recover automatically first and with external intervention second. (In my experience, this meshes well with having a security mindset while designing solutions.) For more information on the core ideas, check out the work of Kurt Gödel, a genius of the first order (https://secure.wikimedia.org/wikipedia/en/wiki/Kurt_G%C3%B6del). I'm arguing from Gödel's incompleteness theorems, which describe this particularly confusing, disruptive absolute in algorithmic systems.** For an irreverent treatment from lcfr@abstrusegoose.com, see http://abstrusegoose.com/244; for a slightly rude treatment from Randall Munroe of XKCD.com fame, see http://xkcd.com/468/; for a year-long read, check out the spectacular Gödel, Escher, Bach, by Hofstadter, perhaps starting at https://secure.wikimedia.org/wikipedia/en/wiki/G%C3%B6del,_Escher,_Bach.
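
      To make the 'recover automatically first, external intervention second' idea concrete, here is a minimal Python sketch; the step and handler names are illustrative, not from any real system:

          import logging

          def run_step(step, max_retries=3):
              # Recover automatically first: retry the step a few times.
              for attempt in range(1, max_retries + 1):
                  try:
                      return step()
                  except Exception as exc:  # the errors and exceptions every process eventually hits
                      logging.warning("attempt %d of %s failed: %s",
                                      attempt, step.__name__, exc)
              # External intervention second: hand the step off to a person.
              return escalate_to_human(step)

          def escalate_to_human(step):
              # Placeholder: a real system might open a ticket or page an operator.
              raise RuntimeError("human intervention required for " + step.__name__)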

      Unaffordable Automation: In this case, 'some processes' is a moving target; a process that cannot be automated affordably today may be economical tomorrow. But tomorrow there will still be processes that we can't afford to automate. I believe this is consistent because automation leads to new categories of processes (the buzzword is 'emergent'). Consider email as the automation of the snail-mail process. Email increased the volume and speed of information delivery. In turn, this made managing the flow of emails into a new category of process--which has been automated (with varying success) in several ways.

      Incomprehensible Automation *NEW*: Thinking about the last point brings up another limit to automation that I didn't consider before. Some processes can't be automated because we don't know how to do it, or don't yet have the right tools. As with unaffordable automation, this is a moving target.

      Irrational Humans: "Adaptive" and "rational" are different things. Humans are incredibly adaptive; we can survive almost anywhere, on almost nothing. It's just that our deeply ingrained assumptions about how rational we are don't match how we actually behave much of the time; the scientific evidence on this is clear. I think I can actually make a stronger claim: some processes cannot be implemented to achieve desired results -- automation or no -- if they contradict human nature. In this situation, either the process or human nature must change for the results to be achieved. I think there are four categories here:

      A. The process implementation fails in that people
      - do not follow the process,
      - game the process (intentionally use it to achieve unintended results).
      In many organizations, project reporting and lessons learned processes fall in this category.

      B. The process implementation succeeds, but the results are different from the intended results. Many performance assessment processes, from the collection of metrics to the delivery of reviews, are like this. Bonus pay generally doesn't have the effect we think it does.

      C. The process is altered to account for human nature, and may appear to be irrational or wasteful. Many sales processes are like this, including advertising and marketing. The planning, design and implementation phases of getting citizens to ride an automated train fit here.

      D. The process implementation succeeds by altering human nature in some way. A cultural example is the way the technological world changed human sleep/wake cycles, supported by caffeine and artificial lighting. A biological example is the ability to drink milk as adults, a genetic change to humans that has evolved "through at least four parallel evolutions starting several thousand years ago" (http://www.scientificamerican.com/article.cfm?id=lactose-toleraence).

      ___
      * I use absolute language with some care; I'm trying not to misrepresent the current best-evidence state of knowledge.
      ** Computers are devices that execute algorithms (programs). Processes with steps executed by humans--with or without automation--are also algorithmic, and subject to Gödel's theorems.
      JulianSammy
  • I show this graphically...

    The way I show this is with a graph. The horizontal axis is percent automated and the vertical axis is cost. The line I draw shows that getting to about 80% automation is pretty cost-effective. After that, as total automation is approached, the curve heads toward infinite cost. What I tell people is that getting to about 80% costs $5,000 (just a ballpark figure), while going much higher costs a million dollars.
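
    In rough Python, the curve looks something like this (the 1/(1 - x) shape and the $1,000 base are my ballpark assumptions, not measurements):

        # Toy cost model: cost blows up as automation approaches 100%.
        def automation_cost(fraction, base=1000.0):
            assert 0.0 <= fraction < 1.0
            return base / (1.0 - fraction)

        for f in (0.80, 0.95, 0.99, 0.999):
            print("{:.1%} automated -> ${:,.0f}".format(f, automation_cost(f)))
        # 80.0% automated -> $5,000 ... 99.9% automated -> $1,000,000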

    The reason, as we discuss with clients, is that certain exceptions and corner cases are better handled by humans than by trying to identify them, specify the desired outcome, code them up, and then test them. If something is only hit once a year, is it worth the effort of automating it? Probably not.

    Clients figure out that it is better to have a bunch of processes at 80% and deal with the exceptions manually than to spend a huge amount of money on one project.
    Bruce622
  • RE: Three reasons why automation doesn't quite cut it

    Another more subtle danger with inappropriate automation is around the skills needed to handle those inevitable exceptions.

    There's a clear sequence in skills development, where we start with simple rules, extend to understand more complicated factors, and move onward to the point where we start to be able to deal with real-world complexities.

    The catch is that automation excels at following simple rules (whilst most people don't). IT-based automation can also be very good at handling complications, as long as those complications follow some kind of definable and logical pattern. That will take us all the way to that 80%-automation sweet-spot - and that's the point at which real people need to take over.

    But if they don't have the skills to take over, and if we've given them no means to _learn_ those skills (which we've often blocked _because_ of the automation), we're going to be in deep trouble. Likewise if we need real people to take over from the automation - as we do in many disaster-recovery / business-continuity scenarios.

    In short, any plan for automation _must_ include full processes not only for exception-handling beyond the automation, but also for how people are going to learn the skills needed to handle those exceptions. If we don't do that, we are setting up our automation to fail - and fail badly - but without any understanding as to why it will fail.
    TomGraves
    • Automation of Memory

      @TomGraves, great point. Today we rely on experience to get us through many tough situations; over time and with practice, we learn what works and what doesn't. I suspect we are in the early stages of the automation of memory, which might provide an alternative path for addressing this problem.

      By "automation of memory" I handing over the process of recall of factual information to context aware, unobtrusive augmented reality devices. This is already common in military applications (HUD or Heads Up Display has been around for a long time) and it's starting to reach the general public now. iPhone and Android apps like Yelp, Layar and Google Goggles could be the thin edge of the wedge. Always-on connectivity and access to the internet is in very early days: I think it's a mature HCI technology (Human Computer Interaction) in the same sense that punched cards were mature: it allowed for a useful, but extremely limited and difficult interaction.

      Even today, I hear people talk about Google, Wikipedia and IMDB as their 'external brain packs'. The conversation about "who was the guy in that movie we saw last year" is going away, because it only takes a few seconds to hop on IMDB and find out. It's a punched-card interaction--slow, with many steps, and error-prone--but it's easy enough for most people. What happens when your phone is context aware and can serve up relevant information? Imagine an interface that (see the toy sketch after this list):
      - is aware of your personal experiences, because it
      - sees what you see,
      - hears what you hear,
      - records both of these with location and time information,
      - analyses the above to make it all searchable (e.g. tags the recording with the people present, the event underway, and so on),
      - is aware of external experiences, because it is
      - connected to the internet, with access to large repositories of information tagged in a manner compatible with your experience tags, and
      - unobtrusively presents contextually relevant personal and external experiences to you in a helpful way.
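
      Here is a toy sketch of the data model behind such an interface (all names are hypothetical; nothing here is a real product's API):

          from dataclasses import dataclass

          @dataclass
          class Memory:
              # One recorded experience, mirroring the list above.
              when: str        # time information
              where: str       # location information
              people: tuple    # who was present
              tags: frozenset  # what the analysis step extracted
              note: str = ""

          class MemoryStore:
              def __init__(self):
                  self.records = []

              def record(self, memory):
                  self.records.append(memory)

              def recall(self, *context_tags):
                  # Surface memories that share tags with the current context.
                  wanted = set(context_tags)
                  return [m for m in self.records if m.tags & wanted]

          store = MemoryStore()
          store.record(Memory("2010-10", "Building Business Capability", ("Alex Lee",),
                              frozenset({"conference", "alex-lee"}), "capability mapping"))
          print(store.recall("alex-lee"))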

      A scenario for this kind of augmented reality (AR) technology:

      You go to a conference, and meet several dozen people. You are wearing your glasses, which have HD cameras and high-quality microphones built in. The glasses can also project images onto your eyes, much like a HUD, and use bone conduction speakers to play any audio you choose. The recordings are stored on your phone (wireless connection) and uploaded to a free, cloud-based storage and analysis service from Google.
      A few months later, you run into one of the people from that conference at another meeting. You are wearing your AR gear. As soon as you see the person, your glasses whisper "Alex Lee, Building Business Capability conference October 2010" in your ear. Alex, with a similar AR interface, has it configured to float your name across your face, along with the faces of other people that were encountered in the same context. Her AR interface whispers a note about the topic of your last conversation. You greet each other by name, and pick up where you left off. You ask "Would you like to grab lunch?" Your AR interface searches her public profile (from FourSquare or Yelp), notes that she often eats at sushi restaurants when travelling, and presents the nearest sushi option on your HUD.

      Back to your point about developing the skills needed to handle exceptions. I suspect that this sort of external memory and recall will encourage humans to act as in-the-moment analytical association engines when processes fail. When a server grinds to a halt, the support people can draw on what they know, but also on the bug fixes, trouble tickets and resolution options from across the internet--without lifting a finger to type in a search.

      There are some pretty substantial barriers to this sort of technology being adopted--those human frailties again. The one that looms largest in my opinion is the conflict between perfect external memory and our very unreliable natural memories -- but that's a topic for another post.
      JulianSammy
  • What's missing is....

    Interesting how the emphasis is always on automation. One of the most important points is actually fixing and streamlining the process. To paraphrase an ex-boss of mine: "If you have a pile of pooh and you automate it... all you have is automated pooh".

    Often you can get better cost savings just by fixing the process. Either way, fixing it is required even before you start automating.
    siobhanellis
  • RE: Three reasons why automation doesn't quite cut it

    Julian introduces three great points to think about when contemplating which processes to automate. It's true that automation is not for every process, but if we made a list of processes that should be automated, deployment would be near the top. Deployment automation is certainly possible and affordable, and does not take away IT professionals' jobs. Regarding your third point, people can trust deployment automation because it does not take away their control; it makes their lives easier. Would you agree?
    XebiaLabs