Automation makes things faster, cheaper and more efficient; that's great if you want the same thing every time and if you want to free up people from the boring bits (creating invoices, grinding out components, testing code for bugs) for the creative parts (working out a new business idea, designing the car the components go in, coming up with the idea for what the code will do).
Automation was the basis of the industrial revolution and the way technology has transformed business. But when inhuman efficiency appears in systems that have grown up around messy, inefficient humans, the side effects are often painful.
That's why technology and privacy are so often opposed. If I say something in the local bar or on the train, people might overhear — but that doesn't stop people having very personal conversations there.
I've been unable not to hear about people's illnesses, relationship issues or job interviews. Sit in the coffee shop of a hotel where a conference is going on and you can almost guarantee that you can listen in on a few interviews.
It's not usually seen as a privacy issue because it's not very efficient. As long as you look around to make sure your boss isn't behind you at the bar while you're chatting away about your plan to replace him, you don't usually have to worry about what you say coming back to bite you.
Even someone eavesdropping and typing it into Twitter doesn't mean your conversation will necessarily reach anyone you'd be embarrassed to have know about it.
Humans aren't efficient enough to hear everything, or to remember it perfectly. But technology is. Whether it's an ill-considered comment about your job on Facebook, a picture that ends up on a revenge site or full-scale NSA surveillance, when automated efficiency hits human behaviour there are unintended consequences.
The same is true in capitalism, as Bill Janeway points out in his excellent book Doing Capitalism in the Innovation Economy. When you make a company or a process efficient, not all of the waste you take out is actual, no-use-to-anyone Keynesian waste; some of it is Schumpeterian waste, the side effects of which are useful developments like the internet, Teflon frying pans and Ethernet.
Those came out of the 'wasteful' investments at ARPA and NASA and Xerox PARC. 3M didn't set out to create Post-it notes; an inventor had a glue that didn't work very well and came up with another use for it.
An overwhelming proportion of startups fail, but along the way enough of them act like tiny R&D teams that we get new developments. At the other end of the scale are research labs like MSR and HP Labs, which look ten or fifteen years ahead and appear to go in hundreds of different directions, many of which don't end up as products.
You wouldn't have called IBM's chess-playing computers a sensible investment, but Deep Blue was part of the path to Watson and machine learning APIs that any developer could use (because not every developer is going to be able to work out how to do good machine learning algorithms on their own).
A perfectly efficient system is sterile and implacable. It doesn't allow space for the creative mistakes and waste that turn out to be useful, or for the funny ways people feel and react. We're spending a lot of time making things efficient and not that much time thinking about what that means for the humans involved.