The robotic datacentre: Is this the future of the cloud?
Summary: By the end of this decade it could be possible to automate datacentres to the point where humans need never enter them. The robotic datacentre could mean profound changes for the cloud
As part of this Cloud 2020 series I've been looking at how the cloud will develop over the next decade. So how does the datacentre itself fit in with all this change? Is a robot-run datacentre achievable at some point in the future?
I asked Facebook's VP of hardware design and supply chain, Frank Frankovsky, where he thought the datacentre could develop next.
"I've always envisioned what could we do with a datacentre if humans never needed to go into the datacentre," Frankovsky says. "What would a datacentre look like if it wasn't classified as a working space? What if it looked more like a Costco warehouse?"

Frankovsky's ideas are informed by two things: his previous stint as head of Dell's skunkworks Datacentre Solutions Division, which sold custom equipment to massive cloud companies, and his role as chairman of the Open Compute Project — an ambitious scheme initiated by Facebook to design its own server, storage, rack and potentially networking gear, with a view to publishing these designs for the rest of the IT community to use.
Though Open Compute is in its early days, the direction its equipment is taking breaks with the approaches popularised by major enterprise vendors. Instead of designing very high-performance servers with many 'value-added' features, Facebook has gone the other way and come up with a server specification that is lightweight and that prioritises ease of maintenance and performance-per-watt above everything else.
The company is also committed to designing technologies that let it control many of its servers with (frequently open-source) software, breaking from the closed management tools offered by IBM, Dell, HP and others. "That gratuitous differentiation that occurs in the system management space is going to go away over time," Frankovsky says.
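To give a rough sense of what that software-driven approach can look like, here is a minimal sketch that polls a fleet of servers over IPMI using the open-source ipmitool utility. The host names and credentials are purely hypothetical; this is an illustration of the idea, not Facebook's actual tooling.

```python
#!/usr/bin/env python3
"""Illustrative fleet check: ask each server's BMC for its power state
over IPMI (via the open-source ipmitool CLI) and flag anything that is
not powered on. Hosts and credentials below are hypothetical."""

import subprocess

FLEET = ["rack01-node01", "rack01-node02", "rack02-node01"]  # hypothetical hosts
IPMI_USER = "admin"        # placeholder credentials
IPMI_PASS = "changeme"

def power_status(host: str) -> str:
    """Return the chassis power state ('on'/'off') reported by the BMC."""
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host,
         "-U", IPMI_USER, "-P", IPMI_PASS, "chassis", "power", "status"],
        capture_output=True, text=True, check=True,
    ).stdout
    # ipmitool prints e.g. "Chassis Power is on"
    return out.strip().rsplit(" ", 1)[-1]

if __name__ == "__main__":
    for host in FLEET:
        try:
            state = power_status(host)
        except subprocess.CalledProcessError:
            state = "unreachable"
        print(f"{host}: {'ok' if state == 'on' else state + ' - flag for remediation'}")
```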
Over time Facebook expects that it will use more and more Open Compute-style equipment and software. Eventually, this could let it change the way it builds datacentres. One area the Open Compute Project is concerned with is "making sure the components are available at the front of the rack rather than the back of the rack", Frankovsky says.
No more people
Eventually Facebook would like to "change the way the mechanical designs work so the technicians never have to go into the hot aisle to service the machine".
And after Facebook conquers the hot aisle, it hopes its ability to manage its infrastructure mostly via software could cut the amount of time people spend on the IT floor of the datacentre — eventually, it might be possible to have no one there at all, Frankovsky says. This holds a number of intriguing possibilities for datacentres.
If people did not need to go into a datacentre, then you could deploy devices floor to ceiling and run them at much higher temperatures, allowing the processors inside them to perform more efficiently, Frankovsky says.
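The efficiency argument is easiest to see at the facility level. Here is a back-of-envelope sketch, with purely hypothetical numbers, of how letting the room run hotter cuts the cooling overhead (and so the facility's PUE).

```python
# Back-of-envelope illustration (all numbers hypothetical): if a hotter,
# lights-out room lets the cooling plant drop from 0.5 W to 0.2 W of
# overhead per watt of IT load, PUE falls and the same power feed does
# more useful work.

it_load_kw = 1000                   # hypothetical IT load
overhead_cool_room = 0.5            # cooling W per IT W, human-friendly room
overhead_hot_room = 0.2             # cooling W per IT W, hotter room

pue_cool = 1 + overhead_cool_room   # PUE = total power / IT power
pue_hot = 1 + overhead_hot_room

total_cool = it_load_kw * pue_cool  # 1500 kW drawn for 1000 kW of IT
total_hot = it_load_kw * pue_hot    # 1200 kW for the same IT load

print(f"PUE {pue_cool:.1f} -> {pue_hot:.1f}, "
      f"saving {total_cool - total_hot:.0f} kW for the same IT load")
```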
Looking further ahead, the datacentre could be treated as a "degrade and replace" model, Frankovsky says. "Essentially, you fill up a datacentre, put it into production and weld the door shut." If a company did this, it would only need to send someone into the facility every six months to perform processor upgrades and swap out failed storage, he says.
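What would six months without intervention actually mean? A rough sizing sketch, with entirely hypothetical fleet sizes and failure rates, gives a feel for how much dead capacity the software would have to route around between visits.

```python
# Rough sizing exercise for a "weld the door shut" facility (all numbers
# hypothetical): with a 2% annualised drive failure rate and a 4% server
# failure rate, how much hardware is dead by the six-month visit?

servers = 50_000
drives_per_server = 10
afr_drive = 0.02      # annualised failure rate, hypothetical
afr_server = 0.04
months_between_visits = 6

fraction_of_year = months_between_visits / 12
dead_drives = servers * drives_per_server * afr_drive * fraction_of_year
dead_servers = servers * afr_server * fraction_of_year

print(f"Expected failures before the next visit: "
      f"~{dead_drives:.0f} drives, ~{dead_servers:.0f} servers")
# => roughly 5,000 drives and 1,000 servers: the software layer has to
#    work around them, and the facility needs that much spare capacity.
```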
Ultimately, if the hardware and the software get developed, a lights-out datacentre "may be a realistic model" by 2020, he says.
A robotic future
Looking further ahead, Frankovsky imagines "a truly automated warehouse environment" where machines automatically service and swap out hardware. However, Frankovsky says this type of datacentre lies beyond 2020.

A clue to how the robotic future could develop can perhaps be seen in Amazon's acquisition of Kiva Systems in March.
Kiva specialises in robotic technologies to make warehouses more efficient. Amazon bought Kiva in an attempt to deal with some of the logistical problems brought about by the scale of the warehouses from which it ships its retail products.
The techniques applied by Kiva Systems to managing scale — automation, the use of robots to transport equipment, embedding sensors and software management into as many components as possible — deal with the same sorts of problems that a company would face in getting a robot datacentre off the ground.
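In software terms, the common thread is turning component-level telemetry into location-addressed work orders that a machine (or, for now, a technician) can carry out. The sketch below is purely illustrative; every name and data structure in it is hypothetical.

```python
# Illustrative sketch of the plumbing a robot-serviced facility implies:
# health telemetry arrives per component, and failures become work orders
# with a physical location something can navigate to. All hypothetical.

from dataclasses import dataclass
from queue import Queue

@dataclass
class Component:
    rack: str
    slot: int
    kind: str          # e.g. "drive", "server"
    healthy: bool

@dataclass
class WorkOrder:
    rack: str
    slot: int
    action: str

swap_queue: "Queue[WorkOrder]" = Queue()

def triage(telemetry: list) -> None:
    """Turn failed components into location-addressed swap tasks."""
    for part in telemetry:
        if not part.healthy:
            swap_queue.put(WorkOrder(part.rack, part.slot, f"replace {part.kind}"))

# Example telemetry snapshot
triage([
    Component("rack17", 4, "drive", healthy=False),
    Component("rack17", 5, "drive", healthy=True),
])

while not swap_queue.empty():
    order = swap_queue.get()
    print(f"Dispatch to {order.rack} slot {order.slot}: {order.action}")
```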

Talkback
This is good
Re: This is good
I am expecting the rise of Skynet-level systems in, let's say, one decade or two. One emerging from the US Army's computing systems and the other from Google.
Has to happen
This is beginning to sound
no it does not
No one needs to enter the datacenter??
the major flaw of the article
As it stands now, why would anyone enter a datacenter other than to swap failed hardware, fix issues with air cooling or power, or perform hardware upgrades?
As far as I know, all the software upgrades and installations have been done remotely for many years now. Are you telling us that IT staff routinely come into the datacenters to restart the servers? Give me a break.
Have you ever taken a server out of a rack, or perhaps seen others do it? It is screwed to the rack, and it is connected to the outside world by a dozen wires. So now someone will actually use robots to swap hardware? You are kidding me, right?
My recommendation to the ZDNet staffers is to do their homework first, and generally to write about things they know about.
Placing 'data centers' and 'robots' into one sentence is clearly not enough.
Article was future-facing
The reason for the term "robot" is that it allows for a form of maintenance that current equipment does not permit. For example, by 2020 it's likely that connectivity for servers will have moved to a top-of-rack switch of some sort, so swapping servers will be doable as they'll be bladed. There are also storage arrays being designed in this area by Facebook that permit modification by machine rather than by human hands. As for software, while a lot is done remotely, you have to do local work when you're changing things like network topology, but the rise of SDN could help stop this.
What do you reckon?
Only an evolution of existing technology
The robotic computer --> Soon after, the robotic internet browser,
BTW, whatever happened to those smart programming languages from about 25-30 years ago, which would do all of our programming for us, no programmers needed?
I can't wait for the robots that will go to the bathroom for us.
EMP resistance?
As a parting thought: just imagine how much data, how many scientific studies, how much raw knowledge and how many records could be lost if the cloud fails. People suck at backing this stuff up when they have too much faith in a technology.
Has some good points, but also misses some obstacles to the vision
The major server hardware manufacturers are promoting variations of data-centre-in-a-box, which means a collection of racks, modules, chassis with blades or other types of server modules in them. However, once you select one of these, you are stuck with that vendor if you want to efficiently expand your server farm. So select carefully the vendor which gives you the most standardised and easily expandable hardware for the most types of servers and operating systems you will need in the next ten years or so.
Question: would servers run more efficiently if they ran at higher temperatures? Maybe, if the entire server is designed for that - today's servers are spec'd for a certain operating temperature range, and it is that, not what humans in the data centre can endure, which requires a certain amount of cooling.
What is more important for efficient data centre operations are the interconnects between servers, and between servers and storage. Standardisation of the interconnects to provide sufficient bandwidth between all parts is vital.
When you have all this in place you also need good software tooling to create and manage virtual servers and applications with all interconnects, capacity management, software updates and so on. This isn't rocket science, but needs more standardisation also.
BTW: don't be scared of "the cloud" - it's just a collection of servers and storage as before.
robots
Processors run more efficiently in high-temperature environments???
Really???
Having said that, a modified tape-silo robot like the ones in the old STK PowderHorns, which had video cameras in them, could replace or upgrade failed blades and disks; better still if blades had the same SBB form factor as disks.
Of course, this assumes that you're not running your datacenter in a developing nation where the cost of labor is so low that it's easier to have a small army of techs with breathing gear do it instead.