Photos: The evolution of the datacentre

How the datacentre developed from the earliest computers to the present day
By Tim Ferguson
40153869-1-unisys-univac-1952.jpg
1 of 13 Unisys

The datacentres that we know today have their origins in the early days of computing when individual computers took up whole rooms.

To manage the security, cooling and power requirements of these enormous machines, specially designed rooms were developed to house them. These rooms were the first places in which the computer racks, elevated floors and cable trays that are commonplace in modern datacentres were pioneered.

One of the earliest commercial computers was the Universal Automatic Computer 1, or Univac 1, produced initially by Eckert-Mauchly Computer Corporation and then Remington Rand, after it acquired the company. The first Univac was used by the US Census Bureau in March 1951.

The fifth Univac 1 - originally built for the US Atomic Energy Commission - was used by US broadcaster CBS to predict the correct result of the 1952 presidential election and is shown above with legendary CBS news anchor Walter Cronkite.

Organisations making use of computers had previously been limited to those in academia and government but the Univac made it possible for businesses to take advantage of that computing power. Its arrival helped lay the foundations of business computing, which would lead to the development of mainframes and datacentres in later decades.

40153869-2-unisys-univac-1005-computer-van-vietnam.jpg
2 of 13 Unisys Corporation

The Univac 1005 was introduced in 1966 and used an internally stored program for punched-card data processing - as opposed to the plugboard system of its predecessor, the Univac 1004.

The mobile computer was used by the US Army on the battlefield and was in service during the Vietnam conflict.

The idea of a mobile computer room is making a return with many datacentres now employing a modular approach in which containers of servers can be added and removed as needed.

40153869-3-ibm-madison-datacentre-edited.jpg
3 of 13 IBM

The earliest computers were built to perform very specific tasks. But by the 1960s, companies such as IBM were providing customers with general access to their systems on an hourly basis to carry out processing tasks. IBM's data processing centre in New York City, shown above, was one such facility.

The idea of buying processing power for a particular time period is one that is still popular today, with enterprises able to buy in computing grunt temporarily from external datacentres through cloud services.

40153869-4-ibm-370-model-195.jpg
4 of 13 IBM

The development of mainframes was the next step in the evolution of computing towards datacentres.

The term mainframe initially referred to the cabinets used to house the CPUs of early computers but later described the more powerful commercial machines themselves.

During the 1970s and 1980s, mainframes became the principal way in which businesses accessed computing power on their own premises before the client-server approach emerged.

Due to their power and cooling requirements, mainframes prompted the development of some of the concepts seen in today's datacentres. Techniques for cooling, ventilation and isolated power supply were all elements that first appeared in mainframe installations and which were developed further for the datacentres of today.

At the time of its launch in June 1970, IBM's Model 195 mainframe, pictured above, was the most powerful computing system built by the company. It used monolithic circuits that could process instructions at a rate of one every 54 billionths of a second - a cycle time of 54 nanoseconds.

40153869-5-hp-datacentre-3.jpg
5 of 13 HP

The arrival of client-server computing and the increasing complexity of IT infrastructure led to a shift from single computers in the form of mainframes to the type of datacentre set-up we'd expect to see today, in which a number of connected computers are used to support business operations.

By the 1980s, datacentres were becoming much more common on company premises and were being used to help run businesses. Companies such as HP were selling standardised systems such as the HP 3000 minicomputer from 1986, pictured above, which companies could connect together to increase computing power.

40153869-6-hp-datacentre-1.jpg
6 of 13 HP

As the size and computing power of datacentres increased over the next few decades, the search for greater power efficiency gathered pace.

By 1990, HP had launched its HP 9000 server system, pictured above, which boosted efficiency by drawing significant power only when its transistors switched on or off.

40153869-7-hp-datacentre-4.jpg
7 of 13 HP

As datacentres increased in complexity, the need for software to manage them also became more important.

HP's OpenView software on the screen above is an example of such management technology from 1994. It was designed to allow datacentre operators to monitor and control networks and systems as well as datacentre functions from a central console.

40153869-8-dell-datacentre-1.jpg
8 of 13 Dell

The concept of blade servers became part of the datacentre lexicon during the first decade of the 21st century.

These thin, modular servers can be slotted in and out of their chassis, or hot-swapped, without causing disruption to the rest of the datacentre.

This image shows a Dell high-performance computer with blade servers as used by seismic processing and imaging company Geotrace.

40153869-9-tesco-store.jpg
9 of 13 Tim Ferguson/ZDNet

With many datacentre managers tasked with cutting running costs and reducing energy use in the datacentre, the 2000s saw the rise of virtualisation.

Server virtualisation, whereby software is used to partition physical servers into several virtual machines, is one of the technologies making an impact on power usage as well as space in datacentres. Being able to reduce the number of physical servers in a datacentre through virtualisation reduces power consumption and the need for cooling.

Virtualisation also allows datacentres to be more flexible as it allows for virtual machines to be added or removed and for workloads to be shifted without the need to move physical hardware.
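The power-saving logic behind consolidation can be sketched with some back-of-the-envelope arithmetic. All figures below are illustrative assumptions, not numbers from the article:

```python
# Hypothetical consolidation estimate: virtualising a fleet of physical
# servers onto a smaller number of host machines.
physical_servers = 100    # servers before virtualisation (assumed)
consolidation_ratio = 10  # virtual machines hosted per physical machine (assumed)
watts_per_server = 400    # average draw per physical server, in watts (assumed)

# Hosts needed after consolidation, and the power freed up by
# decommissioning the remaining physical machines.
hosts_after = physical_servers // consolidation_ratio
saved_kw = (physical_servers - hosts_after) * watts_per_server / 1000

print(f"{hosts_after} hosts, ~{saved_kw} kW saved")  # -> 10 hosts, ~36.0 kW saved
```

The real saving is smaller in practice, since consolidated hosts run at higher utilisation and draw more power each, but the direction of the effect is the same - and every watt saved at the server also reduces the cooling load.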

One of the more recent converts to server virtualisation was UK supermarket giant Tesco which carried out a project to virtualise 1,500 of its UK Windows-based servers with technology from Citrix last year.

The scheme was expected to cut datacentre carbon emissions by 20 per cent.

Photo credit: Tesco

40153869-10-google-datacentre.jpg
10 of 13 Screengrab from Google video

The irresistible rise of Google in the first decade of the new century has made its datacentres some of the biggest in the world.

Shown above is Google's first container-based datacentre, opened in 2005. The 45 containers over two storeys can hold a total of more than 45,000 servers and the facility supports 10MW of IT equipment load. It has a power usage effectiveness (PUE) - the total power needed to run the facility divided by the power consumed by the IT equipment itself - of 1.25.
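The PUE figure quoted above is a simple ratio. As a rough illustration (the 12.5MW total-facility figure is inferred from the stated 1.25 ratio and 10MW IT load, not quoted directly):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A value of 1.0 would mean every watt entering the facility reaches
    the IT equipment; anything above 1.0 is cooling, lighting and other
    overhead.
    """
    return total_facility_kw / it_equipment_kw

# 12.5MW total facility power supporting 10MW of IT load
print(pue(12_500, 10_000))  # -> 1.25
```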

40153869-11-datacentre8.jpg
11 of 13 Microsoft

Microsoft's Chicago datacentre, which opened in 2009, also relies on containers.

It has been partially built by joining together 56 containers, each housing between 1,800 and 2,500 servers. About a third of the servers are located in a more traditional server room set-up elsewhere in the facility.

The container approach - first seen in the 1960s with the Univac 1005 - provides a more flexible way to add and remove capacity for the datacentre.

40153869-12-ibm-green-datacentre-heat-exchanger1.jpg
12 of 13 Martin LaMonica/CNET News

In the past few years, IT departments have come under increasing pressure to procure green products and services, and the datacentre is not exempt from the sustainability agenda.

Shown above is a shot of IBM's Green Innovations datacentre in Connecticut. IBM is using the facility to research how to make its datacentres as green as possible.

One of the techniques being experimented with is the Cool Blue system, which circulates cold water through a server aisle door to lower the temperature of air exiting the servers.

40153869-13-capgemini-merlin-cold-aisle-vents-open.jpg
13 of 13 Tim Ferguson/silicon.com

Capgemini's Merlin datacentre also prioritises sustainability. As well as being energy-efficient, the datacentre uses sustainable and recyclable materials, and employs free cooling in which air from outside the building is used to reduce datacentre temperatures.

This is a view into one of four cold aisles in a Merlin module, through vents in the access door. The datacentre draws air from the air optimiser through the door into the datacentre to cool the server equipment. The vents in the door are fully open in this case.

Find out more in our photo story The datacentre casting a green spell.
