
Datacentres of the 21st century

Around the world, datacentres are transforming in response to energy- and cost-reduction imperatives, as well as new business demands for greater agility, flexibility, and speed to market.
Written by Tim Lohman, Contributor

The datacentres of the 21st century are increasingly designed with business rather than technology strategy in mind. They are also highly virtualised to better support cloud, and feature the latest in IT hardware: blade and micro servers, and solid-state drives (SSDs).

Sophisticated software is also increasingly used; everything from software-defined networking (SDN) to demand modelling and forecasting, to orchestration and management tools for an ever-smaller team of highly skilled datacentre staff.

Rising energy costs and green imperatives are also powerful transformational forces, driving changes to everything from datacentre layout to cooling to IT hardware and software itself.

Just where do Australia's enterprises stand amid this global datacentre transformation? ZDNet spoke to two organisations — the National Australia Bank (NAB) and wholesale cloud provider OrionVM — to find out.

Datacentre strategy

Tim Palmer, senior manager of datacentre transformation at NAB, said that the bank's datacentre strategy is very much in line with the ideal of the datacentre of the 21st century. He said that the strategy, part of a wider IT transformation underway at NAB, is directly influenced by the bank's business strategy, which is all about remaining focused on banking and financial services.

Standby battery banks ensure power between an outage and the generators kicking in. Image: NAB

This meant making a clear decision that NAB is not in the business of running datacentres. As such, the bank is currently consolidating more than 20 disparate datacentres into two "fortress" datacentres, including a new facility run by Digital Realty, which NAB began moving into early this year.

The business strategy has also meant that the bank decided that it is not in the business of providing IT infrastructure or networking services. These have been handed off to IBM and Telstra, respectively.

"We want to be able to focus on the things that add value to our customers and shareholders, and we wish to consume technology services from technology experts," Palmer said of the thinking informing the strategy. Taking this approach also means that the bank gains access to skills and innovation that it cannot hope to develop internally.

"We can't possibly pretend to have the research and development budget that IBM has in the hosting and storage space, or like Telstra has in the networking space, or like Digital Realty has in the datacentre space... It makes sense to consume services from companies which have those economies of scale and skills in-house."

Over at OrionVM, business strategy is also informing the company's datacentre strategy — despite, or indeed because of, being a cloud provider. OrionVM's managing director Sheng Yeo said that the company's business strategy hinges on being able to quickly scale and develop points of presence close to its customers, as well as leverage the skills and facilities of a dedicated outsourced datacentre operator. This has meant that its datacentre strategy has been to turn to a large-scale datacentre operator, Equinix.

"We didn't roll out our own facilities in our own premise, primarily for reach, scale, and agility reasons," Yeo explained. "We are leveraging the close to 100 datacentres Equinix has globally, so that the moment we sign a customer in a particular region, we can roll out a point of presence there with minimal effort and accelerated timelines."

Given that OrionVM also counts a number of state and federal government agencies among its customers, choosing to outsource its datacentre has also meant that it can leverage Equinix to meet its compliance and security requirements. It has also leveraged the provider's size and reach to move closer to its customers' own IT infrastructure.

"In every region, Equinix has one of our customers as their customer, so we can then cross-connect them really quickly and easily," Yeo said of the advantage. "We also use Equinix's Marketplace, which is an online directory, to see who else is already in that facility and how you can work together. We leverage that ecosystem."

Virtualisation

State-of-the-art Melbourne datacentre will house NAB's new private cloud and is certified to the international Tier 3 standard. Image: NAB

Just as in the ideal datacentre of the 21st century, both NAB and OrionVM make heavy use of virtualisation. For NAB, virtualisation is so ingrained that the bank's strategy is one of "virtual by default".

"Every new initiative that comes along needs to be virtualised and put onto a virtual platform, unless for some reason the application or vendor or software can't support it," NAB's Palmer said. "Where something can't be virtualised, it will have the most minimal footprint we can have on blades. For our server and storage platforms, everything is virtual by default."

As Palmer explained, the bank has taken a "pattern-based approach" — pre-defined configurations of hardware, which are then applied to a given workload, such as a Windows web server, Linux application server, or Unix-based Oracle database server. This avoids the clutter caused by separate business units engaging their own architects and designers to build custom solutions and application deployments.

"Based on the business functionality they want to roll out, [business units have] ... a menu of predefined services, and that helps ensure that our underlying platform is as standardised as possible, and is as current as possible, and doesn't get out of date," he said. "It's fewer moving parts, basically, and it allows you to get a higher level of density and consolidation inside your datacentre."

Unsurprisingly for a cloud provider, OrionVM is about 90 percent virtualised. "We have used high-density infrastructure to go through and chop up a lot of these big physical servers into the smaller, complex, and scalable environments that our customers need," Yeo explained. "We use Xen as a hypervisor, and we manage our own distribution of those apps and management infrastructure.

"Orion's secret source is our distributed storage system, which we have built in-house, and that allows us to deliver enterprise-class tier-zero, tier-one, and tier-two storage to our customers without the high price point of buying off-the-shelf technology."

Servers, storage, networking

At NAB, Palmer said that the bank acknowledges the role for blade and micro servers, SSDs, and advanced networking. However, given its outsourced datacentre, infrastructure, and networking strategy, NAB has largely handed the question of the use of these technologies over to its partners.

Data halls, showing the power train and yellow fibre cable ducts. Image: NAB

"While we own the strategy and overall architecture, we don't mandate a particular type of underlying server," Palmer explained. "We do have in our operational and service-level agreements requirements about power reduction and so on — so it is in both parties' interest to have the smallest footprint possible — but we don't mandate a particular flavour of server.

"Having said that, in being virtual by default, where something can't be virtual, it will sit on a blade server inside a chassis. With blade servers, you get the highest level of density, reduced electricity consumption, air conditioning, and floor space requirements."

On the storage front, Palmer said that the bank is yet to move to solid state, but will likely adopt it once SSDs become more cost effective. On the networking side, which is managed by Telstra, the bank has made few technology stipulations, other than that the network remain secure, easy to manage, and in line with industry direction and developments.

As a cloud provider, OrionVM has taken a more active interest in its servers, storage, and networking, Yeo said. Rather than relying on a mix of blade and micro servers, the company has opted to create a homogeneous infrastructure based on spec'd-out Dell and Supermicro servers with onboard storage, connected with high-speed InfiniBand to create one big compute and storage pool.

"What we have done is effectively created an infrastructure which runs just as well, if not better in value, than a lot of the enterprise storage solutions and virtualisation solutions — but at a price point which allows us to offer a significant margin at a wholesale level so that our business model works."

Yeo also said that in going for a distributed platform, OrionVM has reduced a lot of the possible single points of failure in its cloud. "With blades and SANs, one of the key risks is that you are putting a lot of your eggs in one basket," he said. "If you lose one chassis, you lose all the blades in it. If you have independent servers, it is a lot harder to take out a cluster in a single go."

That's not to say that he doesn't see the value of blade servers, or anticipates that in the future, datacentres won't make use of them. "If you are using a large-scale SAN array like a NetApp or EMC, then blade servers are the best combination," he said. "You take a blade server, a high-throughput switching fabric, and then a SAN, and you gain a very automatable and efficient virtualisation platform."

As for micro servers, Yeo said they have their place in the datacentre of the 21st century — primarily as a cheap way to host a website, or as a dedicated alternative to virtualisation if there is a requirement to not use a virtual machine, such as for security reasons or where guaranteed throughput is required.

"The number one reason [for micro servers] would be down to silly things, like needing a USB dongle for a piece of software you are running. On a highly available VM farm, the VM you need could be on any machine, so doing things like passing through a USB token can be quite difficult. So, if there is a specific hardware requirement, then micro servers can be good for that."

On the storage front, Yeo sees an increasing use for SSDs — particularly as the price per GB falls, and as requirements for storage with very high input/output operations per second (IOPS) grow. The company has used SSDs for about a year and a half.

"In enterprise, a lot of the large-scale SANs now have SSD caching to allow you to accelerate storage in general," he said. "A lot of servers now come with SSDs. That has come about as pricing for solid-state storage has dropped five or six times in the last three or so years.

"Primarily, we use SSD very heavily in our hybrid storage pool. It delivers a lot of our storage performance through using SSD storage caching," he said. "We also use SSD for a pure flash tier we are about to push out — a tier-zero guaranteed I/O for VMs on our platform. If you need 10 or 20,000 IOPS for your database, then we can deliver that to tier zero."

On the networking front, Yeo said that SDN is one of the key components for OrionVM's virtualised platform. "We use a lot of common SDN protocols to be able to do a lot of environment isolation so that each environment has a completely segregated, logical network, and then extend down into a converged physical network," he said. "That is the core reasoning behind SDN."
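Yeo does not name the specific protocols, but VXLAN-style overlays are a common way to give each environment a segregated logical network over one converged physical fabric. The sketch below is a generic illustration of that idea, not OrionVM's implementation.

```python
# Generic sketch of per-environment isolation over a shared fabric,
# using VXLAN-style virtual network identifiers (VNIs). The article
# does not name OrionVM's protocols; VXLAN is one common choice.
from itertools import count

class OverlayAllocator:
    def __init__(self, first_vni: int = 5000):
        self._vnis = count(first_vni)          # VXLAN VNIs are 24-bit
        self._env_to_vni: dict[str, int] = {}

    def network_for(self, environment: str) -> int:
        """Each environment gets its own logical network; traffic
        carrying different VNIs never mixes, even though it rides
        the same converged physical links."""
        if environment not in self._env_to_vni:
            self._env_to_vni[environment] = next(self._vnis)
        return self._env_to_vni[environment]

alloc = OverlayAllocator()
assert alloc.network_for("tenant-a") != alloc.network_for("tenant-b")
```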

However, Yeo believes that the use of SDN in datacentres is likely to remain niche, or at least confined to service providers rather than enterprises, because few businesses need the ability to rapidly change their network architecture.

"With software-defined networking in datacentres, you will find most of it is in public or private cloud solutions providers or service providers," he said. "In the telco space, MPLS networks and automatable end-to-end trunks have already been [doing what SDN does] for 10 or 15 years, if not more. It is just a different name for achieving the same goal."

Management tools and people

In considering new datacentre management tools and staff retraining, NAB's Palmer said that the bank anticipates no major changes in either its people or the management tools it uses. Again, this is largely due to the heavy use of technology partners IBM and Digital Realty, and the use of a third party for facilities management.

However, at OrionVM, Yeo said that the company has placed a lot of emphasis on these issues, opting to craft its own set of management and orchestration tools based on industry-standard methodologies, rather than using those available off the shelf.

OrionVM's Sheng Yeo. Image: OrionVM

"We have done it for efficiency," he explained. "We have had to do a lot of optimisation to allow us to place virtual machines as close to the storage to reduce cross-bisectional bandwidth in the datacentre and things like that. None of that configurability is available in the off-the-shelf solutions.

"On the other hand, we have given up a lot of the features in the off-the-shelf solutions, as they take a lot of effort to build. We are missing out on integration and compatibility with other pieces of software — we have to write the integration ourselves. But most of what we have to be able to do is already baked in."

While the company is currently assessing predictive modelling and forecasting tools to allow it to create even higher density with its virtual machines, Yeo said that these tools are likely to be most often found in industrial-scale datacentres, such as those of Amazon or Google, rather than among your average enterprise.

"With the likes of Amazon in their own facilities, they can take advantage of the fact that they can move all the virtual machines in racks 10 to 20 and move them to racks one to nine, and then turn off those unused racks to save on electricity costs," he said. "We have traded off that flexibility to reduce our power bill in partnering with Equinix ... but we gain the ability to roll things out a lot faster."

As far as retraining staff to handle modern datacentres, Yeo said that OrionVM has only had to worry about training staff on the company's own management tools and virtualised platform. That's in part due to the wholesale cloud business model that the company has, and its use of channel partners. In taking this approach, OrionVM can focus on its IT infrastructure and virtual machines, while partners handle the complexity of applications running on those VMs.

"We deliver the infrastructure, and the partners deliver the integration and management of the deployment services for customers," he explained. "Otherwise, we would need expertise in almost every single software solution and manage that for the customers."

Energy, cooling, green

In creating its datacentre of the 21st century, NAB's Palmer said that the bank has put a lot of consideration into energy efficiency, cooling, and environmental concerns. Central to this is the decision to utilise free air cooling.

Louvres on NAB's high-volume air-conditioning units open to release hot air and inhale Melbourne's cool breezes. Image: NAB

This approach takes external air, runs it through a particulate filter, and directs it at the IT equipment, rather than using water-based cooling or recooling and recirculating hot datacentre air. Palmer said that this is far more energy efficient: conditioned air is only used to supplement outside air when the external temperature is 23 degrees Celsius or above, and fully air-conditioned air is used only when it is 30 degrees Celsius or above.
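The thresholds Palmer describes amount to a simple three-mode control policy, sketched below. This is an illustration of the stated temperature bands, not NAB's actual building-management logic.

```python
# The three temperature bands Palmer describes, as a control policy.
# Illustrative only, not NAB's building-management logic.
def cooling_mode(outside_temp_c: float) -> str:
    if outside_temp_c < 23.0:
        return "free-air"   # filtered outside air only
    if outside_temp_c < 30.0:
        return "mixed"      # outside air supplemented by conditioned air
    return "full-ac"        # fully air-conditioned supply
```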

Palmer said that the bank's new datacentre facility has a guaranteed power usage effectiveness (PUE) rating of 1.3 or less. The datacentre that the new facility replaces has a rating of 2.1, while many of the bank's older facilities average 2.5.
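PUE is total facility power divided by IT equipment power, so 1.0 is the theoretical ideal. The quick calculation below shows what the quoted ratings mean in overhead terms.

```python
# PUE = total facility power / IT equipment power; 1.0 is the ideal.
# The ratings below are the figures quoted in the article.
def overhead_kw(pue: float, it_load_kw: float) -> float:
    """Power spent on cooling, conversion losses, etc., for a given IT load."""
    return (pue - 1.0) * it_load_kw

print(overhead_kw(1.3, 1000))  # new facility: 300 kW per 1,000 kW of IT
print(overhead_kw(2.5, 1000))  # older facilities: 1,500 kW, five times more
```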

"That is very much the stock-standard way datacentres are built these days," he said. "There is no point in cooling hot air if you can just take outside air and put it through a particulate filter. That absolutely minimises your energy consumption."

In addition to free air cooling, the bank has also made heavy use of high-density blade servers and efficient layouts, as well as computational fluid dynamics (CFD).

"We have designed the floor plan using CFD modelling to understand exactly where the circuitry will sit, how many amps we are consuming in every cabinet, and only directing cool air to where it is exactly needed," Palmer said.

"This has enabled us to get high levels of density and greatly reduce our levels of power consumption. It means that for approximately nine months of the year, you can take outside air, without the need for air conditioning."

OrionVM's Yeo said that energy efficiency has also been a consideration, but that the company has largely left it up to Equinix to tick these boxes by giving the provider specific SLAs and PUE targets.

"The obvious by-product of greater efficiency is that [the datacentre owner] should pass on a lower price point," he explained. "From that perspective, we care about what Equinix is doing.

"But the facilities we are in are built to best practice or better. A lot of that is because they have the scale to focus on that. Their scale is such that 5 or 6 percent efficiency matters as it goes straight to their bottom line."

The future

Looking ahead to datacentre evolution over the next few years, OrionVM's Yeo said that the major trend will be an ever-greater push for enterprises to outsource their datacentres, due to the lower total cost of ownership (TCO), higher quality, and greater scale that dedicated providers can offer. Another driver will be compliance.

"Compliance is one of the things business are looking for, so moving into an outsourced facility is a way to tick that box," he said. "The cost of labour in this country is also going up year on year, so the more you can centralise your infrastructure in the one place, the fewer people you need to watch over your infrastructure, so the cheaper the management of it will get. So, in the long run, it is cheaper to outsource it."

Similarly, NAB's Palmer said that he expects the major trend to be toward outsourcing, but, in particular, to larger providers for the scale and lower cost they offer.

"Consuming services from third-party providers is also the future, as is consuming cloud services," he said. "Where we have something which is core customer data and critical, we will keep it inside our fortress datacentres. But where we have services which are cost effective and meet certain security requirements, we will consume those from third parties via cloud providers. You will see that more and more in Australia and around the world."
