
Is your data centre up to scratch?

With the combination of self-management, virtualisation, and other technologies, vendors are promising an end to server administration drudgery. But is the vision really possible?
Written by David Braue, Contributor

Data centres' reality check

In theory, virtualisation represents an important step away from the inflexibility of homogeneous data centres and the practice of overprovisioning in heterogeneous environments. By adapting resources to match changing application needs, companies can make better use of their technology investments and improve data centre reliability.

As is so often the case, the reality is quite different. For most customers, such high-end capabilities remain painfully elusive, even for those that have invested in SANs, blade servers, or other infrastructure that is readily virtualised.

Their more immediate focus is to complete the demanding process of server consolidation, a useful precursor to full virtualisation of data centre resources. StorageTek's customer survey found that 71 percent of respondents were planning or considering consolidating their server and storage infrastructure into a data centre, suggesting that consolidation remains a work in progress for most companies.

That consolidation process can often be quite eye-opening, says Sun's MacDonald. "So many organisations really don't understand what they have in their data centres today," he points out. "You can't automate something if you don't know what you're automating."

This makes it early days for the automated enterprise management that N1, like competing paradigms, is designed to make possible. With customers still needing to make investment decisions based on potential business return, MacDonald concedes it is likely to be two years before N1 really starts to gain traction within the corporate data centre.

A far bigger impediment to the spread of autonomous monitoring is the issue of heterogeneity. Few companies have implemented a single technology platform across their entire data centre, which means that effective autonomy is going to require a multi-platform solution from the start. Each of these platforms is affiliated with a vendor that's trying to dominate the market with its own approach to grid and autonomous computing; put them together and, without some crucial interoperability standards, full virtualisation capabilities -- and the attendant access to heterogeneous resources that they enable -- will prove elusive.

"A lot of the disciplines that were pumped into us in the 1980s are still not being managed properly 20 years later."

--David Cowell,
Principal Consultant for Data Centre Services,
StorageTek Australia

Enter DCML (Data Center Markup Language), an emerging standard that promises to bridge the gaps between the myriad components of today's data centres. Built around a standard XML-based vocabulary, DCML code describes the components of a data centre and the policies governing it. It's being designed to enable data centre automation, utility computing, and system management solutions by providing a standard method for interchanging information.

Systems management, virtualisation, and other tools will use this information to recognise and automatically adjust for the variations between data centres. Yet while it's ambitious in scope, just what effect DCML will have remains to be seen: the 44-member organisation is due to issue the first draft of the standard by June, so it will be several years before a final standard is ratified and incorporated into products.
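
To make the idea concrete, here is a minimal sketch of how a management tool might read a hypothetical DCML-style inventory. The element names, attributes, and policies are invented for illustration only; they are not the actual DCML schema, which had not been published at the time of writing.

```python
# Illustrative sketch only: the element and attribute names below are
# invented for this example; they are not the real DCML schema.
import xml.etree.ElementTree as ET

sample = """
<datacentre name="sydney-primary">
  <server id="web-01" os="solaris" cpus="4" ram-gb="8">
    <policy name="patch-window">weekly</policy>
  </server>
  <server id="db-01" os="linux" cpus="8" ram-gb="16">
    <policy name="failover">db-02</policy>
  </server>
</datacentre>
"""

root = ET.fromstring(sample)

# A management or virtualisation tool could walk a description like this to
# discover what it is automating -- the "know what you have" problem
# MacDonald raises above.
for server in root.findall("server"):
    policies = {p.get("name"): p.text for p in server.findall("policy")}
    print(server.get("id"), server.get("os"), policies)
```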

Where autonomous software may prove beneficial is in the use of pseudo-intelligent monitoring agents to provide more meaningful views of enterprise network activities. This is particularly the case with regards to security solutions such as intrusion detection systems (IDSes), which have earned a bad reputation for throwing up alerts at a rate of knots and drowning security personnel in unusable information. The addition of increasingly popular correlation engines, which have gained currency in conventional network monitoring environments as a way of better prioritising network alarms, can reduce this type of information to a volume that's small enough to manage and respond to.
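
As a rough illustration of what such a correlation engine does, the sketch below collapses repeated alerts that share a source and signature into a single prioritised incident. The alert fields, severities, and ranking rules are assumptions made for the example, not any particular product's behaviour.

```python
# Minimal alert-correlation sketch: collapse repeated alerts sharing a source
# and signature into one prioritised incident. Field names, severities, and
# thresholds are illustrative assumptions, not a real IDS alert format.
from collections import defaultdict

raw_alerts = [
    {"src": "10.0.0.5", "sig": "port-scan", "severity": 2},
    {"src": "10.0.0.5", "sig": "port-scan", "severity": 2},
    {"src": "10.0.0.5", "sig": "port-scan", "severity": 2},
    {"src": "192.168.1.9", "sig": "sql-injection", "severity": 5},
]

# Group raw alerts by (source, signature) so a flood of identical events
# becomes a single entry with a count.
groups = defaultdict(list)
for alert in raw_alerts:
    groups[(alert["src"], alert["sig"])].append(alert)

# Rank the grouped incidents so the most severe, noisiest ones surface first,
# leaving operators a handful of incidents instead of thousands of raw alerts.
incidents = sorted(
    (
        {
            "src": src,
            "sig": sig,
            "count": len(alerts),
            "severity": max(a["severity"] for a in alerts),
        }
        for (src, sig), alerts in groups.items()
    ),
    key=lambda i: (i["severity"], i["count"]),
    reverse=True,
)

for incident in incidents:
    print(incident)
```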

StorageTek's Cowell flat out doubts that fully automated data centres can deliver the reliability they're supposed to. "Although automated management products have done backups and restores automatically in the past, a lot of customising and tuning has to be done," he says. "As much as vendors may say their systems are all singing and all dancing, the reality is that once you get to implementation it's different. Running a data centre is all about process -- for example, the process of change management -- and I can't ever see the day where you're going to be able to automate that."

At this pace, should customers really be buying into vendors' discussions about automation and the lights-out data centres they will enable? Not yet.

For now, you can put all the technology you want into your data centre but you're still going to require the expertise of real live humans to deal with exceptions and the ever-tricky issues intertwined with change management. In the near future at least, autonomous agents will become tools to help people, rather than replace them. And as long as the business continues to grow and change, the lights will almost always have to stay on inside the data centre. Otherwise, your people might trip over a cord.

Case study: Lights still on at MCT data centre
Automation may help with some aspects of data centre management, but Glen Noble believes humans are still critical in keeping things running smoothly.

He should know. As general manager of data and hosting with Macquarie Corporate Telecommunications (MCT), Noble heads a growing team that offers colocation, managed dedicated hosting, managed shared hosting, and customised data centre outsourcing to a broad array of corporate clients.

With loads of bandwidth coming into the data centre, redundant everything, and all sorts of management software keeping tabs on things, MCT's 24x7 data centre is fully wired. Yet while tools such as correlation engines are proving useful in sorting through the tens of millions of security alerts -- typically reducing the volume of alarms by more than 90 percent -- Noble says there's a limit to how much technology can do.

"The amount of automation we have now is fantastic," he says. "Management software suites have lots of element managers and correlation engines. On the other hand, I've got an army of engineers who have to patch servers very couple of weeks. With all these patches constantly coming out, you've got to look at them, analyse them, and make decisions as to whether you should install them. Security is about countermeasures and interpreting stuff. At the end of the day, it's extremely laborious and manual."

Noble gives the example of a hosted customer whose systems have become infected with one of the many e-mail viruses currently doing the rounds. Although management systems will quickly pick up on the infection, it ultimately takes a human to decide whether to cut that customer's access for the time it takes to resolve the issue.

Will MCT ever be able to turn the lights out on its data centre? Noble doesn't think so; in fact, he's continuing to expand his data centre monitoring team with new hires. And while he appreciates that autonomous computing technology continues to improve by leaps and bounds, he doubts it's ever going to be able to do all the work on its own.

"If you had a data centre that was standalone and not connected to the world, maybe [you could switch off the lights]," he says. "But this one is not an island and it's a fair bit of effort to manage it efficiently and securely. You need human intelligence to make interpretive decisions, and no amount of automation can take away decision making."

Executive summary: keep your lights on

Vendors like to talk about building automated, lights-out data centres. In the real world, however, it's going to be a long time, if ever, before you can flip the switch for real. Here are a few things to remember:

  • Virtualisation rules. In the past, data was tied to specific servers and operating systems. This is a major reason why data centres have been so hard to manage: things work differently on different systems. Virtual servers and storage let applications run at an abstraction layer divorced from the underlying hardware -- improving uptime and availability.
  • Consolidate before you automate. Data centres are rightfully becoming centralised places for storage of all corporate data and application servers. But don't jump the gun: make sure you've completed the often tedious process of server consolidation before giving management systems free rein with virtualisation tools. Otherwise, unconsolidated servers may be left out of the loop and lose their mission-critical value.
  • It's all about availability. Make sure your planning ensures that mission-critical business processes are supported by high-availability systems.
  • Systems can't run everything ... People do. In the end, good old-fashioned human intuition and abstract thinking -- two capabilities still lacking from artificial intelligence engines -- are key skills in ensuring that the data centre can change to meet every new challenge.
  • But they're great at grunt work. Use automation where it's appropriate -- like to sort out the wheat from the chaff of the millions of security alerts the average company will get every month from overcautious security systems.
  • Think smart. The biggest threats to business continuity these days aren't mean, nasty hackers so much as common things like virus-laden e-mails. Formulate policies for preventing them, then make sure your employees learn why it's important to follow them. Simple precautions now could spare you a data centre meltdown down the track.
  • Think heterogeneous. Vendors won't, but you need to if you're going to introduce some level of consistency across varying types of equipment. Look to standards like DCML to normalise the structure of your data centre in understandable terms.
  • People still know best. If you think automated tools will let you reduce your data centre head count, think again. You may be able to improve management of certain parts of the facility, but only people can effectively handle the change management that is part of everyday life within the data centre. Keep them around. Besides, machines aren't very interesting at meetings.

This article was first published in Technology & Business magazine.