
Part 1: Guide to IT consolidation

Written by Stephen Withers, Contributor

Page III: When is the right time to consolidate your IT resources . . . or is there ever a good time?

Changes, availability and service levels must all be managed more carefully in a consolidated environment, and a change in the technical culture is needed. "The tolerance for error is dramatically reduced," he says. Disciplines and approaches with roots in mainframe culture, such as ITIL, help.

Consolidation can be expensive. One of Favetti's clients spent around $1 million on consolidated server hardware for a cluster running SQL Server for a mission-critical Internet application. Centralisation also requires bandwidth and extra communications equipment such as switches and routers. "You're always going to have tradeoffs," says Favetti.

Workload management tools may help get the best out of consolidated hardware, but there's still the trade-off between upfront cost and anticipated savings. A $500,000 investment versus a possible $600,000 saving over four years might not be enough, says McIsaac.
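
As a back-of-the-envelope illustration of McIsaac's point: once future savings are discounted, a headline saving that exceeds the upfront cost can still lose money. The sketch below assumes the $600,000 accrues evenly at $150,000 a year over four years and a 10 percent discount rate; both assumptions are illustrative, not McIsaac's figures.

```python
# Net present value of a consolidation project: upfront cost paid now,
# savings spread over future years and discounted back to today.
# All figures are illustrative assumptions, not from the article.

def npv(rate, upfront_cost, annual_savings):
    """NPV = -cost + each year's saving discounted at `rate`."""
    return -upfront_cost + sum(
        saving / (1 + rate) ** year
        for year, saving in enumerate(annual_savings, start=1)
    )

result = npv(rate=0.10, upfront_cost=500_000, annual_savings=[150_000] * 4)
print(f"NPV: ${result:,.0f}")  # about -$24,500: the "saving" loses money
```

At these assumed figures the project destroys value even though the nominal saving exceeds the cost, which is exactly the trade-off McIsaac describes.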

One problem with using virtual servers is that the effect on licensing costs is not obvious. If software is licensed per-processor, there will be a saving, but keep in mind that you may need one copy of the operating system (and possibly other software) for each virtual server plus another for the physical server.
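
To see how the two effects interact, here is a hedged sketch of the licence arithmetic. The per-processor database licence, the per-instance operating system licence and all prices are illustrative assumptions, not vendor figures.

```python
# Illustrative licence arithmetic for server virtualisation.
# Prices and licensing models are assumptions, not vendor terms.

PER_CPU_DB_LICENCE = 20_000   # database licensed per physical processor
OS_LICENCE = 1_000            # operating system licensed per instance

# Before: 4 physical servers, 2 CPUs each, one OS copy each.
before = 4 * 2 * PER_CPU_DB_LICENCE + 4 * OS_LICENCE

# After: one physical 4-CPU host running 4 virtual servers.
# Per-processor DB licences drop (4 CPUs instead of 8), but each
# virtual server still needs its own OS copy, plus one for the host.
after = 4 * PER_CPU_DB_LICENCE + (4 + 1) * OS_LICENCE

print(f"before: ${before:,}, after: ${after:,}")  # $164,000 vs $85,000
```

Under these assumptions the per-processor saving outweighs the extra operating system copies, but the balance shifts with different licensing models, so the counting has to be done case by case.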

Simon Elisha, senior technical consultant at Veritas, says application consolidation is particularly appropriate for databases. For example, four older servers each running an Oracle database could readily be moved onto one new box. Mixed loads are more contentious, and he counsels against attempting to consolidate (for example) a 24x7 application with another that's only needed for two hours a day, as their requirements are too different.

Consolidation also has a place in high availability situations, he suggests. Instead of the traditional "Noah's Ark model" (servers lined up two by two in active/passive pairs), consolidation can enable a switch to n+1 clustering, which is cheaper and easier to manage. Some of Veritas' large US customers have eliminated hundreds of servers in this way, Elisha says.
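
The server-count arithmetic behind Elisha's point is straightforward. The sketch below assumes ten workloads, each previously protected by its own passive standby; the workload count is illustrative.

```python
# Server counts: active/passive pairs ("Noah's Ark") vs n+1 clustering.
# The workload count is an illustrative assumption.

workloads = 10

noahs_ark = workloads * 2   # one idle standby per active server
n_plus_1 = workloads + 1    # one shared spare covers any single failure

print(f"active/passive: {noahs_ark} servers, n+1: {n_plus_1} servers")
# active/passive: 20 servers, n+1: 11 servers -- 9 fewer to buy and manage
```

The trade-off is that a single shared spare only protects against one failure at a time, which is why the approach suits consolidated, uniformly configured workloads.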

A lack of project leadership can be a particular problem for consolidation, says Hanrahan. Turf wars can break out if you try to consolidate currently decentralised functions, so good leadership and a good business case are important. He strongly recommends that IT should report the measured benefits of consolidation to the business so that the actual savings are visible. Good results can help generate support for subsequent projects.

"Overconsolidation" is also a risk. Taking a staged approach avoids highly visible problems or failures. Hanrahan recommends separating a general consolidation project into a number of smaller, stand-alone sub-projects. Start by identifying the hard cost savings, administrative savings and productivity benefits, and then factoring in the cost of the proposed changes to calculate the three-year net present value of the project. The sub-projects with the highest value get priority; the lowest can be delayed or even dropped completely.

"Many consolidation projects fail because they are too large," says Leworthy, so it makes sense to identify individual projects and tackle one at a time. Considerations include changes to bandwidth requirements, high availability (if 50 servers are being replaced with one or two, some sort of high availability or disaster recovery will be needed).

"Some organisations have an 'if it ain't broke, don't fix it' mentality, but consolidation would allow them to move to a new level of productivity and efficiency."

Careful planning is needed for consolidation projects, but Microsoft's partners are successfully doing such work for their customers, he says. Not all servers are equal candidates for consolidation: you might consolidate 30 percent of file servers, for example, but only 10 percent of Web servers. Microsoft's partners have solid consolidation assessment services that can identify the prime targets, he says.

File servers and like-load servers are the low-hanging fruit for consolidation, he suggests, and efforts are now moving to mixed loads. "We're seeing really high levels of success in our consolidation projects," says Leworthy.

Hanrahan recommends analysing services for their costs and benefits as part of the consolidation process. This provides a basis for reporting against your budget, or for determining chargebacks. It is important to apply metrics rigorously: full lifecycle costs should be included, along with availability, downtime and responsiveness (both the ability to handle requests for new services and the responsiveness of existing services). If this isn't done, the results can be misleading.
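
One way to apply such metrics uniformly is to capture the same record for every service. The sketch below is one possible shape for such a record, with illustrative fields and figures; it is not a scheme Hanrahan prescribes.

```python
# Per-service metrics for consolidation reporting or chargeback.
# Field choices and figures are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ServiceMetrics:
    name: str
    lifecycle_cost: float      # hardware + licences + staff over the period
    downtime_hours: float      # unplanned downtime in the reporting period
    period_hours: float = 24 * 365

    @property
    def availability(self) -> float:
        return 1 - self.downtime_hours / self.period_hours

email = ServiceMetrics("email", lifecycle_cost=250_000, downtime_hours=8.76)
print(f"{email.name}: {email.availability:.3%} available")  # 99.900% available
```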

"People have a belief that the cost of managing servers is proportional to the number of boxes," McIsaac says, but it's really about the complexity, "the degree of diversity--how uniform are your systems?"

Rationalisation to a small number of hardware configurations (perhaps a larger and a smaller model), one operating system and one database (and maybe not more than two versions of any piece of software) should precede consolidation, he advises, and then "stick with Exchange consolidation, and file and print consolidation."

The number of applications, operating systems and databases determines complexity, regardless of the number of machines, he says. "Be wary of assuming that less machines means less work." He points to Google as an example, where each systems administrator manages in the order of 1000 servers. This is possible because they are all similarly configured.
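
A toy example makes the point: what matters is the number of distinct configurations, not the number of boxes. The fleet below is invented for illustration.

```python
# Management effort tracks diversity, not box count: a toy illustration.
# The fleet composition is an invented example.

servers = ["linux-web"] * 600 + ["linux-db"] * 350 + ["windows-file"] * 50

distinct_configs = len(set(servers))
print(f"{len(servers)} servers, {distinct_configs} configurations")
# 1000 servers, 3 configurations -- a uniform fleet scales per administrator
```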

Reducing the number of systems tends to drive down the number of people needed to manage them, says Favetti, but the skills needed by those people may increase, for example to cope with the management of virtual servers. Another aspect is that consolidation can increase the amount of damage caused by one mistake, so it becomes more important that staff are well trained and knowledgeable.

Muller sees hardware standardisation as a useful side effect of consolidation, allowing consolidation of the skills and competencies required by IT personnel. People need to know how all the "Lego blocks" fit together, he says, but they should focus on the functions performed by each block, not the inner workings.

Eighteen months ago, META forecast that SAN storage was too expensive for some purposes, and its customers are now reporting that this is the case. SAN is not about cost savings, says McIsaac, but about providing new capability such as disaster recovery, high availability, and rapid provisioning. With server consolidation, it's "probably the same story, but it doesn't play out so well" in terms of provisioning a new partition, he says, and it isn't as easy, simple or obvious as vendors suggest. If complex analysis is required to make the case for a project, there's a high risk that the analysis is wrong.

The second part of our Guide to IT consolidation special report can be found here.


