How do you manage something that's constantly growing at an unprecedented rate, with no end in sight? That's the question many IT managers around Australia are currently asking themselves, as they size up their storage and data management strategy going into 2009. Unfortunately, there's no easy answer.
We went to the coalface and talked to three organisations about their storage and data management challenges, and came away with three different points of view.
Web hosting and domain specialist Melbourne IT has a lot of storage. In a recent interview with ZDNet.com.au, the company's chief IT architect Glenn Gore pegged the total at around half a petabyte (500 terabytes). An astounding 330 terabytes of that was purchased in one hit last year. But the problem just isn't going away.
"We are getting to the point where we now are actively looking at purchasing more storage because we have exhausted that purchase," says Gore.
And that's not the company's only problem. The nature of Melbourne IT's business means that the storage needs of its customers change at different periods of the year -- for example, financial institutions get busy at tax time. Storage virtualisation, including the use of dynamic performance tiering, has gone some way towards helping Gore's team deal with the demands of the business.
Coming from a company that last year bought 330 terabytes in one hit, Melbourne IT's next move may give some insight into the extent of the storage and datacentre issue hitting Australian organisations. The company went out to market for a long-term storage partner to help it keep ahead of the race.
Gore says the company initially ruled out Network Appliance for being too expensive for its specific requirements, eventually selecting IBM over EMC. "I think ... if you talk just about the raw storage capabilities, EMC storage is probably better than the IBM storage," he says. "But you have to take that holistic view of, 'how do all the components of my storage fit together?'"
The eventual solution includes a disk to disk to tape backup facility, with the aim being that most data recovery scenarios would see data recovered from a disk in the same facility. The tapes are encrypted and stored remotely by a third party, with Melbourne IT spending more than $10,000 a month on tapes.
Box Hill TAFE might be in the same state, but the datacentre and storage challenges outlined by IT services manager Chris Tayler are different from those of Melbourne IT's Gore.
The organisation recently awarded APC and Dell a contract to build a new state-of-the-art datacentre that will give it enough space to grow by about 80 per cent, including an initial 15 terabytes of storage and a new building. However, it's the devil in the detail that has given Tayler the proverbial headache.
The new datacentre will be the culmination of a year-long project to shift Box Hill TAFE's existing datacentre, which had outgrown the corridor where it was first established, to a new site. While the location was only over the road diagonally from the current IT setup, that diagonal is on one of the busiest intersections in Melbourne.
In short, Tayler knew that moving the datacentre was going to involve some significant challenges, but one he hadn't originally anticipated was having to negotiate with the local council to run fibre optic cables from one side of a major road to the other.
It eventually proved easier to send the fibre further up the road and back again in a U-shape, thus avoiding having to close both branches of the intersection. Even so, Tayler still wryly recalls hearing a travel announcement on the radio and realising that a traffic jam being discussed was due to the datacentre migration he'd been planning. "Digging up Whitehorse Road for fibre channel was hard work," he tells ZDNet.com.au during a recent interview.
Making the move had become inevitable, however. "We had the usual issues in an ageing datacentre: lack of available power in the area, lack of real meaningful physical security, lack of fire survivability," he says. A building program within the TAFE provided the opportunity to shift — "my datacentre was getting squeezed out physically," Tayler recalls — but determining the best approach was time-consuming.
"We looked at moving it to one of the outsourced datacentres down the road, but the cost of communications was the deciding factor there. We also looked at outsourcing some services, but one of the limiting factors is we have a high-definition television studio that we teach from and do internal resourcing from. After much argy-bargy we finally discovered a location we could live with and that practically the business could live with."
That process took close to a year, during which time Tayler also had to find the million dollars required to make the shift, though he had some leverage with management on that front: "It was holding up building projects."
The project was eventually sold on a six-year ROI plan, something that many business finance managers might resist. Tayler acknowledges that it's a lengthy investment, but says that's inevitable: "A datacentre is a specialist thing. Its ROI is longer, but its lifespan is longer."
Getting down to the nitty-gritty
Like Melbourne IT (and most large organisations in Australia), Box Hill TAFE has made extensive use of virtualisation technology to solve some of its management headaches, with about 40 per cent of the 160 servers in its new environment being virtualised.
However, that approach has definite limits. "Virtualisation is great, but the data farm is still growing sideways," Tayler says. "We also looked to provide redundant power pathing. Because we're only on one grid here, we couldn't do real redundancy."
Budget constraints restricted any further power protection strategies: "Yes, a generator would be lovely, but that's another $80,000 plus running costs, and I didn't have the change."
The relocation schedule called for a planned four-week switchover from the old datacentre to the new one. One complication in that process is that the TAFE campus runs for extended hours, six days a week.
"Out of hours is a real issue. We are running out of hours on the clock where we can switch stuff off," says Tayler. "But we are having a lot of scheduled outages on this project; it is unavoidable. We looked at doing the big bang and bringing everybody in on the weekend, but it was just too much and too limiting."
One benefit of making the change outside of improved server management has been the chance to rationalise network and telephony. "Box Hill never designed itself to have comms rooms in logical positions on each floor. We've managed to simplify our cabling and shorten a lot of cable runs. Of course, we've had the other side where we've lifted ceiling tiles and gone 'Oh dear'. In one case, we discovered a dump valve for the air conditioning where it wasn't supposed to be."
Another big issue is that other IT projects, such as migrating from Windows Server 2003 to 2008, have needed to be managed in the same timeframe. "The first challenge is the datacentre, though we do have simultaneous projects, because this is the real world," says Tayler.
Higher up the chain
Those storage and data management challenges exist in organisations of all sizes, and often have to be managed in parallel with other changes. However, sometimes they aren't noticed until higher-level application work is underway.
For example, the National Australia Bank is in the process of rolling out a major upgrade to its human capital management systems, involving the use of HR-specific data marts, according to its head of human capital management, Andrew Ross, who recently spoke on the matter at an Infohrm user conference in Queensland.
When Ross took on the role 18 months ago, there was a distinct lack of information integration. Getting a figure from the finance department on the number of full-time employees could take up to 12 months, rendering the numbers essentially useless; not the best solution when you're a bank with close to 39,000 employees.
Ross knew that getting systems to work together would be difficult in both technological and staffing terms. "You've got to have multiple lenses, and you've got to have the systems and the processes and the people to understand that," he told the conference.
The first stage was incorporating a full human capital management system with dashboards for measuring key metrics. That task is now largely complete, but the cultural change -- getting the numbers accepted by other parts of the business as valid -- may take much longer. "In a few weeks' time when we roll out the new dashboards, there's going to be questions," Ross said.
"People will question the validity of the data."
The second and bigger stage, which Ross anticipates will take three years, relates to a problem higher in the software/hardware stack that many organisations are facing as their data usage grows and grows.
The NAB is building a series of human resources data marts to allow further in-depth analysis that will draw on the centralised systems currently being built by the bank, in the process rationalising a number of data stores. "Multiple data stores -- what an ugly story that was and still is," Ross said. "We'll shut them down eventually though."
Three years is a long time to wait for a functioning system, but Ross is looking on the bright side: "At least we know we've got a road map, which is great."
Have you got a storage or data management story to tell? Drop us a line or post your experiences below this article.