
GPT Group goes "all-in" on the cloud

GPT Group took an "all-in" approach to the cloud, decommissioning its datacentres and moving its entire infrastructure into the cloud within a year.
Written by Aimee Chanthadavong, Contributor

Real estate investment trust GPT Group took the risk of decommissioning its datacentres in a single move as part of its cloud migration project, which it started in September 2012 and completed a year later.

Unlike other companies that have been making gradual moves to the cloud, GPT Group cloud solutions architect Andrew Duggan explained, the company took an "all-in" approach, replacing its two datacentres, which operated in a primary/secondary configuration with 110 servers and 30 terabytes of data, with Amazon Web Services.

"The 'opportunity' — as we like to call it — was presented to IT to cut their opex budget by 20 percent, so that's a fairly significant cut. Couple that with kicking the cloud idea around, and working out what we wanted to do with it and where it was relevant," he said, speaking at the Gartner IT Infrastructure Operations and Data Centre Summit 2014 in Sydney.

"Looking then at the datacentre contract, which was due to expire in September 2013, it made sense to have a serious look at cloud and see whether cloud could be used as an alternative to the existing datacentres."

Given that it was a cost-driven project, Duggan said the cost model was used to track every migration movement and to ensure the team "was able to achieve the cost savings it needed", especially given there was a "looming" refresh of the whole enterprise SAN due in early 2014.

Duggan highlighted a number of "standout" features of AWS that made it the contender of choice for the project.

"In order to move to Amazon we needed the common building blocks that we had in our existing environments. We needed to have virtual machines and storage and what we found is in Amazon, all those same elements exist," he said.

"Some really standout items include the Amazon virtual private cloud. It is the key technology that enables us to run a corporate environment in the Amazon space. We can define our own private subnets, our own hierarchical three tiered security model, and basically implant that in the Amazon public cloud as if it were our own private space, and we're isolated from all the other customers.

"Amazon direct connect is another key one for us. If we were connecting to all of our core systems across the internet, we would haven't got the same low latency performance we require, and so direct connect gives us high speed, low latency connection into our virtual data centre, and that's another really key that makes it work for us."

One key benefit the company saw as part of the transformation came when it began automating its development and test workloads.

"We'd turn off at 6pm when all the developers went off-duty and we'd turn it back on before those guys turn up for work the next day. So that means we have those workloads turned off for 60 percent of the hours in a month, and so we saved 60 percent of the cost," he explained.

GPT Group was also able to eliminate unused resources that were often sitting dormant. Duggan said that as part of its primary/secondary datacentre configuration, its secondary datacentre was doing "nothing 100 percent of the time" and was "sitting there consuming heat and air conditioning", really not intended to be used.

"We replaced that with our infrastructure spread of 50/50 on Amazon's availability zone and we can now rapidly move those workloads between those availability zones using underlying snapshot technology. So we don't have any wasted resources or headroom that we don't actually need."

But the initial feeling about the move wasn't always a positive one, with Duggan admitting that the IT department "weren't very excited" about the idea of being able to "talk programmatically to the cloud".

"We are not programmers, we are infrastructure guys and we were at a loss as to why we would be that excited about it," he said.

"As the project developed we started to realise we could fairly rapidly use command line tools and scripting capabilities to drive our data centre and automate and run things that meant we had less people work to do."

While the initial target was to reduce costs by 20 percent, Duggan said the company achieved an infrastructure cost saving of just under 30 percent, as costs related to running servers, datacentres, and interconnects, along with the enterprise SAN refresh, were eliminated.

"We offloaded the commodity work. We aren't running around datacentres with cables and screwdrivers. Those guys can now focus on high value added work by fine tuning the service on how we can deliver it better, rather than working on underlying technology that doesn't add any value."

Duggan concluded that the "all-in" approach was the most effective, compared with other companies that have been taking gradual steps to move to the cloud.

"I think one of the reasons a lot of cloud projects stagnate is because if you want to save money out of cloud, you stop doing what you're doing now and you move things to cloud," he said.

"Most organisations out there are kicking around the idea of cloud by using Amazon, or some kind of cloud, and they're keeping all their data centres, all their infrastructures, and then they're going — it's not saving money, it's actually costing money because they're doubling up.

"In order for it to save money you have to eliminate the other infrastructure. From the point of view from GPT Group, compress that as quickly as possible and then start saving as early as you can, so your business case works; it made the numbers work.

"If we still had the datacentres right now, the board would be unhappy because we'd be chewing into the cost savings, so it had to be done fairly quickly."
