Extreme makeover, legacy technology edition

Summary: If you were to design an IT infrastructure from scratch would it look like anything you have today? 'No freaking way,' says a leading virtualization proponent.


"If you were to design IT from scratch, if you walked in with a clean sheet of paper, you could design anything you want, would you design what we’ve got now?" That's the question asked by Tech Field Day organizer Stephen Foskett in a SiliconAngle interview at the recent Dell World 2012.

He answers his own question: "And the answer is 'no freaking way.' No way in the world would I make a bunch of giant PCs attached to a bunch of fake disks over a specialized network. None of this makes sense."

Many IT executives and professionals don't have the option to redesign their systems from scratch -- that's why virtualization and converged infrastructure are such game changers, Foskett continues.

Currently, within most enterprises, there's a push and pull between conventional applications designed for a conventional Windows and Linux fat-PC and server world and the new, emerging cloud world. "That’s the cool thing about virtualization," he explains. "It lets you transition from a real backward concept -- which is the Microsoft server concept -- into the future, which has standardized layers and standardized infrastructure."

Foskett points to new, smaller vendors emerging on the scene -- such as Nutanix, Scale Computing, and SimpliVity -- which offer ways to bring legacy applications into the cloud realm. But the shift to virtualization and converged infrastructure will be gradual and take some time, he cautions. "What they’re doing is basically asking that core question, which is: 'If we were to do this again, how would we do it? How would we do it differently, but yet maintaining backward compatibility?'"

Virtualization and converged infrastructure help bring together the old world of on-premises legacy applications and the new world of cloud applications, Foskett points out. Today, backward compatibility is an issue with cloud computing, he continues. "With the cloud, there’s no compatibility," he says. "And we're still going to have to have the same kind of applications running for the next 10-20 years. It's just the new applications that are going to run in the cloud."

Scaling is another issue that's not going to go away anytime soon, he adds. "The hardest, number-one challenge technically is scaling," Foskett says. "Getting anything -- storage, network, servers, whatever -- to get bigger and smaller on demand. The coolest thing that the cloud does is scale, to tremendous levels. If you're going to have a converged infrastructure, it has to scale, and frankly, that’s a huge technical challenge."

(Thumbnail photo: Stephen Foskett, via Tech Field Day site.)

Topics: Virtualization, Cloud, Data Centers

Reader comments
  • very interesting.

    Ram U
  • Apart from the relentless DELL advertising comments ...

    1. I hope these new companies do produce radical designs to topple AMZN, MSFT, VMWare, ... because the incumbents will try to keep all the money. The DELL-SUSE alliance is a big opportunity to do that also.

    2. I don't trust DELL either: they will try to sell their special boxes currently only available to the biggest cloud players. We don't want them now and we definitely don't want them in a new architecture. I looked briefly at the SimpliVity box: it looks like the typical data centre server crap. We don't want expensive enterprise crap - we want the Google cheap commodity network architecture with built-in failover. Foskett castigated IBM and HP for withdrawing from the consumer PC market. Listen up: we want smaller nodes running on even cheaper hardware!

    3. What is this scalability problem? Most consumers and most business users do diddly-squat with their computer's power. There are some monolithic applications, so distribute them over a network. Foskett sympathised with designs that could not go from 50 to 200 and back down to 8 (or some such). How is that a problem? If the cloud is large and shares resources dynamically ... then the random requests from a large number of sources will balance out to a predictable load (the Central Limit Theorem). My favourite example is the comparison between RAID, AMZN and Symform data storage. There should be no 'small' clouds ... everyone uses a little part of a 'big' cloud.

    4. I don't like the word 'converged' - it really means 'vendor lock-in'. I prefer, indeed will only tolerate, 'interoperable', 'efficient' and 'cheap'. There is a precedent for this ... it's called THE INTERNET.
    Yes ... get that compatibility back from the incumbents!

    5. I recall a programme on the inventor Barnes Wallis. He looked at the trends in aircraft design and shook his head at the rush towards large, heavy, fixed-wing warplanes. He provided a model of a small, lightweight, flexible-wing craft which soared and bounced in the wind like a bird. We need lots and lots of these little birds ... and only a very few pterodactyls!
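The Central Limit Theorem argument in comment 3 -- that pooling many independent, bursty users makes the aggregate load predictable -- can be sketched with a quick simulation. This is a minimal illustration with made-up uniform per-client demand figures (not data from the article); the point is only that relative variability of the total shrinks roughly as 1/sqrt(n) as sources are added:

```python
import random
import statistics

def aggregate_load_samples(n_sources, n_samples=2000, seed=42):
    """Simulate total load from n_sources independent bursty clients.

    Each client's demand per interval is drawn uniformly from 0..100
    (illustrative units only). Returns n_samples totals.
    """
    rng = random.Random(seed)
    return [
        sum(rng.uniform(0, 100) for _ in range(n_sources))
        for _ in range(n_samples)
    ]

if __name__ == "__main__":
    for n in (1, 10, 100, 1000):
        totals = aggregate_load_samples(n)
        mean = statistics.fmean(totals)
        stdev = statistics.stdev(totals)
        # Coefficient of variation (stdev/mean) falls roughly as 1/sqrt(n),
        # so a big shared cloud sees a much smoother demand curve than
        # any single tenant does.
        print(f"{n:5d} sources: mean={mean:10.1f}  stdev/mean={stdev / mean:.3f}")
```

Running it shows the coefficient of variation dropping by about a factor of ten for every hundredfold increase in sources -- which is the commenter's case for one big shared cloud over many small private ones.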