
What would a next generation datacenter look like?

Written by Dan Kusnetzky, Contributor

It appears that my recent post, IBM announces the Z10: Is the mainframe still relevant?, touched on something that had an impact on quite a number of people. I received many messages from people who either thought I was on to something or thought I was obviously living near the beach in Florida rather than near the datacenters they knew. So I thought I'd start collecting a list of requirements for that next generation datacenter, based upon my warped views and some of the things these folks told me. There's a Talkback button over to the left; add your views if you'd like. No, no, don't hold back. We're all friends here.

Requirements for the next generation datacenter

  • Today's datacenter is a layer cake of technology. It would be nice if the next generation datacenter were constructed using systems based upon a standard architecture that could host encapsulated virtual systems supporting all of the important operating systems and their workloads. This, of course, would mean supporting virtual servers that looked like your typical industry standard system or a system based upon any of the important processor architectures (Alpha, PA-RISC, Power, SPARC and, of course, IBM's Z-series).
  • Orchestration software, sometimes thought of as "the operating system for the datacenter", that makes it possible for the organization to set policies and know that the datacenter would configure and reconfigure itself as necessary to meet those objectives. Products offered by quite a number of suppliers, including 3Tera, Cassatt, Novell, Virtual Iron, VMware and many others, are heading in that direction today.
  • The datacenter would automatically power down unneeded resources and wake them back up again as needed. Cassatt is pushing that button now.
  • The datacenter would automatically project capacity requirements based upon historical usage patterns and inform administrators that new blades, nodes or whatever must be acquired and installed well in advance of actual need.
  • The datacenter must be smart enough to support pre-production development and testing as well as production work. Software from suppliers such as Surgient is headed in that direction now.
  • Users should be able to access applications and data from wherever they are using whatever networked, intelligent device they choose.
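The capacity-projection item above can be sketched in a few lines. This is a purely illustrative example, not taken from any of the products mentioned: it fits a simple least-squares trend line to historical daily peak utilization and estimates how many days remain before a chosen capacity threshold would be crossed, which is the kind of signal an administrator would need to order new blades or nodes well in advance.

```python
# Hypothetical sketch of trend-based capacity projection.
# The function name, threshold, and data are illustrative assumptions.

def days_until_threshold(daily_peaks, threshold=0.85):
    """Fit a least-squares line to daily peak utilization (0.0-1.0)
    and return the projected number of days from today until the
    trend crosses `threshold`, or None if utilization is flat or falling."""
    n = len(daily_peaks)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_peaks) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, daily_peaks)) / denom
    if slope <= 0:
        return None  # no upward trend, so no projected exhaustion
    intercept = mean_y - slope * mean_x
    crossing_day = (threshold - intercept) / slope
    return max(0.0, crossing_day - (n - 1))

# Example: utilization creeping up about 1% per day from 60%
history = [0.60 + 0.01 * day for day in range(10)]
print(round(days_until_threshold(history), 1))  # roughly 16 days of headroom
```

A real orchestration product would use far richer models (seasonality, workload mix, per-resource limits), but even this toy version shows why the projection has to run continuously against historical data rather than waiting for an alarm.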

What do you think should be on this list?
