We've only just begun to embrace the potential of the cloud. As the so-called Internet of Things takes hold, cloud computing services will need to acquire a new depth and breadth to handle petabytes of data, complex and demanding applications, and millions of users. New, evolving architectures are needed.
The National Science Foundation (NSF) wants to promote a new generation of applications, including real-time and safety-critical applications such as those used in medical devices, power grids, and transportation systems.
These are among the reasons NSF recently announced two $10 million projects to create cloud computing testbeds--to be called "Chameleon" and "CloudLab"--that will enable researchers to develop and experiment with, as they put it, "novel cloud architectures and pursue new, architecturally-enabled applications" of cloud computing. Ultimately, the goal of the NSFCloud program and the two new projects is to advance the field of cloud computing broadly, its promoters said.
Chameleon is a large-scale, reconfigurable experimental environment for cloud research, co-located at the University of Chicago and The University of Texas at Austin. Chameleon will consist of 650 cloud nodes with 5 petabytes of storage. The system will use commodity processors along with a variety of network interconnects and storage devices. Researchers can mix and match hardware, software and networking components and test their performance.
Researchers will be able to configure slices of Chameleon as custom clouds using pre-defined or custom software to test different cloud architectures on a range of problems, from machine learning and adaptive operating systems to climate simulations and flood prediction. The testbed will even allow "bare-metal access"--an alternative to the virtualization technologies currently used to share cloud hardware, allowing for experimentation with new virtualization technologies that could improve reliability, security and performance.
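To make the idea of "slices" concrete, here is a minimal sketch of how a researcher might describe two side-by-side experiments on a shared testbed--one virtualized, one bare-metal. All names here (`SliceSpec`, `provision`, the image and network labels) are hypothetical illustrations, not Chameleon's actual interface.

```python
from dataclasses import dataclass, asdict

@dataclass
class SliceSpec:
    """Describes one reconfigurable slice of a shared testbed (hypothetical)."""
    name: str
    node_count: int
    image: str        # software stack to deploy on each node
    bare_metal: bool  # True = no virtualization layer between experiment and hardware
    network: str      # interconnect to attach the nodes to

def provision(spec: SliceSpec) -> dict:
    """Pretend-provision a slice; returns the request that would be submitted."""
    if spec.node_count < 1:
        raise ValueError("a slice needs at least one node")
    return {"request": "create_slice", **asdict(spec)}

# Two experiments sharing one testbed: a virtualized machine-learning cloud,
# and a small bare-metal slice for testing a new virtualization technology.
ml_cloud = provision(SliceSpec("ml-exp", 64, "ubuntu-openstack", False, "10GbE"))
hv_test  = provision(SliceSpec("hv-exp", 4, "custom-hypervisor", True, "InfiniBand"))
```

The point of the sketch is the contrast in the `bare_metal` flag: with virtualization, many slices time-share the same machines; with bare-metal access, an experiment owns the hardware outright, which is what makes testing a new hypervisor or security mechanism possible.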
"Like its namesake, the Chameleon testbed will be able to adapt itself to a wide range of experimental needs, from bare-metal reconfiguration to support for ready-made clouds," said Kate Keahey, a scientist at the Computation Institute at the University of Chicago and principal investigator for Chameleon.
"CloudLab," a large-scale distributed infrastructure based at the University of Utah, Clemson University and the University of Wisconsin, is designed to enable researchers to construct many different types of clouds. Each site will have unique hardware, architecture and storage features, and will connect to the others via 100 gigabit-per-second connections on Internet2's advanced platform, supporting OpenFlow (an open standard that enables researchers to run experimental protocols in campus networks) and other software-defined networking technologies.
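The core idea behind OpenFlow and software-defined networking is that a separate controller installs "match-action" rules in switches, which then forward packets by matching header fields against those rules. The toy flow table below illustrates that mechanism; the field names and actions are simplified stand-ins, not real OpenFlow wire-protocol messages.

```python
# A toy flow table: an ordered list of (match, action) pairs, illustrating
# the match-action model that OpenFlow uses. Simplified for exposition.
flow_table = []

def install_rule(match, action):
    """Controller side: push a forwarding rule into the switch's flow table."""
    flow_table.append((match, action))

def forward(packet):
    """Switch side: apply the first rule whose fields all match the packet."""
    for match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "send_to_controller"  # table miss: punt the packet to the controller

install_rule({"dst_port": 80}, "output:2")    # send web traffic out port 2
install_rule({"src_ip": "10.0.0.5"}, "drop")  # block traffic from one host

print(forward({"dst_port": 80}))                        # -> output:2
print(forward({"src_ip": "10.0.0.5", "dst_port": 22}))  # -> drop
```

Because the rules live in software and are pushed from a controller, researchers can change how a network forwards traffic without touching the switches themselves--which is what lets a testbed like CloudLab run experimental protocols over shared infrastructure.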
In total, CloudLab will provide approximately 15,000 processing cores and in excess of 1 petabyte of storage at its three data centers. Each center will comprise different hardware, facilitating additional experimentation. To that end, the team is partnering with three vendors: HP, Cisco and Dell. Like Chameleon, CloudLab will also feature bare-metal access.
"Today's clouds are designed with a specific set of technologies 'baked in', meaning some kinds of applications work well in the cloud, and some don't," said Robert Ricci, a research assistant professor of computer science at the University of Utah and principal investigator of CloudLab. "CloudLab will help researchers develop clouds that enable new applications with direct benefit to the public in areas of national priority such as real-time disaster response or the security of private data like medical records."