Intel outlines updated forecast, strategy for software-defined datacenters

Intel execs argue that today's infrastructure for networks, servers, and storage is strained. The processor giant reasserts it can fix the problem.
Written by Rachel King, Contributor

SAN FRANCISCO -- Intel introduced its updated strategy for tackling evolving datacenter issues in the enterprise IT space, much of which revolves around a very hot topic in enterprise IT right now: software-defined networking.

According to Diane Bryant, senior vice president and general manager of the datacenter and connected systems group at Intel, moving to centralized control at the datacenter level means transforming workloads from "static to dynamic" and processes from manual to automated across networks, servers, and storage.

"Now we're going into a whole new era where we look at IT as the service," declared Bryant, explaining that means IT is not just supporting the business, but it is the business itself.

Bryant continued that this is giving way to a new "virtuous cycle of computing," comprising connected machine-to-machine (M2M) devices that link to the datacenter, then to services (e.g. Amazon Web Services, Microsoft Office 365, Google Drive), and then back to more end users and devices.

Bryant cited a few examples of businesses tapping Intel for support here. One major partner is Disney, which has invested nearly $1 billion in IT to transform the visitor experience at Walt Disney World in Orlando through connected wristbands linked to analytics.

Another is Bocom, which is said to be able to locate a car in a city of more than 10 million people within 300 milliseconds.

While describing those examples as "leading-edge," Bryant admitted that only six percent of enterprise IT organizations have actually deployed big data solutions and only nine percent of enterprise workloads reside in the public cloud.


To "enable the masses to move," Bryant asserted that we're going to need technology solutions that are easy to deploy at scale and at lower cost.

Thus, Intel's strategy for "re-architecting the datacenter" consists of going back to the drawing board for tackling network and server infrastructures.

Bryant pointed toward Intel's own IT unit as a real-life example. Citing internal figures, she said the previous estimated time frame to provision a new service was two to three weeks.

With SDN, the time to provision a new service has been cut down to minutes with a three-step process: conceiving the idea for a service, configuring it through self-service, and bringing the service up and running.

With more than five million base stations deployed worldwide today, Intel's strategy suggests that new services at the edge of the network are moving toward service-aware, radio access networks to enable rapid delivery of content to devices.

For storage, Intel believes that traditional storage infrastructures are moving away from storage area networks (SAN) with shared capacity and toward "storage-as-a-service," where the application is in charge, allowing greater efficiency through a wider range of optimized solutions.

Intel is touting a lot of its own hardware for this pillar, from Xeon accelerators and cache acceleration software to storage System-on-a-Chip solutions for tiered capacity and availability.

To demonstrate where Intel is going with this, Bryant outlined one possible infrastructure based on Intel solutions, consisting of a Xeon E5-2690 processor with an Intel 520 Series solid-state drive, 10GbE adapters, and Intel's distribution for Apache Hadoop.

Without specifying the infrastructure for the comparative example, Bryant said that this combination could sort 1TB of big data within seven minutes versus four hours.

Overall, Intel's strategy is designed to cover the "full solution space" through app optimization and architecture consistency across Atom, Xeon, and Core processor families.

Promising lower power requirements with each new processor generation, Bryant said this will continue as Intel trots out the Avoton and Rangeley 22-nanometer editions later this year, followed by 14-nanometer Broadwell and Denverton shipping in 2014.

Wedged in the middle there is a new development also set to debut in 2014: the 14-nanometer Broadwell SoC, boasted to be the first SoC based on a high-performance architecture.

Jason Waxman, vice president and general manager of the cloud platforms group at Intel, outlined how this will affect Intel's approach for supporting cloud infrastructures, which he said boils down to three key elements: workload-optimized, application-driven, and software-defined infrastructures.

Waxman unveiled Intel's next-generation rack architecture plan, essentially designed to spread the wealth of available resources. Running on Atom and Xeon silicon, Intel's new rack scale scheme is based on an open network platform comprising photonics and switch fabrics along with storage-PCIe-SSD support and caching.

The benefits are supposed to be up to 1.5 times density improvements, up to 2.5 times better network uplinks, up to six times better power provisioning, up to 25 times better network downlinks, and up to three times cable reduction.

Waxman also went back to the Intel roadmap, singling out the Atom Processor C2000 product family.

Waxman reiterated that these second-generation architectures, codenamed Avoton and Rangeley, are scalable up to eight cores with seven times higher performance and up to four times higher performance per watt. Datacenter-class features include 64-bit workload-optimized SoCs, ECC memory, and Intel's virtualization technology.

The endgame is taking advantage of the opportunity promised by cloud systems.

Citing an April forecast from Lightspeed Venture Partners, Waxman argued that software-defined networking is expected to grow at a compound annual rate of 175 percent, producing up to $5.5 billion in revenue by 2016.
