With the sudden surge of interest in low-power CPUs (specifically ARM-based servers) in the datacenter, NVIDIA has raised its hand and reminded us that its widely used Tegra CPUs are based on current-generation ARM technology with the extra kick of an NVIDIA GPU.
Building on the "Project Denver" goal announced at the beginning of this year, the plan is to incorporate the full line of NVIDIA GPU technologies, from the relatively lowly Tegra through to the HPC-focused Tesla, into combination packages pairing multiple ARM cores with the GPU, with an emphasis on retaining the power efficiency delivered by the ARM cores and their focus on mobile devices.
While it looks like there will be a lot of players in the power-efficient, non-x86 server-for-the-cloud movement, NVIDIA has a lot of experience in building systems that deliver performance through parallel processing, especially with its Tesla processors, which are well represented at the top end of the supercomputing world. This experience should serve the company well in designing servers that make use of ARM cores, and the addition of NVIDIA GPUs should allow it to deliver a wide range of application services beyond the relatively simple tasks that have been envisioned for ARM in the datacenter.
With the cloud allowing a scale-out model for increasing the performance of applications delivered as cloud services, NVIDIA's experience building massively parallel computing systems should translate into very competitive application performance from very energy-efficient systems, compared to the traditional high-performance x86-based datacenter server.
If NVIDIA were able to deliver, for example, a blade system with the kind of flexibility that lets you tune performance and application suitability by adding or subtracting blades (with all other components remaining the same), and still offer power savings at equivalent performance compared to traditional servers, it would definitely have a compelling argument.
This isn't the same idea as "build an ARM-based server with lots of CPUs that draws less power and can handle web-serving tasks"; this is scaling out processors to deliver enterprise-class computing through parallelism, with benefits in both power efficiency and performance. It is literally a modular computer design that, with the proper peripherals (networking, storage), can handle any task thrown at a state-of-the-art datacenter by adding additional, otherwise identical, processing modules. And that could change the way datacenters work.