Chip maker AMD is looking to aid those wishing to make use of GPGPU by throwing more acronyms at them in the form of HSA and hUMA.
CPUs are great at processing single-threaded code with branches, but not so good at parallel operations. GPUs are the opposite: they excel at crunching through parallel workloads but are weak at single-threaded code. This has given rise to general-purpose computing on GPUs (GPGPU), which aims to deliver the best of both worlds.
GPGPU may offer the best of both worlds in terms of processing power, but it has a drawback – it's not easy to leverage. Specifically, addressing memory is cumbersome: even when the CPU and GPU share the same physical memory chips, each has its own separate pool of memory. Data therefore has to be copied back and forth between the two, which not only wastes processing power but also adds a significant amount of code overhead.
AMD wants to eliminate this burden with a new system architecture called Heterogeneous Systems Architecture (HSA), and at the core of that is "heterogeneous Uniform Memory Access", also known as hUMA (as if we didn't have enough acronyms already).
Boiled down to its simplest terms, hUMA allows both the CPU and GPU to share the same chunk of memory, and this in turn makes the hardware simpler, which makes it easier for developers to leverage GPGPU.
The first AMD hardware to support hUMA will be the upcoming Kaveri APUs. These will feature Steamroller processing cores, and are expected to make an appearance during the second half of 2013.
Even better for developers is the news that hUMA will be supported by mainstream programming languages such as C++ and Java.
hUMA is expected to find its way into a broad range of hardware, from servers to games consoles. In fact, in an interview, PlayStation 4 lead architect Mark Cerny suggested that Sony's upcoming console may make use of this technology.