
Stanford pushing for new parallel computing model

Written by Christopher Dawson, Contributor

Stanford University, in conjunction with Sun, AMD, NVIDIA, IBM, Intel, and HP, is working to create a new computing model that fully exploits modern multicore processors. As a feature in Ars Technica points out, parallel computing has always posed significant challenges to programmers. However, since most of us now have dual- or quad-core processors sitting on our desks, a lot of serious computing power is going untapped.

While multithreaded operating systems can all leverage these extra cores to some extent, future generations of software will need to use them far more actively. More importantly, even a small amount of non-parallel code in a program sharply limits how much adding more processors can speed it up. With the days of significant serial performance scaling behind us, there's a concern that computing as a whole is going to suffer: the huge progress we've seen in the digital age could grind to a halt.
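That limit is Amdahl's law: if a fraction s of a program must run serially, the speedup on N cores is at most 1/(s + (1 - s)/N). A minimal sketch in Python (the 5% serial fraction is an illustrative assumption, not a figure from the article):

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Upper bound on speedup when `serial_fraction` of the work
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even 5% serial code caps 16 cores at ~9x (not 16x), and the
# ceiling as cores grow without bound is only 1 / 0.05 = 20x.
for cores in (2, 4, 16, 256):
    print(f"{cores} cores: {amdahl_speedup(0.05, cores):.2f}x")
```

Notice how quickly the returns diminish: going from 16 to 256 cores only roughly doubles the speedup, because the serial 5% comes to dominate.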

While this may sound like doom and gloom, similar efforts at other universities point to a general consensus in the industry: future advancement (and certainly the ability to mine, search, and use the terabytes of data being created all the time) depends on solid parallel computing skills.

The Stanford effort incorporates educational components for new programmers, but its real focus is developing new, highly scalable, hardware-independent methods for completing parallel computing tasks. The new group at the school has a budget of $6 million over the next three years.
