War on tera: Intel picks C for parallel computing

Intel has been showing off a programming model which it claims will help C and C++ developers take advantage of parallel computing without the need for any code changes.


Speaking at the Intel Developer Forum in Shanghai last week, Wu Gansha said that the model, called Ct, will let developers use their C++ programs for parallel computing applications "without having to modify a single line of code".

With Intel and others seeking to push chips with an ever-increasing number of cores, code now needs to be rewritten in ways that allow tasks to be split up and handled in parallel, a significant technical hurdle.

Wu said Ct is "pretty mature now for quad-core and eight-core", but did not give a timeframe for when Ct will be ready for programmers.

"When terascale processors [where there are tens or hundreds of cores on a single chip] come out, it will be available," he added, "either through productisation or open source."

According to Intel, "Ct code is dynamically compiled, so the runtime tries to aggregate as many smaller tasks or data parallel work quanta [as possible] so that it can minimise threading overhead and control the granularity according to runtime conditions."

Typically, applications for terascale computing, where trillions of calculations can be done with terabytes of data, have centred on scientific research: image recognition, genomics, meteorology, medical imaging and seismic data processing are considered future uses for the technology.

Intel, however, believes the processing power presented by terascale computing could also be used by individuals, for example, in cars.

"It could detect the car in front of you, perhaps behind you, and avoid a collision by warning you of problems," said Jerry Bautista, director of technologies management at Intel. Terascale computing could also open up the possibility of new types of haptic interfaces, video mining and better ray tracing.

However, despite the determination of chipmakers to add more cores, there are already looming limitations on terascale computing. "There are a lot of challenges, it's not just the software... ray tracing is bandwidth-intensive, it requires terabytes of bandwidth," Bautista said.

"There are no technology hurdles that are show-stopping," he added, predicting terascale systems will become more common within seven to 10 years.

CNET News.com's Ina Fried contributed to this report.
