Commentary: The pursuit of faster CPUs has AMD and Intel back at the core.
While Moore's Law is holding more or less true, there are some nasty potholes on the road to faster CPUs. Just look at the massive heat sinks and fans on the fastest P4. To clock them any faster, you have to start looking at liquid-cooling systems.
AMD has managed to avoid some of the thermal pitfalls by simply running its CPUs at a slower rate: the Athlon 64 3800+, for example, is clocked at only 2.4GHz, yet it still manages to outperform a 3.6GHz P4. AMD's CPUs also fall back to a much slower internal clock rate than Intel's when the workload is light, which helps them run even cooler.
So the question becomes: how do you overcome the thermal problems and still create a faster processor? The answer is to follow Moore's Law -- don't increase the CPU clock, simply place another CPU core on the processor. Both AMD and Intel have dual-core CPUs ready for market, boasting greater performance than their single-core CPUs at a similar or lower clock rate.
There is a caveat, though: a dual-core CPU will only deliver greater performance if your application is optimised for multiple processors, or if you are running more than one process at a time.
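The caveat above can be sketched in a few lines. This is a minimal Python illustration of my own (the function names and the two-way split are not from any real application): a second core only helps if the work is first decomposed into independent chunks that a pool can schedule side by side.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(lo, hi):
    # A toy CPU-bound task: sum of squares over a sub-range.
    return sum(i * i for i in range(lo, hi))

def split_sum(n, workers=2):
    # Decompose the range into one chunk per worker, then combine.
    # Only this decomposed form can keep a second core busy; note that
    # in CPython the GIL serialises CPU-bound threads, so a real
    # dual-core speedup needs processes or a runtime without that limit.
    step = n // workers
    bounds = [(k * step, n if k == workers - 1 else (k + 1) * step)
              for k in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda b: partial_sum(*b), bounds))

print(split_sum(100))  # same answer as the single-threaded sum: 328350
```

A single-threaded game loop is the opposite of this pattern: one long dependent chain of work, with nothing for the second core to pick up.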
If you look at the major driving force for faster CPUs -- games -- then you are not going to see any benefit in the near future. I realise that we tout multimedia and visualisation as must-haves requiring loads of CPU grunt, but the reality is that the only applications that truly challenge CPU and GPU performance are games. The problem is that games are not multithreaded, multiprocessor-aware applications: on a dual-processor system, the second processor is pretty much left twiddling its thumbs.
In the Lab, the vast majority of our testing is performed using what would be considered entry-level systems. I have never been fond of spending the Lab's budget on bleeding-edge products -- for the most part I could not see the sense -- unless it was a test requirement. But I must admit that I'm starting to bend a little. No, I haven't found a new killer game -- when do I ever have time to play games?
In the past I would line up multiple PCs to test products side by side. Using a single PC with multiple Ghost images was OK, but it did not give you the luxury of trying new features or running tests simultaneously.
It has taken a little while, but VMware has slowly worked its way into our software toolkit. If I'm testing the functionality of several applications, I simply create a corresponding number of identical virtual machines (VMs) -- just a matter of creating a single VM and then cloning it several times. I can then install each of the apps in its own VM and flick from one to another, following a particular path of functionality. Obviously, there are some issues with performance and compatibility testing, as the VM configurations are limited by the underlying hardware.
For the Labs, the days when every purchase was a middle-of-the-road PC are gone. We each crave at least one PC with the fastest CPU possible -- but how fast is fast?
The humble human brain's neurons take a relatively sluggish one-thousandth of a second to fire. Cannon fodder, one would think, for a 3GHz-plus CPU, but what the brain loses in speed it more than makes up for in quantity: in any one second, up to 10,000,000,000,000,000 synapses fire. That computational power could only be equalled by one million P4 computers, chewing up hundreds of megawatts of electricity.
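The million-machine figure holds up as back-of-the-envelope arithmetic. A quick sketch -- the per-P4 throughput is my own round assumption, not a figure from the article:

```python
# Rough check of the brain-versus-P4 comparison.
# Assumption (mine): a 3GHz P4 sustains on the order of 1e10 simple
# operations per second, and each synaptic firing counts as one "op".
synapses_per_second = 10_000_000_000_000_000  # 1e16, as quoted above
p4_ops_per_second = 10_000_000_000            # ~1e10, assumed

p4s_needed = synapses_per_second // p4_ops_per_second
print(p4s_needed)  # 1000000 -- about one million P4s
```

At roughly 100W apiece, a million P4s would indeed draw on the order of a hundred megawatts; the brain does its work on about 20W.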
Steven Turvey is Lab Manager of the RMIT IT Test Labs.
This article was first published in Technology & Business magazine.