
A crash-proof computer, really?

British computer scientists claim that they have designed a system that actually works the way it should, all the time.
Written by Tuan Nguyen, Contributor

Anyone who's ever used Microsoft Windows Millennium Edition knows all too well the trauma of a computer crash in the middle of typing up an assignment. If you were lucky, auto-save salvaged most of your work. Though the situation has improved in the era of multi-core chips, computing is by no means a crash-proof experience.

That may all change if a purported breakthrough reported by a team of scientists at University College London holds up to scrutiny. They claim to have built a self-repairing computer that prevents lapses in processing from wrecking one's workflow. While everyday computer users can certainly appreciate a world where our trusty laptops run seamlessly, the implications are far more profound for complex systems like military drones, where the ability to self-repair would prove invaluable in combat, or for businesses that depend on critical processes staying up and running.

The researchers say that their machine achieves this by modeling its functionality on the human brain, a nature-made device seemingly prone to constant errors (Where did I put my keys?). That's because unlike a chip's linear, sequential execution of instructions, the mind operates in a decentralized and chaotic fashion, with neurons firing and misfiring, often simultaneously and all the while forging new connections or repairing old ones. Our minds may get disoriented at times, but they won't go all blue screen on us.

According to New Scientist, here's how the researcher Peter Bentley designed a brain-like computer:

He and UCL's Christos Sakellariou have created a computer in which data is married up with instructions on what to do with it. For example, it links the temperature outside with what to do if it's too hot. It then divides the results up into pools of digital entities called "systems".

Each system has a memory containing context-sensitive data that means it can only interact with other, similar systems. Rather than using a program counter, the systems are executed at times chosen by a pseudorandom number generator, designed to mimic nature's randomness. The systems carry out their instructions simultaneously, with no one system taking precedence over the others, says Bentley. "The pool of systems interact in parallel, and randomly, and the result of a computation simply emerges from those interactions," he says.

Basically, because each of these systems functions separately and all at once, no single one becomes an Achilles' heel. Each carries its own memory, so the whole operation doesn't break down if one of them can't reach the computer's main memory, which tends to be a key cause of crashes in multi-core systems. The computer also circulates extra copies of its instructions among the systems, enabling each to repair corrupted data on the fly.
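To make the idea concrete, here is a minimal sketch in Python of that scheme as described above. This is an illustrative toy, not the UCL implementation: the class name `System`, the specific instructions, and the repair logic are all assumptions made for the example. Each system bundles its own data with the instruction that acts on it, keeps a redundant local copy of that instruction, and is executed in pseudorandom order rather than by a program counter.

```python
import random

class System:
    """A toy 'system': data married to its instruction, plus a backup copy."""
    def __init__(self, name, value, instruction):
        self.name = name
        self.value = value
        self.instruction = instruction   # working copy of the instruction
        self.backup = instruction        # redundant copy held locally

    def repair(self):
        # If the working instruction has been corrupted, restore it
        # from the redundant copy -- repair happens on the fly.
        if self.instruction is None:
            self.instruction = self.backup

    def step(self):
        self.repair()
        self.value = self.instruction(self.value)

# A pool of independent systems; none is a single point of failure.
pool = [
    System("thermostat", 95, lambda t: t - 5 if t > 70 else t),  # cool if too hot
    System("counter", 0, lambda n: n + 1),
]

rng = random.Random(42)      # pseudorandom scheduler instead of a program counter
pool[0].instruction = None   # simulate corruption of one system's instruction

for _ in range(20):
    rng.choice(pool).step()  # systems execute in no fixed order
```

After the run, the corrupted system has restored its instruction from the backup and kept computing, while the other systems carried on independently, which is the gist of why no single failure takes the whole machine down.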

But as with any idea that promises a sweeping fix to a stubbornly persistent problem, there will be those who greet it with skepticism. John C. Dvorak of PC Magazine points out that other systems have promised a crash-free experience only to fall victim to real-world complications such as gunk code and malfunctioning hardware.

The only technology I've seen that has some aspects of self-healing, or at least self-repairing, has been the hard disk systems that constantly remap the drive when bad sectors appear. This was done out of necessity since no hard disk is without flaws and became more and more important as disk capacity jumped ahead.

That said, hard disks still do fail; a hardware component can crap out and the disk is done. You cannot self-heal a faulty part.

And there are some machines that have borderline components. These components do not fail, but they sort of sputter under adverse conditions. I had a computer that worked perfectly, but when the temperature in the room rose over 80 degrees, it would constantly crash. Some component was flaking out at high temperatures. Components are always a threat to the stability of the machine. How does this "self-repairing" nonsense work under those circumstances?

And let's not forget that there are already a ton of system checks built into Microsoft Windows. You've all witnessed how they work. Suddenly the machine stops working as the OS tries to reorganize something or rework some subroutine or who knows what. It could take an hour or two to finish, if you are patient enough to wait it out. Generally speaking, you hit the reset button and get back to work after a reboot.

The researchers were confident enough in their technology to demonstrate it and discuss its inner workings in April at the IEEE International Conference on Evolvable Systems in Singapore. Maybe somewhere down the road, once it has weathered everything thrown at it, we will feel good about easing up on the restart button.

This post was originally published on Smartplanet.com
