
Boot out the BIOS

A good BIOS replacement such as EFI is going to be necessary, and it should be capable of surviving the next twenty years just as the old technology has survived the last
Written by Rupert Goodwins, Contributor
Almost every aspect of the original IBM PC's architecture has been bypassed, superseded or extended beyond recognition by now. One that hasn't is the BIOS, the Basic Input/Output System: the collection of software routines that defines how the computer behaves between being turned on or reset and the point at which the operating system is loaded and running. This was a trivial task with the original IBM PC, with its strictly limited choice of storage, input/output, memory and graphics cards: a short set of power-on self-tests (POSTs) and a selection of preparatory commands to the circuitry was all that was necessary. Because this software was permanently present in the hardware, usually resident in a chip as irremovable as any other, it was called firmware.

These days, a PC's operating system may reasonably expect to find a complex mix of storage devices, network, graphics and sound subsystems, and memory configurations ready for it when the computer hands over control. All of these need their own initialisation and configuration information -- ideally with some diagnostic help when things go wrong. The BIOS makes this very hard to get right: each component has to co-exist with the others in an under-specified environment, and all the software has to be hand-crafted, usually in assembler, to work precisely with the hardware. This often leads to subtle incompatibilities between devices from different manufacturers -- the worst sort of problem to diagnose and fix -- while the manufacturers themselves have huge difficulties maintaining their product lines.

Intel has identified all of the above as a wart on the nose of progress. The company has therefore developed the Extensible Firmware Interface (EFI), a collection of ideas that is much more like a tiny, complete operating system in its own right than a monolithic program. As the company says, the concept really just applies standard best software engineering practice to the problem of booting and controlling the low-level aspects of a computer.

Among those practices, EFI provides a coherent, scalable platform environment. This means that the rules for writing software to use EFI are largely independent of the nature or size of the hardware underneath it: as far as possible, people writing initialisation, diagnostic or control firmware for expansion systems don't need to know what sort of computer it'll be running on. EFI takes care of shielding the details from them.

EFI manages its own area of storage, normally envisioned as a partition on a hard disk, so the firmware doesn't have to live on a chip on the expansion hardware -- and isn't limited to any particular size. Hardware manufacturers can therefore add many more diagnostic and control options, and include support for many different kinds of computer systems and configurations, without having to program huge amounts of expensive onboard flash memory.

EFI also has its own basic network, graphics, keyboard and storage handling software -- which other software can use -- all independent of whatever operating system will be running when the computer finally gets going. This opens up many new ways to remotely diagnose and fix problems, as well as letting manufacturers make far more user-friendly embedded software than has been the case until now: anyone who's battled with new PC hardware that stops their computer from starting up, emitting a curt error message at best, will appreciate this.
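To see how different this is from the BIOS world, consider what a trivial piece of pre-boot software looks like under EFI. The sketch below is illustrative rather than taken from Intel's specification: it assumes the open-source gnu-efi headers, whose efi_main entry-point convention stands in for whatever a real firmware build environment would provide.

    #include <efi.h>   /* gnu-efi's rendering of the EFI C interfaces */

    /* An EFI image's entry point receives its own image handle and a
       pointer to the system table, through which every firmware service
       is reached. */
    EFI_STATUS efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
    {
        /* ConOut is the firmware's own console driver, available before
           any operating system -- or even a bootloader -- exists. */
        SystemTable->ConOut->OutputString(SystemTable->ConOut,
                                          L"Firmware says hello\r\n");
        return EFI_SUCCESS;
    }

The equivalent job in a BIOS option ROM would be hand-written assembler poking at interrupt vectors; here it is ordinary C calling a documented table of services.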
EFI does all this by moving some of the definitions of Intel architecture hardware into data structures (non-Intel architectures that can use the same data structures may be accommodated), by mediating requests for storage in the disk partition, and by defining a set of boot-time and run-time services. These services include the loading, running and management of EFI drivers, which must be written by manufacturers to strict, well-defined rules for co-existence and performance. However, these drivers can be written in ordinary C with ordinary development tools, and debugged just like any other software. To keep their size down, and to ensure compatibility across multiple systems, they must compile down to byte code for EFI's own virtual machine, which -- like Java's -- runs common code on wildly differing hardware. (A sketch of how a driver or loader sees these services appears at the end of this article.)

None of this is, by itself, particularly difficult -- and there have been previous attempts to add this sort of idea to the PC architecture. Intel has studied these and their commercial failures, and concluded that EFI had to have very good compatibility with legacy systems. Furthermore, it must be very portable across different architectures: these somewhat contradictory requirements have contributed to the thousand-page document that is the current specification.

Intel has further decreed that Itanium 2 systems will use EFI, and it would be very happy if IA-32 designers also started to switch. It has also done some experimental ports of EFI onto XScale, just enough to know that it works. What roles EFI has to play in digital rights management (DRM), trusted platform computing (such as Intel's LaGrande technology) and other security-related ideas have yet to be divulged. Likewise, how Intel intends to license the technology -- and to whom -- is a question worth answering. But there is no doubt that EFI, or something like it, is going to be necessary, and that a good BIOS replacement should be capable of surviving the next twenty years just as the old technology has survived the last.
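And here, as promised, is that sketch of the boot-time/run-time split. It is illustrative rather than drawn from Intel's specification: it again assumes the gnu-efi headers, and the helper function's name is made up.

    #include <efi.h>

    /* Hypothetical helper: shows the two service tables that a driver or
       loader reaches through the system table. */
    EFI_STATUS report_time(EFI_SYSTEM_TABLE *SystemTable)
    {
        EFI_TIME now;

        /* A boot-time service: stall for one second. Boot services
           disappear the moment the OS loader calls ExitBootServices(). */
        SystemTable->BootServices->Stall(1000000);

        /* A run-time service: read the real-time clock. This table keeps
           working after the operating system has taken over -- part of
           what makes remote diagnosis and repair practical. */
        return SystemTable->RuntimeServices->GetTime(&now, NULL);
    }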