From Apollo-age tech to the IT powering the spacecraft of tomorrow
Watch a Nasa shuttle burning a path into space or a video of Saturn's rings taken by the Cassini satellite and it's hard not to marvel at man's technological prowess.
But the surprising truth is that space exploration is built on IT which lags many years behind that found in today's consumer gadgets and corporate PCs.
To this day, Nasa still uses elements of technology that powered the moon landings of the 1960s and 1970s, while the International Space Station (ISS) - the manned station circling the Earth 250 miles above our heads - relies on processors dating back more than two decades.
To find out how supercomputers are being used to model the universe, see How galaxies are born inside computers
It's a similar story for Nasa's space shuttles, the soon-to-be-retired workhorses for manned spaceflight, which have only undergone a single major avionics computer systems upgrade since the shuttle programme was launched in the 1970s.
"In aerospace, you don't fly the cutting-edge technology that is being used on the ground by business," Alessandro Donati told silicon.com.
Donati is head of the advanced mission concepts and technologies office at the European Space Agency's (ESA) Space Operations Centre at Darmstadt, Germany - the hub from where the agency manages 15 in-orbit satellites and is getting ready for 11 future satellite missions.
When it comes to spacecraft, design reliability - and not bleeding edge technology - is the watchword, with onboard chips having to undergo extensive testing to prove their robustness and compatibility with the spacecraft's onboard software.
Each of its computer chips has to be "hardened" to protect it from the high-energy radiation that permeates outer space, a complex process that means the newest processors are almost never used onboard spacecraft.
Space exploration agencies have good reason for favouring robust chips over high-performance processors: first, to ensure the safety of both the crew and the hugely expensive spacecraft that took years to build; and second, to avoid having to fix kit onboard, which is tricky when a spacecraft is 200 miles up in orbit and nearly impossible when it's millions of miles from Earth.
"A spacecraft is not accessible - once it is launched it is there, so you have to be extremely sure that things work," said Donati.
Upgrading computing hardware is another task that is normally straightforward on the ground but that becomes an expensive and time-consuming job in space.
It is for that reason that when Nasa upgrades the processors used by various systems on the space station next year, it will be the station's first major avionics computer redesign in the 12 years it has been in orbit.
As new modules containing additional scientific experiments and other components have been added over its lifetime, the space station has grown to the size of a five-bedroom house - to the point where Nasa has no choice but to replace the close-to-maxed-out core systems that support the spacecraft.
Just speccing out and procuring the three Pentium chips for the upgrade has taken years, due to budgetary constraints and the need to design the hardware to minimise conflicts with the onboard system software.
"The challenge is trying to build new technology that looks like the old technology - you don't want to impact the software so it has to look as much like the old hardware as possible," said David Pruett, a former software and computer system manager for the space station who now works for Nasa contractor GeoControl Systems.
Along with new hardware, engineers must also ensure that software upgrades don't cause any hiccups in the space station's operations.
According to Dr Norman Kluksdahl, systems engineer with the mission operations facilities division at Nasa mission control, upgrades must be performed in parallel with the normal station operation.
"Then we have to seamlessly hand over [from the old software to the new software]," he told silicon.com. "I would equate it to driving your car down the road at 80km an hour and changing the tyre while you're moving."
To avoid any unwanted surprises, every new software revision goes through two and a half years of development, planning and testing before being uploaded to the station.
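The parallel-run-then-handover approach described above resembles what software engineers often call a shadow deployment: the candidate system processes the same live inputs as the old one, but only the old system's outputs are used until the two have matched across the full validation campaign. A minimal sketch - the two "systems" here are invented toy functions, not Nasa's actual software:

```python
# Toy sketch of a parallel run ("shadow deployment"): both systems see the
# same live inputs, only the old one's output counts, and every mismatch
# is logged for the upgrade team. All names and values are illustrative.

def old_system(celsius):
    """Legacy implementation: Celsius-to-Fahrenheit conversion."""
    return round(celsius * 1.8 + 32)

def new_system(celsius):
    """Replacement implementation that must behave identically."""
    return round(celsius * 9 / 5 + 32)

def shadow_run(inputs):
    """Run both systems on the same inputs; return any disagreements."""
    mismatches = []
    for value in inputs:
        live, candidate = old_system(value), new_system(value)
        if live != candidate:
            mismatches.append((value, live, candidate))
    return mismatches

# Hand over to the new system only once the mismatch log stays empty
# across the whole validation campaign.
```

The handover itself - Kluksdahl's tyre change at 80km/h - only happens once the shadow log has stayed empty for long enough to build confidence.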
Despite all of the rigorous testing that both Nasa and ESA put their spacecraft systems through, the occasional problem does inevitably arise.
Earlier this year one of the space station's command and control computers went down after ESA uploaded a badly formatted command to the station that caused the computer to crash.
"Normally we do everything we can to prevent something like this from getting into space so when something like this occurs, it's all hands on deck," Marcy Kerr, International Space Station software development manager, told silicon.com.
"We have recovery plans in place so we were able to get our computers back up, but it was a fairly shocking response [from the ISS command and control system]."
Operators within Nasa's mission control monitor the station's systems 24/7, and teams of software engineers are on standby to fix any problems that occur.
ESA knows all too well the problems that the failure of a piece of computer hardware can cause a spacecraft.
The Goce Earth observation satellite was temporarily left unable to send scientific data back to the ground after its primary and back-up computers suffered separate, unrelated failures within six months of each other.
Normal communications were restored with the satellite, which is making the most precise map yet of how gravity varies across the globe, after ESA and its industry partners sent commands from the ground that raised the temperature of the compartment holding the satellite's computers.
On the ground: Inside ESA's European Space Operations Centre
Every spacecraft relies on a network of computers and operators on the ground to oversee its mission.
Since ESA's European Space Operations Centre (ESOC) was created in Darmstadt, Germany in 1967 it has managed more than 60 satellite missions - from satellites observing Earth from orbit to space telescopes peering into the depths of the cosmos.
Today ESOC manages 15 in-orbit satellites - planning missions, shepherding craft into orbit, monitoring critical systems onboard and collecting and processing the data that they beam back.
Because the computer systems operated at ESOC play a vital role in helping to guide satellites and to relay the data they collect, the agency has to be very careful when upgrading mission systems on the ground.
The process of rewriting the ground control software for an active satellite mission remains very challenging, according to Mario Merri, head of the mission data systems division at ESA.
"It is a considerable piece of work: you do it very carefully, you go through an extensive validation campaign, then you run both systems in parallel, before changing them over," said Merri.
"You never endanger the mission."
Due to the complex and costly nature of upgrading ground control systems at ESOC, ESA will sometimes run an entire satellite mission - which can last several years - without performing an upgrade to the original hardware, OS or applications.
Consequently ESOC runs computers of varying ages, with the oldest machines dating back at least 15 years, and staff have to support these legacy machines alongside the modern HP workstations used to control the newer satellite missions.
"This is one of the issues we are facing: a significant portion of our work is spent on maintaining the older systems - which we consider a less noble part of our work," Merri said.
The tendency for ESA satellite missions to be extended beyond their original projected lifespan - the ESA still supports the Earth observation satellite ERS-2 that was launched in 1995, for example - also adds to the range of different hardware and operating systems that ESOC needs to support.
Maintaining such aged hardware unsurprisingly brings challenges such as getting hold of spares to repair computers that are no longer being made or finding staff with the skills needed to maintain software written in old languages.
"You have a problem with the availability of expertise: to find a good Fortran programmer or someone that can work on the VMS [Virtual Memory System] OS is not easy today - you don't find many young people who want to do that," Merri said.
Inside Nasa Mission Control Center
At the Johnson Space Center in the US, Nasa manages manned spacecraft, space shuttles and the space station from its Mission Control Center.
Today Mission Control Center has 550 workstations linked to 150 servers running 23 different sub-systems.
Since 1996, Nasa has used standardised Unix workstations in both the shuttle and space station flight control rooms, which system engineer Kluksdahl said has greatly simplified their running.
Before the 1996 revamp the flight control rooms relied upon a patchwork of incompatible computer hardware of varying ages, which needed custom-built interfaces to get them to work together.
"If you replaced one piece of it, you either had to redevelop the old interface [between the machines in the flight control room] for the new hardware or you had to replace the whole building at once.
"Because the machines were obsolete, procuring spare parts was difficult and it became almost impossible," Kluksdahl said, adding that Nasa staff had to hunt out spare parts on the second-hand computer market.
One of the last relics of the Apollo-era technology still in use within mission control is the Nascom block communications protocol, which ground operators use to communicate with the other ground stations.
"That protocol dates back to 1966 and drives the hardware we use on both ends of the communications network," said Kluksdahl.
"That's where we get hit with the bill for custom hardware that no one uses any more - we have some stuff that's 16 years old."
However such legacy hardware is now the exception within the flight control rooms - the move to using standardised workstations within mission control has greatly reduced the number of different computer systems that Nasa needs to support and almost eliminated the agency's reliance on custom-built hardware.
Today there are two computer workstations at every critical console within the flight rooms, allowing the console operator to swap to a back-up machine if the first one fails.
Before the introduction of pairs of standardised workstations in 1996, technicians had to disassemble computer consoles and attempt repairs while manned missions were taking place.
"In the old days, because of the limitations of hardware, every console was tuned for only one purpose. If you had a console failure you lost that console and a team had to come in and try and fix it, meaning the entire team lost capabilities," said Kluksdahl.
"Now if a workstation fails, we can move to another workstation and be in business. It takes some of the mad frantic scramble out of maintenance."
Given the difference in age between the new hardware used on the ground and the legacy hardware used onboard its spacecraft, Nasa uses a buffer system to translate information sent between the two.
The buffer system converts commands sent from the newer systems on the ground into a format that the older systems onboard the spacecraft can understand, as well as translating data sent back from the spacecraft into a form that the newer systems on the ground can understand.
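The translation the buffer performs can be sketched as a thin adapter layer. The command names, codes and packet layout below are entirely hypothetical - invented for illustration, not Nasa's actual Nascom formats:

```python
# Hypothetical sketch of a protocol-translation buffer. Assume the modern
# ground system issues commands as dictionaries, while the legacy onboard
# computer expects fixed-width binary packets: a 2-byte command code
# followed by a 2-byte unsigned parameter, both big-endian.
import struct

LEGACY_PACKET = struct.Struct(">HH")

# Invented command codes for the example.
COMMAND_CODES = {"SET_HEATER": 0x0101, "READ_TEMP": 0x0102}

def ground_to_spacecraft(command: dict) -> bytes:
    """Translate a modern ground command into the legacy binary format."""
    code = COMMAND_CODES[command["name"]]
    return LEGACY_PACKET.pack(code, command.get("param", 0))

def spacecraft_to_ground(packet: bytes) -> dict:
    """Translate a legacy packet back into a form new systems understand."""
    code, value = LEGACY_PACKET.unpack(packet)
    name = next(n for n, c in COMMAND_CODES.items() if c == code)
    return {"name": name, "value": value}
```

Keeping both translations in one place is what lets most upgrades on either side touch only the buffer rather than the systems at each end.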
Having the man-in-the-middle system also means that the majority of updates to spacecraft or ground computer systems only require changes to the buffer system, and not to the systems on the ground or onboard the spacecraft.
Monitoring systems onboard the shuttle and the ISS from the ground is also a much simpler job today, thanks to the introduction of the modern standardised workstations within mission control.
Prior to the upgrade, console operators had to monitor systems onboard the shuttle by deciphering flashes from warning lights on their consoles and by picking through impenetrable blocks of green figures on their console monitors.
"There was a lot of brainpower required to understand the spacecraft," said Kluksdahl.
"They would spend their entire shift scanning rows and columns of numbers, remembering what those values were and knowing what they were supposed to be, all the time watching for data to go outside of allowable values.
"The most powerful thing we have got now is the graphics display capability with colour displays. If something is nominal it's displayed green, if it goes outside of acceptable limits, it is displayed yellow or red.
"It's going to catch their attention instantly."
The workstations can also plot graphs of data over the course of hours or days, allowing operators to easily spot significant changes that occur incrementally over long periods of time, such as gradually rising temperature or humidity within a spacecraft.
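The green/yellow/red limit check Kluksdahl describes boils down to comparing each telemetry value against two nested sets of bounds. A minimal sketch - the parameter names and limit values here are invented for the example:

```python
# Illustrative limit-check behind a green/yellow/red telemetry display.
# The limits below are made up; real missions define them per parameter.

def classify(value, nominal_range, caution_range):
    """Return a display colour for a telemetry value.

    nominal_range: (low, high) bounds for normal operation -> "green"
    caution_range: wider (low, high) bounds -> "yellow"; outside -> "red"
    """
    lo, hi = nominal_range
    if lo <= value <= hi:
        return "green"
    clo, chi = caution_range
    if clo <= value <= chi:
        return "yellow"
    return "red"

# Example: hypothetical cabin-temperature limits in degrees Celsius.
cabin_temp_colour = classify(22.0, nominal_range=(18, 27),
                             caution_range=(15, 30))  # "green"
```

A colour change is what catches the operator's eye; plotting the same values over hours or days is what reveals the slow drifts a single reading would hide.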
"Our operators are now using more information and doing less manual data processing in their head - we have simplified their lives considerably," said Kluksdahl, adding that Nasa has been able to reduce the number of staff needed to support operators in the shuttle and space station flight rooms.
But the agency is not stopping there: Kluksdahl said that the agency is continually researching how it could use new technologies to automate manual processes, and to further reduce the number of people it takes to run mission control.
"We are always trying to reduce our footprint," said Kluksdahl. "We're looking at virtualisation, advanced network management capabilities, workflow processes - basically any technology that is out there.
"Right now it takes a dozen and a half people to run the building and all of its systems - our vision is that one day we could control the entire building with two people."