Homer Ahr had been asleep for 15 minutes when he got a call from his boss at Johnson Space Center.
"All he said was, 'Homer, get into mission control as fast as you can.' I didn't have an idea of why I was going in there," he said.
"Within 30 minutes at most I knew that they were truly in a life or death situation," said Ahr.
Earlier that evening, Apollo 13 astronaut Jack Swigert had brought NASA mission control to a standstill with the now famous statement, "Houston, we've had a problem."
The Apollo 13 craft was more than 300,000 kilometers into its journey to the moon when an explosion tore through its service module.
On that day in April 1970, with the vessel venting its precious supply of oxygen, NASA knew it had few options for getting the three Apollo astronauts on the stricken spacecraft home safely.
"From that realization on, all we did was do everything we could to get them back," Ahr said.
"It's sort of like being in the ER, you know? If you have to jam a needle into somebody's chest to reactivate their heart, you just do it. You don't think about what you're doing. You just do it."
One of the many pressing issues was how to mount a rescue without firing the engines on the damaged part of the craft. At Johnson Space Center in Houston, TX, mission control narrowed the options to a maneuver never attempted before. The survival of the astronauts now hinged on using the descent engines on the lunar lander to put the craft on a homeward trajectory.
Mission control had limited time to work out how to pull off the maneuver. Luckily, just months before the crew blasted off from Cape Canaveral, two programmers had written the software for mission control to calculate just such a move.
One of those programmers was the 22-year-old Ahr, just a year out of college and working for IBM as a maneuver-planning expert supporting NASA flight officers in mission control.
"If what had occurred on the Apollo 13 had occurred on Apollo 12, we would have had a real bear of a problem," said Ahr, since the algorithms for calculating the maneuver had only just been added.
"Physically you could do it, but computationally and in the mission control center, it would have been extremely difficult to figure out when to do the maneuver and how to do it," he said.
Mission control needed assurance that firing the descent engines would work, and Ahr and a colleague spent the night running what was called a dispersion analysis, checking every possible parameter to see if the move would put the craft on the right course.
"I couldn't even tell you the number of times we ran computations," said Ahr, "but we did the dispersion analysis, and the conclusion was, 'Go ahead and do the maneuver.'"
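The article doesn't describe the actual software, but the idea behind a dispersion analysis can be sketched in a few lines: sweep small errors in the burn parameters and check that every combination still lands inside an acceptable corridor. All names and numbers below are invented for illustration, not actual Apollo 13 values.

```python
import itertools

# Illustrative nominal values -- NOT actual Apollo 13 figures.
NOMINAL_BURN_S = 264.0      # assumed burn duration, seconds
NOMINAL_ACCEL = 0.33        # assumed acceleration, m/s^2
TARGET_DV = 87.0            # assumed required delta-v, m/s
TOLERANCE = 2.0             # assumed acceptable error, m/s

def delta_v(burn_s, accel):
    """Velocity change from a simple constant-acceleration burn."""
    return burn_s * accel

def dispersion_analysis(burn_errors, accel_errors):
    """Try every combination of parameter errors; return True only if
    all of them keep the resulting delta-v within tolerance."""
    for db, da in itertools.product(burn_errors, accel_errors):
        dv = delta_v(NOMINAL_BURN_S + db, NOMINAL_ACCEL + da)
        if abs(dv - TARGET_DV) > TOLERANCE:
            return False    # at least one case drifts off course
    return True
```

With small assumed errors the sweep passes; with larger ones it flags the maneuver as unsafe, which is the "go / no-go" character of the analysis Ahr describes.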
The computers as heroes
The eventual safe return of the astronauts was due to far more than that series of calculations, but Ahr's recollection illustrates just how crucial the early computers were to the lunar missions.
With its goal of putting a man on the moon, NASA's Apollo program is perhaps the most ambitious technical endeavor ever undertaken. Throughout the 15 Apollo missions, which included six moon landings, putting the craft on the correct trajectory to and from earth demanded exacting precision in both position and velocity.
Every maneuver that would be carried out by the spacecraft was calculated in advance by IBM computers in the Real-Time Computer Complex (RTCC) at Johnson Space Center, and checked against the craft's actual maneuvers throughout the mission.
Just as important to the return of Apollo 13, and the success of the wider program, were the computer systems underpinning the numerous simulators at Johnson Space Center and Cape Canaveral. The simulators included working copies of the spacecraft's command and lunar modules, and allowed NASA astronauts and the flight controllers on the ground to practice every part of the journey: from the launch, to the lunar landing, to earth re-entry, working in tandem as they would during the mission.
Simulators replicated not only the workings of the onboard computers, but also fed data into ground systems, recreating the experience of an actual mission as closely as possible, and preparing staff to deal with a host of potential problems.
Jack Winters, who managed simulator testing and started out writing software for simulators during the earlier Gemini missions, said the training for the flight controllers was invaluable for the Apollo Project.
"On Apollo 13, for example, they were much, much better able to spot the problem and develop workarounds because of the training," he said.
During Apollo 13, these simulators would let engineers and astronauts on the ground--working alongside astronaut Ken Mattingly, who had been replaced on the Apollo 13 flight crew at the last moment--figure out how to bring the command module's onboard systems back online with the limited power available, a crucial step ahead of re-entering earth's atmosphere.
Merritt Jones was working at Johnson Space Center as a computer programmer and an astrodynamicist, calculating the mechanics of how a spacecraft moves in orbit.
Working out the correct order to restore the lander's systems was incredibly important for the safe return of the Apollo 13 crew, he said.
"They had to reduce the power required for the startup sequence. The startup sequence was critical. If you didn't start in the right sequence, the systems wouldn't work well or wouldn't work at all."
The computers used during the Apollo missions were impossibly crude by modern standards. Each of the RTCC's five IBM System/360 Model J75 mainframes had about 1MB of main memory, not even enough to load a typical web page in 2017.
"The software that controls what happens when you move your mouse on your PC--the mouse driver for Windows--takes more memory than all the NASA supercomputers put together had for Apollo," said Jones.
Despite filling an entire hall with electronics, the mainframes each topped out at about one million instructions per second (MIPS), some 30,000 times slower than the fastest processors used in today's personal computers.
NASA was bumping up against the limits of what technology at the time could do, which often meant relying on cutting-edge, and sometimes unproven, hardware and software. And where the tech simply didn't exist, NASA's commercial partners had to invent it.
A case in point was the Apollo Guidance Computer (AGC). While the ground systems might sound underpowered, the onboard computers were orders of magnitude simpler. The guidance computer for the Apollo spacecraft needed to be small enough to fit in a cramped capsule and light enough for the Saturn rocket to get it into space. The wardrobe-sized IBM mainframes that NASA used on the ground were out of the question.
Massachusetts Institute of Technology Instrumentation Laboratory (MIT-IL), which had the contract to develop the AGC, turned to a new technology: integrated circuits, which had the potential to make computers faster and smaller by etching multiple transistors onto small chips. When MIT-IL won the contract in 1961, integrated circuits had been invented only two years earlier and were something of an unknown quantity, but by 1963 the lab had ordered some 60 percent of the world's available ICs.
"A lot had to do with power and weight," said Bob Zagrodnick, an engineer who worked on the AGC at Raytheon, which built 43 of the computers during the course of the Apollo program.
"These are small units and they didn't take up a lot of power. We'd constantly strive to minimize weight and power consumption."
More unusual was the way the software running on the AGC was literally woven together. At Raytheon's production line in Waltham, MA, weavers looped wire through circular magnets, creating a metallic tapestry whose pattern corresponded to digital zeros and ones, which in turn encoded the programs run on the computer.
"They actually threaded the flight program information into the core rope memories," said Zagrodnick. "It was a very intense activity, so mostly women who were good at needle and thread were the ones who weaved or put together the core memories."
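The weaving Zagrodnick describes can be mimicked in a short sketch: a wire threaded through a magnetic core reads as a 1, a wire routed around it as a 0. The word size below matches the AGC's 15-bit words, but the layout and names are illustrative only.

```python
WORD_BITS = 15  # the AGC used 15-bit words (plus a parity bit, omitted here)

def weave(words):
    """Turn program words into a weaving pattern: for each word, one
    'through'/'around' decision per core, most significant bit first."""
    pattern = []
    for word in words:
        wire = ['through' if (word >> bit) & 1 else 'around'
                for bit in reversed(range(WORD_BITS))]
        pattern.append(wire)
    return pattern

def read_rope(pattern):
    """Recover the stored words by sensing which cores each wire threads."""
    words = []
    for wire in pattern:
        word = 0
        for decision in wire:
            word = (word << 1) | (1 if decision == 'through' else 0)
        words.append(word)
    return words
```

Because the program was fixed into the physical weave, a software change meant re-weaving the rope, which is why the flight software had to be frozen months before launch.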
Back at Johnson Space Center, IBM found itself facing an entirely different challenge. As the name suggests, the computers in the RTCC needed to be able to handle new jobs and data in real time, to fulfill their role monitoring spacecraft trajectories and driving complex simulations of the missions. The problem was that, in the early 1960s, real-time operating systems didn't exist.
According to Ahr: "We had to get a multi-tasking, multi-jobbing operating system in the 1960s -- before IBM had ever built a multi-jobbing, multi-tasking operating system."
So IBM invented one, modifying the existing OS on its System/360 mainframes. It also created a real-time database called DataTables, "well before you had anything called a relational database," said Ahr, with strict rules around which data could be updated and when, to ensure critical information about the spacecraft would be accurate and available when needed.
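The article doesn't detail how DataTables enforced its update rules, but the general idea -- restricting which task may write each piece of shared data so mission-critical values can't be clobbered -- can be sketched as follows. The class, method names, and ownership model are all invented for illustration.

```python
class SharedTable:
    """Hypothetical sketch: each entry has a single owning task that is
    allowed to update it; everyone else is read-only."""

    def __init__(self):
        self._data = {}
        self._owner = {}

    def define(self, key, owner, value=None):
        """Register an entry and the only task allowed to update it."""
        self._owner[key] = owner
        self._data[key] = value

    def update(self, key, value, task):
        """Reject writes from any task other than the registered owner."""
        if self._owner.get(key) != task:
            raise PermissionError(f"{task} may not update {key}")
        self._data[key] = value

    def read(self, key):
        return self._data[key]
```

Under a scheme like this, a misbehaving telemetry job cannot overwrite a trajectory value owned by the orbit-computation task, however the two are scheduled.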
Working with a new OS added a fresh wrinkle to a task already fraught with challenges, with calculations throwing up unexpected results due not just to application errors, but also mistakes in the relatively untested operating system.
Shooting for the moon
The pressure on the young IBM programmers was intense, with individuals working as many as 80 hours in a week, in a bid to hit the hardest of deadlines.
Winters said: "NASA had a schedule. They were going to fly on certain dates. They announced that to the public and IBM surely did not want to be the one that caused the flight to be delayed."
The willingness to work round the clock was partly driven by the punishing timetable, but also by the thrill of working to put a man on the moon and the desire to beat the Russians in the space race.
"We were so excited," said Winters. "We were young. I think I was 21 when I started. We didn't know what we couldn't do. We just thought we could do anything. Here I was working on software that was going to go into space and eventually to the moon. The adrenaline factor was tremendous."
Being young enough to not fully appreciate what they couldn't or shouldn't do sometimes paid off handsomely, according to Harry Hulen, who primarily worked on the software used at the simulators in Houston before going on to oversee others' code.
Hulen recalls struggling to simulate the propellant tanks on the Agena unmanned rocket during the Gemini missions -- the US manned spaceflight project that preceded Apollo -- until he took a trip down to his local Sakowitz department store.
"There on the shelf, along with the usual kinds of books that you see in a store, was a book called Rocket Propellant and Pressurization Systems," he said.
"I bought it, and it turned out that that book was exactly telling me what to do with the requirements that I had. I just totally ignored the requirements that NASA had written and programmed out of this book that I bought at Sakowitz," said Hulen. "It worked well. No one caught me, and the results of it worked just fine. They were able to simulate certain things to a higher degree of accuracy than was required.
"The important thing is I was, maybe, 22 years old, and I didn't know I was doing the wrong thing. I just said: 'This looks to me like what I ought to be doing.' I suspect that there was quite a bit of that," he said.
That's not to say it was always easy to strike the right balance between personal and professional commitments. Many of the young programmers were starting families at the time, but often found themselves having to work late to test software, due to machines being in constant use during the day.
"A lot of our development time was in the middle of the night," said Winters.
"I spent many a late hour in the computer room testing software and overseeing the testing of software. In fact, my first divorce was probably caused by all the hours I worked during that period," he said.
At the back of every engineer's mind wasn't just the success of the mission, but also the lives of the astronauts that depended on software doing its job.
From the moment Tom Steele joined IBM in 1963 -- working out of Huntsville, AL on software for the guidance systems of the Saturn rockets used during Apollo -- his team was made acutely aware of what was at stake, he said.
"Every contractor had a program of manned-flight awareness. Those programs were designed to both make you do things better, but also to make you be able to handle the idea that you could kill a guy if you messed this up," he said.
Ahr felt that responsibility particularly keenly during Apollo 11, the 1969 mission that landed the first men on the moon.
His job at that time was to run software that computed maneuvers of the rocket and the spacecraft at each stage of the mission, and check the real-time position of the spacecraft against the projected results, working to support the flight officers in mission control.
The role was by no means straightforward, requiring Ahr to sit at a console listening to about eight different phone lines at once, as well as the audio feed from the astronauts. He remembers the fear he felt when those lines began echoing to the sound of an alarm as the Apollo 11 lunar module descended towards the moon's surface.
"When those alarms were going off during the first descent, it was scary, to say the least," said Ahr. "It was chaotic to listen to all of that at one time and try to stay calm, keep your head up and not panic.
"It was like you're sitting at a football game, there's tons of people yelling, and screaming, and hollering, plus at the same time there's a fire. You know, fire trucks show up, ambulances show up, police show up, and they all have their sirens on," he said. "That's what you're listening to as you're trying to sit there and calmly watch the real-time data come in."
Fortunately the descent to the lunar surface was near-perfect: "As good a descent as we could have ever flown," said Ahr, thanks to the many fail-safes built into the Apollo systems. In this instance, the lunar module's rendezvous radar had been left switched on, flooding the Apollo Guidance Computer with jobs. The system, however, was able to prioritize the tasks needed for the descent and ignore those related to the radar.
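The behavior Ahr describes -- shedding low-priority work when the computer is overloaded -- can be sketched with a simple priority scheduler. This is a deliberate simplification (the real AGC Executive restarted and re-requested jobs rather than picking from a list each cycle), and the task names and capacity number are illustrative.

```python
def run_cycle(tasks, capacity):
    """tasks: list of (priority, name) pairs. Run the highest-priority
    tasks that fit in this cycle's capacity; drop the rest."""
    ordered = sorted(tasks, key=lambda t: t[0], reverse=True)
    return [name for _, name in ordered[:capacity]]

# Illustrative workload during a descent, under radar overload.
tasks = [
    (10, 'descent_guidance'),   # must run
    (9,  'throttle_control'),   # must run
    (1,  'rendezvous_radar'),   # sheddable
    (1,  'radar_counters'),     # sheddable
]
```

With capacity for only two jobs per cycle, the descent-critical tasks still run every time while the radar jobs are silently dropped -- which is why the alarms, alarming as they sounded, did not endanger the landing.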
Ahr credits his ability to stay calm and do his job to the extensive training ahead of the mission, where all manner of problems had been simulated, and an awareness of the important role he and other ground staff had.
"Every action you did on the console affected the success of the mission and could affect the lives of the astronauts," he said. "You had to have sort of a characteristic where the more pressure there was and the more stress there was, the better you worked."
There was mutual respect between the IBM engineers and their NASA colleagues, born out of their close working relationship and the high stakes involved in making manned spaceflight a reality.
"We were truly a band of brothers," said Ahr. "We were always committed to the same goal: successful missions."
"I'm kind of amazed we pulled it off"
Adding to the pressure were the profound limitations of the primitive technology of the time, whether it was the ease with which console operators like Ahr could mistype data into a teletype machine during a mission, or the need to write programs for the IBM mainframes on punched cards.
"In retrospect I'm kind of amazed that we were able to pull it off," said Winters.
Each stage of the programming process was incredibly cumbersome. Programs for the IBM mainframes at Houston were written on coding pads, which would then be given to keypunch operators who would punch them onto card decks.
With the main IBM Federal Systems Division office situated nearly a mile from the computers, a courier often had to deliver the card trays to the Computing Center and return the results, limiting programmers to an average of 1.2 runs per day.
Hulen said: "If you were lucky, you got a run back the next day. What you got back was paper, and quite often, it was a core dump," a sign that the program had crashed. Debugging this code -- mostly assembly language with some Fortran -- to find the cause of the problem was nothing like it is today.
The core dump would be a stack of paper, "maybe eight or nine inches" high, without a word of human language on it.
"It would all be in hexadecimal, and you had to learn to read that and find key points that were within the dump. What you needed would probably have fit on one page, but there wasn't any real means to know what you really needed, so you got these huge core dumps back," said Hulen.
"You had to be very fluent in hexadecimal and be able to recognize your assembler language instructions literally in machine language," he said. "You had to know a good many people to get help. On a bad day, it might take two or three days to work it out."
Documentation was also minimal, particularly in early missions. Hulen recalls working on telemetry software related to the Agena rocket used in Project Gemini. Programmers kept a bit-by-bit breakdown of each of that software's basic components, known as words, which was written on a piece of cardboard they called the bit board.
"The only documentation was this bit board that we had put together. It worked quite well until one night the cleaning crew threw it out," said Hulen. "We had no backup for it. We had a real crisis there, where we had to figure out what each one of those bits was and try to recreate the bit board, which never was totally successful."
Over time, however, the unprecedented scale of the programs being worked on required IBM to develop sophisticated project-management plans, techniques that would be used for decades to come.
"These were extremely large software programs. There were over a million lines of code of application software," said Winters. "There were very few successful examples of developing that amount of software successfully."
There were about 500 IBM programmers based around Johnson Space Center in the Clear Lake area of Houston, spread over about 10 different buildings. To effectively coordinate the workforce, IBM came up with an approach of breaking down the software into modules managed by different teams and setting dates for when design, coding, integration and testing of the code would be complete.
Winters said one of the early managers, Dick Hanrahan, "pioneered that kind of management technique," adding that the approach would go on to be used for large projects across IBM's Federal Systems Division.
The power of the processors and the amount of memory available was so constrained that programmers would spend an extraordinary amount of time trying to simplify code, particularly if an instruction was carried out repeatedly.
"I would spend months trying to put the equation into a form that would take less memory and execute faster and still get an acceptable correct answer. And I do mean months," said Jones of his work calculating spacecraft trajectories on IBM mainframes.
"I would spend a month trying to remove one instruction from a loop," Jones said. "If I took one instruction out, I could save, say, 10,000 instruction executions. I only had 1,000,000 available to me and the operating system took some of those."
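The kind of saving Jones describes -- removing one instruction from a loop that runs thousands of times -- is essentially what compilers now call loop-invariant hoisting. A hedged sketch, with an invented orbital-rate calculation standing in for whatever the real trajectory code computed:

```python
import math

def slow_positions(times, mu, radius):
    """Recomputes the (unchanging) angular rate on every iteration."""
    return [math.cos(math.sqrt(mu / radius**3) * t) for t in times]

def fast_positions(times, mu, radius):
    """Hoists the invariant: one sqrt for the whole loop instead of one
    per pass -- the same answer for a fraction of the instructions."""
    omega = math.sqrt(mu / radius**3)
    return [math.cos(omega * t) for t in times]
```

With, say, 10,000 time steps, the second version performs 9,999 fewer square roots -- exactly the scale of saving Jones is talking about, on a machine with only a million instructions per second to spend.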
Another instance saw Jones writing code to directly manipulate the zeros and ones of the machine code, using "masking instructions" to derive a far more efficient way of checking if one number was bigger than the other.
"If you looked at the code, it would look horrible, but it would be fast," he said.
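The article doesn't give the actual machine-code sequence Jones used, but a comparison done with only subtraction, masking, and a shift -- no branch -- can be sketched by simulating 32-bit two's-complement arithmetic in Python:

```python
MASK = 0xFFFFFFFF  # wrap values to 32 bits, as the hardware would

def is_greater(a, b):
    """Return 1 if a > b, else 0, using no comparison or branch.
    Assumes |a - b| fits in 31 bits (no overflow), as a real masking
    trick on period hardware would also have to assume."""
    diff = (b - a) & MASK    # b - a, wrapped to 32 bits
    return (diff >> 31) & 1  # sign bit: 1 means b - a < 0, i.e. a > b
```

As Jones says, code like this "would look horrible" next to an ordinary comparison, but on a machine where every instruction in a hot loop counted, avoiding a conditional branch was worth the obscurity.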
According to Jones, without these extreme optimizations, software that carried out real-time calculations, such as the spacecraft's current position, simply wouldn't have been able to run on the computers available.
Given that the IBM mainframes used in mission control had thousands of times less memory than a machine today, software had to be loaded in sections, each relating to a different phase of the journey to the moon. Each section took seconds to load, further complicating the process of tracking a spacecraft that could be moving as fast as seven miles per second.
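Loading a program in per-phase sections is what later came to be called overlaying. A hedged sketch of the idea -- the phase names, section contents, and class are invented for illustration:

```python
class OverlayLoader:
    """Hypothetical sketch: only the section for the current mission
    phase is resident in memory at any one time."""

    def __init__(self, sections):
        self.sections = sections   # phase name -> routines held on storage
        self.resident = None       # which section is currently in memory
        self.loads = 0             # count of (slow) section swaps

    def enter_phase(self, phase):
        if self.resident != phase:
            self.loads += 1        # each swap cost real seconds in 1969
            self.resident = phase
        return self.sections[phase]
```

Every phase transition paid the load cost, which is why those seconds of load time mattered so much while a spacecraft was covering miles per second.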
Systems onboard the spacecraft were massively more limited than those on the ground, with the Instrument Unit computer aboard the Saturn V rocket capable of just 15,000 operations per second and not supporting floating-point math. "You can't even begin to compare that to today," said Jones.
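A machine without floating-point hardware, like the Saturn V's Instrument Unit computer, typically relies on fixed-point arithmetic: real numbers are represented as integers scaled by a power of two, so multiplication reduces to integer operations plus a shift. A sketch of the general technique (the 16-bit scale factor is illustrative, not the Instrument Unit's actual format):

```python
SCALE = 16  # binary point position: value = raw / 2**16

def to_fixed(x):
    """Encode a real number as a scaled integer."""
    return int(round(x * (1 << SCALE)))

def fixed_mul(a, b):
    """Multiply two fixed-point values. The raw product carries 2*SCALE
    fractional bits, so shift back down to restore the format."""
    return (a * b) >> SCALE

def to_float(raw):
    """Decode a scaled integer back to a real number."""
    return raw / (1 << SCALE)
```

The cost is that the programmer, not the hardware, must track where the binary point sits at every step -- one more reason trajectory code of the era took months of careful hand-tuning.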
The scarcity of computing power meant the relative value of the programmer's time to the computer's time was the inverse of what it is in 2017, said Jones.
"Today, you can buy as much compute power as you need if you have the money to buy it, and the programmer's time is worth more than the computer by far. In those days, one hour on an IBM mainframe was worth one month of someone with a Master's degree in math doing this work. My monthly salary was the same as one hour on the mainframe."
Surprisingly, Steele said most of the programming techniques used today were available in the 1960s: "You just didn't have any computers that could take advantage of them," he said.
The limitations of the technology were so numerous and the calculations so complex that the best that the engineers could hope for was getting as close to certainty as possible.
"There weren't any absolute yes / nos. [But] you had to make the decision like there were," said Steele.
Getting as close as possible to certainty meant testing for every conceivable outcome. However, certain things couldn't be tested on the ground, like how the weightless environment of space would disturb air bubbles in the soldered joints on electronic circuits. So learning from every earlier mission was equally important.
"The one thing that is true: we never, ever flew a mission that didn't have a failure. Ever," said Steele.
Despite the seemingly insurmountable challenges, the Apollo program not only achieved President John F. Kennedy's massively ambitious goal of putting an American on the moon by the end of the 1960s, but followed it up with multiple return trips to earth's rocky satellite.
Decades later, the Apollo programmers describe a feeling of pride, but even more of being incredibly lucky.
"I feel very blessed to have had the opportunity to have been a part of that. I feel it's probably one of the most fortunate things that has ever happened to me in my life," said Winters.
Steele sums it up even more succinctly: "There was never a day, never a single day there wasn't a problem to solve, and it was amazing. It was an amazing ride."