
Microsoft recapitulates IBM

Written by Paul Murphy, Contributor
This is part two of an answer to talkback contributor Carl Rapson's question: why, if dumb terminals were dumb then, would they be smart now?

This isn't an easy question, and to get to the answer I need to start with a long historical digression. Please bear with me; you'll see the linkage later.

Most people believe that the age of computing started somewhere in the late 1940s and early 50s and has carried forward in one accelerating wave ever since, but that's not true. There have actually been two great computing movements: one that started in the 1880s and became commercially important during the 1920s, and one that started in the late 1930s and became commercially important in the 1960s.

There's a fascinating book on part of this by James W. Cortada, Before the Computer: IBM, NCR, Burroughs, Remington Rand and the Industry They Created, 1865-1956 (Princeton University Press, Princeton NJ, 1993), which everyone ought to read. It's about the struggle for market supremacy during the initial widespread adoption of electro-mechanical counting machines in American business. IBM's 1921 Hollerith Type III Tabulator, for example, dramatically lowered the cost of recording certain kinds of transactions and allowed early adopters to lay off hundreds of clerks.

The introduction of those machines brought change - in particular it led to the creation of a new class of data processing professional and a new default organisational structure featuring a data processing department working within Finance.

Those 1930s data processing departments had three roles: to record transactions after they occurred; to generate standardised summary reports on those transactions; and to produce custom reports on request.

Data for use in accounting machines was coded on punch cards - in fact the standard 12 row, 80 column IBM punch card was patented in 1937 - and the machines were mainly good at counting, totalling, and sorting cards based on those coded values. Thus an application like General Ledger Maintenance would consist of multiple batch jobs: verify the transaction code on each card, sort the cards by account, produce general journal entries, count the cards, compute their totals by account to produce the general ledger entries, and sort and total them by account group to produce summary reports.

In those days each step, or batch, required its own job control and resource assignments, so a big data processing shop might easily accumulate ten thousand or more of these standardised routines in its action inventory - almost all of them designed to produce the results needed for the next step in some overall process. Thus a real application like "GL" might require forty batch jobs - each of them separate, but together forming an application if (and only if) run in the right order.
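To make the shape of that concrete, here's a minimal sketch (in Python, purely illustrative) of a card-oriented GL run decomposed into separate batch steps that only add up to an application when run in order. The 80-column field layout, the transaction codes, and the account numbering are all hypothetical assumptions for the example; real shops each had their own card formats and job control conventions.

```python
# Minimal sketch of a card-oriented General Ledger run as separate batch steps.
# The field layout below (code, account, amount in cents) is hypothetical.
from collections import defaultdict

VALID_CODES = {"DR", "CR"}  # hypothetical transaction codes

def parse(card):
    """Pull fixed-column fields out of an 80-column card image (hypothetical layout)."""
    return {
        "code": card[0:2],
        "account": card[2:8],
        "group": card[2:4],          # account group = leading digits of the account
        "amount": int(card[8:16]),   # amount in cents
    }

def verify(cards):                   # step 1: drop cards with bad transaction codes
    return [c for c in cards if parse(c)["code"] in VALID_CODES]

def sort_by_account(cards):          # step 2: sort the deck by account
    return sorted(cards, key=lambda c: parse(c)["account"])

def ledger_totals(cards):            # step 3: total by account (general ledger entries)
    totals = defaultdict(int)
    for c in cards:
        f = parse(c)
        totals[f["account"]] += f["amount"] if f["code"] == "DR" else -f["amount"]
    return dict(totals)

def group_summary(cards):            # step 4: total by account group (summary report)
    totals = defaultdict(int)
    for c in cards:
        f = parse(c)
        totals[f["group"]] += f["amount"] if f["code"] == "DR" else -f["amount"]
    return dict(totals)

# The "application" exists only as the ordered run of the separate steps.
deck = [
    "DR10010000012500".ljust(80),    # debit,  account 100100, $125.00
    "CR20020000012500".ljust(80),    # credit, account 200200, $125.00
    "XX30030000000100".ljust(80),    # bad code - rejected by the verify step
]
good = sort_by_account(verify(deck))
print(ledger_totals(good))           # {'100100': 12500, '200200': -12500}
print(group_summary(good))           # {'10': 12500, '20': -12500}
```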

Not much has changed since. Some engineers at Hughes Aircraft used an early computer to automate parameter passing between paired accounting machines. Grace Hopper and her staff extended that idea to create FLOW-MATIC, a language aimed at making it easy to pass parameters between job control decks and thus to assemble jobs going beyond the scale of individual machine steps. FLOW-MATIC became COBOL - which still replicates the head and roller movement controls needed for IBM's electromechanical punch gear from the thirties. The System/360 implemented the transition from mechanical card processing to electronic card image processing - but the IBM 3274 controller implements almost exactly the same card image controls as that 1921 Hollerith, the IBM 3278 terminal's block mode update sends 24 x 80 card images to that controller in very much the same way the input hoppers operated in the 1920s, and the Remote Job Entry station almost exactly duplicates functionality from 1937.
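To illustrate what that "card image" continuity means in practice, here is a short schematic sketch that treats a 24 x 80 block-mode screen update as a deck of fixed-width 80-column lines shipped in one block. This is not the real 3270 data stream - just an illustration, under that simplifying assumption, of the idea that the terminal ships whole fixed-format screens rather than individual keystrokes.

```python
# Schematic only: a 24 x 80 block-mode update handled as a deck of 80-column lines.
ROWS, COLS = 24, 80

def build_screen(lines):
    """Pad or truncate input to a fixed 24 x 80 buffer - a deck of 'card images'."""
    buf = [line[:COLS].ljust(COLS) for line in lines[:ROWS]]
    buf += [" " * COLS] * (ROWS - len(buf))
    return buf

def block_update(screen):
    """Ship the whole screen as one block, much as a hopper fed a whole deck of cards."""
    return "".join(screen)           # one 1920-character block per update

screen = build_screen(["ACCOUNT: 100100", "AMOUNT:  00012500"])
payload = block_update(screen)
print(len(payload))                  # 1920 = 24 rows x 80 columns
```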

The organisational structure in data processing didn't change either: the emphasis on discipline and control didn't change, the focus on after the fact processing didn't change, the assumption that data processing works for Finance and exists to reduce clerical cost and error didn't change, and neither did the batch job structure, the focus on report generation, or the separation between data processing and the data producers.

Notice that this doesn't make the mainframe architecture bad in itself - just fundamentally inappropriate to current organizational needs. People in the seventies and eighties didn't learn to distrust and even despise Data Processing because of the hardware or software involved, but in response to the inappropriate perpetuation of management attitudes and assumptions long since obsoleted by cultural and organizational change.

Think of it this way: I generally like horses and the people who look after them, but we're not living in Paul Revere's time. He had to ride around like a mad thing to get his job done; I'd just use email or the phone - and that's really what started to happen when the real computing revolution, the science-based mini-computer, became commercially visible in the sixties and seventies. Users still saddled with Data Processing's costs and restrictions were, metaphorically, watching other people use phones - and they wanted in.

Unfortunately, by the time science-based computing became commercial, IBM had a fifty-year head start on earning corporate loyalty, the average chief financial officer easily understood what its products did for him, and almost nobody noticed that scientific computing and data processing had essentially nothing in common beyond the use of computers.

As a result, companies like Digital Equipment Corporation were left to look for niche markets - like selling PDP gear to people using MTS to bridge the gap between the real need for time-shared terminal services and the typical senior administrator's assumption that all computers came from IBM and belonged to Data Processing. Similarly, CDC, whose gear outperformed IBM's 360 by nearly an order of magnitude, sold mainly to the military and into special purpose functions like airline crew scheduling, while a host of smaller players like Data General, Wang, and Honeywell sold science applications in shop floor optimisation, production scheduling, text processing, or medical pharmacy and laboratory support.

By the late seventies these intruders into IBM's corporate markets were growing rapidly, and users - the people bringing this stuff in and running it despite objections from Data Processing - were not only getting real productivity benefits but starting to raise serious questions about Data Processing's role. As a result a power grab developed, as mainframe managers tried to take control of enterprise computing by claiming that they, and only they, had the right to set enterprise computing standards and run enterprise applications like material requirements planning.

Unfortunately, they had the corporate clout to make it happen but neither the skills nor the technology to succeed, and so another round of multi-million-dollar boondoggles developed - clearing the field for the absurdities of 1984/5 in which, as I mentioned last week, 97% of personal computer sales went to IBM's overpriced and obviously inferior PC/AT because users demanded something and management wouldn't buy from anyone else.

The result was an epic conflict over control - which users might have won except that IBM crippled the PC by choosing PC-DOS over Xenix, thereby creating an architecture - the Microsoft client-server idea - that simply doesn't work no matter how much money and wishful thinking is thrown at it. As a result, expansions in desktop functionality and network connectivity were counterbalanced by a thinning out of the client role and a shift of control to the server - eventually producing what we have now: the locked-down corporate PC, entirely controlled from data center servers.

There's an ACM study by Julie Smith et al., "Managing your total IT cost of ownership" (Communications of the ACM, Volume 45, Issue 1, January 2002), that I love to cite on this. They assumed everybody of interest would use Microsoft's client-server architecture and set out to find, mainly by studying real users, the cheapest way of doing that. The answer? Turn the PC into an expensive terminal - lock it down hard and run everything centrally.

So how does that differ from mainframe practice in 1984? It differs dramatically in origin, but not so much in practice - fundamentally, the PC's costs, failures, and weaknesses allowed the mainframe mindset to take charge and ultimately produce what we have now, in which the locked-down desktop PC amounts to little more than a marginally prettier, but less reliable, 3278 dumb terminal to a central processor.

Tomorrow I'm going to talk about the alternative to all this: science-based processing, the user-managed mini-computer, and their logical successor today - the Unix business architecture, combining smart displays with big servers and Unix management ideas.

Meanwhile, however, there's a simple but convincing test you can do: find people old enough to remember working with dumb terminals on mini-computers or mainframes, then ask them two questions: what was the background of the people who ran the systems, and were the systems any good? What you find may surprise you: mini-computer users, like mainframe users, generally hated dumb terminal systems run by data processing professionals - but loved mini-computer systems with the same dumb terminals and basic software when these were run by user management.

So that's the bottom line: it's not the horse that's the problem - it's the refusal to use the phone.
