
From Chapter Two: The appliance computing culture

The key thing differentiating the appliance computing community from the data processing one was its absolute focus on making the money, instead of on counting monies magically made by somebody else, somewhere else.
Written by Paul Murphy, Contributor

This is the 15th excerpt from the second book in the Defen series: BIT: Business Information Technology: Foundations, Infrastructure, and Culture

Note that the section this is taken from, on the evolution of appliance computing, includes numerous illustrations and note tables omitted here.

About this grouping

The application appliance information systems architecture consists of:

1 - one or more data center computers running an interactive operating system such as MPE, OS/400, PICK, SSP, Unix, or VMS and research or packaged applications;

2 - a systems management group which is part of operational management, not Finance; and,

3 - a data center business or organizational context defined mainly by functional support responsibilities.

Traditionally all of these would be described as mini-computer systems - a term from the sixties which lost its size connotation and came to mean simply "a commercial computer other than a mainframe" once many mini-computers grew far more powerful than the largest mainframes.

However, the distinguishing characteristic of this group is not its reliance on minis but its focus on interactive functional support delivered at the user's workplace through terminals like Sun's Sun Ray.

That's so different from automatic data processing, with its emphasis on after-the-fact reporting, that the two disciplines aren't really related despite their common reliance on digital computers.

Inside the data center the core operational difference shows up in the absence of the systems programmer. In a pure mini-computer environment the machine is treated solely as a means of delivering one or more applications, and there is no interest in programming at the operating system level.

In this environment computer usage is centrally controlled just as it is in the mainframe world but:

1 - the underlying applications model is based on providing direct support for business functions at the time and place those functions are executed;

2 - information integration is done through shared data rather than job scheduling; and,

3 - applications are usually purchased or licensed rather than custom developed.

There was a strong resurgence of this functionally oriented mentality in the late nineties as applications providers sidestepped the conflict between the market's perceived preference for Microsoft Windows based products and its demand for reliability by introducing "application appliances" whose central characteristic was reliability --achieved mainly by running the application on Unix and not allowing customers access to the OS level.
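
To make that lockdown concrete: one common Unix technique for denying OS-level access (whether or not any given appliance vendor used exactly this mechanism) is to make the application itself the account's login shell, so a customer signing on lands in the application and never sees a command prompt. The account name and application path below are invented for illustration:

    # Illustrative /etc/passwd entry; the account and path are invented.
    # The last field is the login shell: pointing it at the application
    # instead of /bin/sh means logging in starts the app, and exiting
    # the app ends the session, leaving no route to an OS prompt.
    operator:x:1001:1001:Appliance Operator:/home/operator:/usr/local/bin/app_menu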

The long-run business process differences between this environment and the mainframe one all derive from the fact that the interactive applications culture always focused on making the money, while the data processing culture evolved out of the after-the-fact record keeping mandate held by financial management.

Thus mainframers started by processing transactions the way financial clerks originally did --long after the business denoted by those transactions was done. For example, GL entries are made, and invoice totals are prepared, long after the deals are done. Over the years, however, that automated clerking mandate evolved toward real time support in large part in response to data entry problems including errors and delays.

In contrast, the mini-computer world started by supporting the people creating transactions at the time they created them, and then migrated toward business intelligence and other predictive functions aimed at helping those people create the right transactions.

Consider, for example, someone working in receiving at a distribution business. Prior to the mini-computer, incoming orders would be checked against paper purchase orders and both inventory management and payables sign-offs lagged considerably behind warehouse operations. With early interactive systems from makers like Honeywell and Microdata, companies could put a terminal at the receiving station and check receipts against orders on-line, thus eliminating this time lag while reducing costs and improving overall operational efficiency.

In an interactive functional support application of this type:

1 - users start and stop several different applications at will;

2 - users access applications from their work area using an interactive device like a terminal; and,

3 - the applications are designed to provide direct functional support for work being done by the user.

Thus the receiving process adds directly to inventory and the two applications are linked through access to the same database. As a result a receiving update entered on-line is immediately visible to a goods available for sale query issued in order entry.
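
A minimal sketch of that shared-data linkage, written in Python with SQLite standing in for whatever database a real installation would have used; the table, column, and function names are all invented for illustration:

    import sqlite3

    # Two "applications" - receiving and order entry - integrated only by
    # reading and writing the same database, with no batch hand-off between them.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE inventory (sku TEXT PRIMARY KEY, on_hand INTEGER)")
    db.execute("INSERT INTO inventory VALUES ('A100', 0)")

    def receive(sku, qty):
        """Receiving: a clerk at the dock posts a receipt on-line."""
        db.execute("UPDATE inventory SET on_hand = on_hand + ? WHERE sku = ?",
                   (qty, sku))
        db.commit()

    def goods_available(sku):
        """Order entry: in reality a separate program, here a second
        function sharing the same database."""
        (on_hand,) = db.execute("SELECT on_hand FROM inventory WHERE sku = ?",
                                (sku,)).fetchone()
        return on_hand

    receive("A100", 12)             # posted at the receiving station...
    print(goods_available("A100"))  # ...immediately visible in order entry: 12

The design point is simply that integration happens through the data: the moment receiving commits its update, order entry's next query sees it.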

For the Windows generation
Microsoft's Windows telnet emulates a DEC VT102 terminal. The window you get is 24 rows deep by 80 characters wide; the 80-character width comes from the first terminals being designed to show one complete 80-column IBM card image per line.

By default characters are transmitted when you hit carriage return (enter) and there are no graphics capabilities. The VT100 was Digital's standard terminal from about 1978 to 1983. The VT102 added a row of function keys.
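
For a feel of how a host application actually drove such a terminal, the sketch below emits standard VT100/ANSI escape sequences, which modern terminal emulators still honor; the screen text and variable names are invented:

    # Standard VT100/ANSI escape sequences; running this in a modern
    # terminal emulator paints a screen the way a host application
    # would have painted a 24x80 dumb terminal.
    ESC = "\x1b"

    clear_screen  = ESC + "[2J"     # erase the whole display
    home_cursor   = ESC + "[H"      # cursor to row 1, column 1
    goto_last_row = ESC + "[24;1H"  # cursor to row 24 of a 24x80 screen
    reverse_on    = ESC + "[7m"     # reverse video, e.g. for a status line
    reverse_off   = ESC + "[0m"     # back to normal attributes

    print(clear_screen + home_cursor + "RECEIVING -- PO LOOKUP", end="")
    print(goto_last_row + reverse_on + " F3=Exit   ENTER=Query " + reverse_off)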

Dumb terminals were called that because they did no local processing --and are sometimes known as "green screens" because many IBM terminals drew their text in green phosphor on a dark background.

The 1979 VT103 incorporated an 11/23 mini-computer inside the VT100 case and was, therefore, an early personal computer intended to act as a client to a VAX 780 in imitation of IBM's 5200 client for its Future Systems machines.

This architecture is tremendously effective but has consistently been denigrated by the mainframe community as somehow not real data processing, and by the Microsoft PC community because the clients aren't computers. As a result usage volume decreased throughout the nineties, with only IBM's iSeries machines and Sun's Sun Ray surviving as pure play mini-computer environments.

More recently, however, the increasing costs and risks associated with Microsoft's architecture have been driving a resurgence in appliance computing with sales of Sun Rays, for example, essentially doubling each year since about 2004.

Network Computing Devices
The NCD smart display, developed in the late eighties, gave the mini-computer the ability to deliver superior desktop graphical user interface support and was widely adopted in the Unix and iSeries environments.

Unfortunately NCD's management appeared to lose direction in the early nineties while potential competitors like Wyse and NCR focused on the PC market instead. As a result the network computing revolution promised by the spectacularly successful NCD 19C X-terminal simply didn't happen.

Today upgraded versions of the X-terminal are re-appearing as full-fledged smart displays. Sun's Sun Ray, for example, once again supports high end graphics and hardware based security, making it a natural desktop device.


Some notes:

  1. These excerpts don't (usually) include footnotes and most illustrations have been dropped as simply too hard to insert correctly. (The WordPress HTML "editor" used here supports only a limited HTML subset and forces frustrations like the CP/M-style line delimiters MS-DOS inherited.)

  2. The feedback I'm looking for is what you guys do best: call me on mistakes, add thoughts/corrections on stuff I've missed or gotten wrong, and generally help make the thing better.

    Notice that getting the facts right is particularly important for BIT - and that the length of the thing plus the complexity of the terminology and ideas introduced suggest that any explanatory anecdotes anyone may want to contribute could be valuable.

  3. When I make changes suggested in the comments, I make those changes only in the original, not in the excerpts reproduced here.
