From Chapter Three: The Windows Culture

Summary: The history of Windows, from 3.11 to Vista, is one of ad hoc adaptation to market conditions and changing customer expectations. To a large extent it has been about playing catch-up to Unix and the Mac - with some fascinating devolutions along the way.

This is the 28th excerpt from the second book in the Defen series: BIT: Business Information Technology: Foundations, Infrastructure, and Culture


Once several million PCs moved into offices it quickly became apparent that users needed some better means of sharing documents than "sneaker net" - carrying floppies between machines.

At that time (early 1986) the choices were:

  1. The PC-AT could be connected to mainframe networks using an SNA board to act as a terminal and remote job entry station, but there was little ability to use the mainframe as a file sharing switch for PCs.

  2. Running Xenix on the PC gave it access to Unix networking (by then well established in both the already declining Usenet/UUCP form and the growing internet form) using either serial or TCP/IP connections.

    Adding Xenix was expensive: network boards typically cost about $1,195, Microsoft's Xenix ran to $795 for the base package, and the machine needed additional RAM - $1,125 for the 512K upgrade.

    The pre-configured IBM PC-AT based multi-user system ran around $13,995.

    Equivalents from companies like Altos, Tandy, DEC, and NCR were considerably cheaper and ran standard Unix, but people generally bought the PC first and only then discovered networking and multi-user issues.

  3. Switching to the Mac, where Appletalk and Appleshare were widely used and highly effective; or,

  4. Adopting Novell software on a "server" with Novell network cards and client software on the PCs.

    Novell cards and software worked with PC-DOS in the standard PC-AT and so quickly gained market share, reaching about 90% of the market by 1989, with IBM's Token Ring based LAN Manager taking most of the rest.

The basic Novell software worked extremely well in part because its networking technology, IPX, was a marginally modified form of Xerox's XNS protocols, and in part because it did very little - it merely turned a server into a shared disk and printer manager accessible by users on network attached personal computers. That, however, was enough to turn Novell into a billion dollar company by mid 1990.

The response IBM and Microsoft co-developed was a networking technology known as NetBEUI (NetBIOS Extended User Interface), which extended the PC's NetBIOS services to make the machine "network aware" and so laid the foundations for a PC network architecture known as SMB/NetBIOS - one still in use in many corporate Microsoft Windows installations today despite having been officially deprecated years ago.

File and print sharing through a server led to demand for more shared services - particularly shared databases and, from this, the PC version of client-server (processing on the client, data on the server), started to evolve.

At about the same time, however, there was a growing rift between Microsoft and IBM. The success of GUIs on Atari, Macintosh, and Unix computers had demonstrated their potential and, of course, both IBM and Microsoft wanted to own the inevitable PC version.

As a result Microsoft, which had developed but dead-ended a partial windowing environment modelled on a GEM application for the Tandy 2000 (the only 1983/4 Intel machine with the power to run anything remotely like a GUI, in this case using a 12MHz 80186), revived that project and issued a partially graphical interface for DOS 3.1 - Windows 2.0, in December of 1987, just over four years after first announcing Windows.

At that time Microsoft also released Word and Excel for the Macintosh as well as a version of Excel for the PC combining the application binary with a limited set of run-time graphics libraries to produce a Windows-like effect when the application was launched from MS-DOS. Sales of this product were extremely limited in large part because the PCs of the time were unable to run it effectively - leading many Microsoft reviewers and evangelists to stage their Word and Excel demonstrations on Macs.

By mid 1988 Computerworld and most commentators outside the PC press agreed that IBM's forthcoming OS/2 signified the end of Unix and would own the desktop by 1990. When OS/2 release 1.1 shipped in December of 1988, it proved to be a superior product, but most of Microsoft's applications wouldn't run on it, and most of the clone makers turned out to have ironclad contracts with Microsoft preventing them from adopting it.

As a result IBM was left without a market for OS/2 outside its own loyalists in the mainframe community.

When Microsoft Windows 3.0 shipped on May 22, 1990 it turned out to be the same kind of MS-DOS application the December 1987 release had been, but with one significant difference: essentially all of the internal APIs and libraries had been changed.

As a result work done by leading software developers like WordPerfect, then the market leader in office automation, on Windows 2.1 had to be written off and their applications rebuilt from the ground up to work with the new APIs and other components. Microsoft, however, had ported its Word (introduced in 1984 on the Mac) and Excel (introduced in 1985 on the Mac) products from the Macintosh to the PC under Windows 3.0, and so had a substantial head start that gave it dominance of these markets before competitors could get their products out.

Enter Windows NT
Windows NT (New Technology) started out as a version of Windows 3.0 that did not require MS-DOS to run, and was initially known and tested as Windows 4.0. That product failed because many PCs wouldn't boot it, although its core elements were eventually morphed into Windows CE.

When IBM's OS/2 work seemed to threaten Microsoft, the NT project was revived. Dave Cutler, a key VMS designer from DEC, was given more latitude; the product was renamed NT 3.1 to surpass IBM's pending OS/2 release 2.1; and development re-started on what was then intended as a kind of second generation VMS.

Unfortunately the fastest Intel based PCs then available (the i80486 was state of the art in late 1992 and early 1993) were too limited to run it even with additional memory, and this version too failed in public beta. NT 3.51, a stripped down variant designed to accommodate processor and memory limits, worked reasonably well and gave rise to an NCD software project that later became the basis for Citrix Systems, but the OS was generally so slow and unstable that public support for Microsoft started to erode.

NT 4.0, really little more than a port of DEC's VMS to Intel, was released in July of 1996 as a substitute and was the first Microsoft OS to fully integrate TCP/IP capabilities. It was an immediate hit as critics of 3.51 scrambled to get back on board with Microsoft and the usual publications made the usual announcements about the death of Unix.

The hype was so overblown that serious people predicted 30% annual growth for Microsoft through 2010 while others trumpeted the end of brick and mortar retailing at the hands of the Microsoft e-commerce machine. Unfortunately what the enthusiasm really led to was 500:1 price-earnings ratios for some internet companies and billions in later losses for investors.

When Windows 3.1 appeared in April of 1992 it was a place holder for the much more significant Windows for Workgroups releases: 3.1 in October of 1992 and 3.11 in November of 1993. These versions included the SMB (server message block) based peer to peer networking promised for 3.1 and changed the industry. Windows for Workgroups brought local area network file sharing into the Microsoft product line and was therefore the first product to nail down the Microsoft client-server paradigm, setting the stage for the all-Microsoft office environment we often see today.

Windows for Workgroups, like Windows 95, 98, and ME, ran as an MS-DOS application, thereby maintaining backward compatibility with older hardware and inadvertently re-creating the two tiered architecture of the original CP/M product - this time with MS-DOS acting to serialize user input coming from applications running under the graphical shell.

SQL stored procedures
Every database application has to have some logical processing, at a minimum to format data, but more commonly to do fairly complex things; for example, multiplying hours worked by hourly rates to get gross wages due.

In mainframe and mini-computer applications this logic is handled in the application code outside the database because that is the simplest and most natural way of doing it. Underlying that design, however, is the assumption that the operating system will resolve database serialization and other multi-user issues to make it possible to share one application among many users.

In the original Microsoft client server world you had many users, but no multi-user operating system. Since serialization had to be handled somewhere, it devolved to the database engine which then had to handle application logic as well as data storage and retrieval. The stored procedure idea was developed to accommodate this.

In a payroll example, an application request would start a script written in SQL (Structured Query Language, see Databases in Chapter 5) and executing within the database management system's memory area. This script would create a temporary payments table, read the hours from one table and rates from another, do the multiplication, store the results in the temporary table, and then send the requesting application client the contents of that temporary table.

Operationally, stored procedures tend to be extremely inefficient, but are the only option in a PC client-server model where the database is the only multi-user component.
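The payroll logic described above can be sketched in miniature with Python's sqlite3 module. This is an illustration only: SQLite has no true stored procedures, and the table and column names (hours, rates, payments, employee_id) are invented for the example rather than taken from any real system - but the SQL inside the function mirrors what the in-engine script would run.

```python
import sqlite3

# A sketch of the stored-procedure logic described in the text, simulated
# with SQLite. Table and column names are invented for illustration.
def run_payroll_procedure(conn):
    cur = conn.cursor()
    # Create a temporary payments table, as the stored procedure would.
    cur.execute("CREATE TEMP TABLE payments (employee_id INTEGER, gross REAL)")
    # Read hours from one table and rates from another, do the
    # multiplication, and store the results in the temporary table.
    cur.execute("""
        INSERT INTO payments (employee_id, gross)
        SELECT h.employee_id, h.hours * r.rate
        FROM hours h JOIN rates r ON r.employee_id = h.employee_id
    """)
    # Send the requesting client the contents of that temporary table.
    return cur.execute(
        "SELECT employee_id, gross FROM payments ORDER BY employee_id"
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE hours (employee_id INTEGER, hours REAL);
    CREATE TABLE rates (employee_id INTEGER, rate REAL);
    INSERT INTO hours VALUES (1, 40.0), (2, 35.0);
    INSERT INTO rates VALUES (1, 20.0), (2, 30.0);
""")
print(run_payroll_procedure(conn))  # -> [(1, 800.0), (2, 1050.0)]
```

In a real Sybase or SQL Server setup the same script would typically live inside the engine and be invoked by name from the client, rather than shipped over as text.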

Notice that because all connections are handled via TCP/IP, the database running the stored procedure need not be on the same machine as the one storing the data - it only needs to be the one the application connects to first.

As a result a three tiered database access architecture - PC client, stored procedure server, and data server - quickly developed to support multi-user applications on Windows sized servers.
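A toy sketch of that three tier split, assuming nothing beyond Python's standard library: a "data server" thread that only hands out raw rows over TCP, and a "procedure server" function that fetches them, applies the hours-times-rate logic, and returns results while storing nothing itself. The row format, all names, and the use of an ephemeral local port are invented for illustration; a real client tier would in turn connect only to the procedure tier.

```python
import socket
import threading

def serve_rows(srv, rows):
    # Data tier: accept one connection and ship every stored row, verbatim.
    conn, _ = srv.accept()
    conn.sendall("\n".join(rows).encode())
    conn.close()

def procedure_server(port):
    # Middle tier: fetch raw rows from the data tier over TCP, apply the
    # hours-times-rate logic, and return results. Nothing is stored here.
    c = socket.socket()
    c.connect(("127.0.0.1", port))
    raw = b""
    while chunk := c.recv(4096):
        raw += chunk
    c.close()
    results = []
    for line in raw.decode().splitlines():
        emp, hours, rate = line.split(",")
        results.append((emp, float(hours) * float(rate)))
    return results

rows = ["1,40,20", "2,35,30"]  # employee, hours worked, hourly rate
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # ephemeral port avoids collisions
srv.listen(1)
t = threading.Thread(target=serve_rows, args=(srv, rows))
t.start()
print(procedure_server(srv.getsockname()[1]))  # -> [('1', 800.0), ('2', 1050.0)]
t.join()
srv.close()
```

The point of the split is visible in the code: the middle tier holds logic but no tables, so it could sit on any machine the client can reach.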

The local area networking capabilities within Windows 3.11 were severely limited, but adequate for the simple file and print sharing characteristic of most small offices.

Third parties, including Sun Microsystems, then ported what became known as TCP/IP "stacks" (because the software consisted of multiple layers) to Windows 3.11 to give it some internet access capabilities.

As a result Windows LANs became Windows WANs (wide area networks) that ran parallel to the LANs, which, of course, continued to use SMB/NetBIOS networking. In combination these facilities enabled departmental workgroups to share files, obsoleted the previous "sneaker net" approach, gave users access to internet resources, and quickly enabled the development of stored procedure execution engines within client-server databases.

In this configuration a number of Windows PCs running a Windows specific piece of software known as a client connect to the database using TCP/IP. The database engine then handles all required serialization and is, therefore, the multi-user component that turns a collection of single user PCs into a multi-user system.
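The serialization point can be shown with a minimal sketch (class and table names invented): threads stand in for networked single-user PCs, and only the shared "engine" object - here a lock around one SQLite connection - serializes their updates, just as the database engine does for real clients.

```python
import sqlite3
import threading

# Minimal sketch: the engine, not the clients, is the multi-user component.
# A lock around a single SQLite connection serializes all updates.
class TinyEngine:
    def __init__(self):
        self.conn = sqlite3.connect(":memory:", check_same_thread=False)
        self.conn.execute("CREATE TABLE counter (n INTEGER)")
        self.conn.execute("INSERT INTO counter VALUES (0)")
        self.lock = threading.Lock()

    def increment(self):
        # All serialization happens engine side, as with a real DBMS.
        with self.lock:
            self.conn.execute("UPDATE counter SET n = n + 1")

    def value(self):
        with self.lock:
            return self.conn.execute("SELECT n FROM counter").fetchone()[0]

engine = TinyEngine()

def client():
    # A single-user "PC" that knows nothing about the other users.
    for _ in range(100):
        engine.increment()

threads = [threading.Thread(target=client) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(engine.value())  # -> 1000
```

Without the engine-side lock the ten "clients" would corrupt each other's updates - which is exactly the job the single-user PCs of the period could not do for themselves.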

Oracle was the first major company to support this approach to multi-user access for isolated single user systems, developing and selling its own TCP/IP "stack" and related tools, known as SQL*Net, to do this. Sybase, however, took the next step - optimizing the database for stored procedure execution rather than database management - and so quickly became the pre-eminent PC database supplier.

There have been thousands of changes to the Windows architecture since, but none are technically significant. Today's Windows for Workgroups style LAN is implemented using vastly more powerful hardware running Windows 2003 instead of Windows 3.11, uses TCP/IP instead of SMB/NetBIOS, and connects to clients running Windows XP instead of Windows 3.11 - but it is fundamentally the same in terms of its structure and operation.

Some notes:

  1. These excerpts don't (usually) include footnotes, and most illustrations have been dropped as simply too hard to insert correctly. (The wordpress html "editor" as used here enables only a limited html subset and forces frustrations reminiscent of the CP/M line delimiters inherited by MS-DOS.)

  2. The feedback I'm looking for is what you guys do best: call me on mistakes, add thoughts/corrections on stuff I've missed or gotten wrong, and generally help make the thing better.

    Notice that getting the facts right is particularly important for BIT - and that the length of the thing plus the complexity of the terminology and ideas introduced suggest that any explanatory anecdotes anyone may want to contribute could be valuable.

  3. When I make changes suggested in the comments, I make those changes only in the original, not in the excerpts reproduced here.

Topics: Networking, Apps, Software, Operating Systems, Microsoft, IBM, Hardware, Enterprise Software, Data Management, Data Centers, Windows



  • You forgot to mention

    any bit of the Lotus 1-2-3 stories and the whole memory problem (expanded memory cards, EMS, HMS)

    And the IPX stories (Novell's Internetwork Packet Exchange).
    • And also

      the DR-DOS fake error message.

      Also the fact that M$ "stole" the BSD UNIX TCP/IP stack and changed the default packet size for pings - so the "Ping of Death" from a PC could bring down a UNIX box. I also heard stories of how M$ changed the networking code so that connections from UNIX (specifically Sun?) would run at 1/4 speed.
      Roger Ramjet
  • Internet or intranet?

    By December, 1995 there were still only 16 million internet users. That number wouldn't support a whole hardware/software ecosystem.

    The content of the discussion also appears to be referring to intranets. So I think you intended intranet rather than internet.

    "Internet" quotes:

    Running Xenix on the PC gave it access to Unix networking (by then well established in both the already declining Usenet/UUCP form and the growing internet form) using either serial or TCP/IP connections.

    Third parties, including Sun Microsystems, then ported what became known as TCP/IP "stacks" (because the software consisted of multiple layers) to Windows 3.11 to give it some internet access capabilities.

    In combination these facilities enabled departmental workgroups to share files, obsoleted the previous "sneaker net" approach, gave users access to internet resources, and quickly enabled the development of stored procedure execution engines within client-server databases.
    Anton Philidor
    • intranet = LAN

      Intranets connected workgroups. Unix used the internet from day one.
      • Not quite

        The definition of an intranet includes what it does, while a LAN is part of (along with applications) how it does what it does.

        Quoting an example definition:

        An Intranet is essentially a mini in-house Internet. We can define intranet as an organization's private, secured computer network system that uses the same concepts, technologies and protocols (standards) as The Internet, but operates on a Local Area computer Network (LAN). It incorporates a working, interactive custom environment to serve the business/organization model, with familiar Internet website-like navigation and functionality. In other words, an intranet is a corporate networked internal web site with other features like internal e-mail, news group and chat facilities etc. Just like you find in the Internet.

        I'll guess that you'll have use for the internet/intranet concepts as distinct from LANs later on. And that calling Unix connections "internet" may create some confusion.
        Anton Philidor
      • Definitely internet

        1990-91 we were accessing NASA from Sun workstations
        here in Oz. The "internet" of the time was largely restricted to
        government-ish agencies.
        Richard Flude
      • yes, IPX story

        You neglected the entire earlier workgroup PC technology that used IPX for a while. There was also something called LANtastic, which I never found that fantastic; I liked Netware Lite at the time.
  • This riddled with errors.

    Like the fact that it was NT 3.1 that was the first version to include a stack, not NT 4.0.

    And the idea that three tier architectures have a "stored procedure server".

    Or that Windows NT 4.0 is a port of VMS (and an abandonment of the NT 3.51 code base)

    At least you stopped short of spouting more crap about the Windows threading model.
    • NT 3.1

      If you can document that 3.1 contained a tcp/ip stack I'll make the change. (I thought it just had whatever MS called DECnet.)

      Bear in mind that I'm serializing this thing to find errors - like this one, if it's real.

      By the way, do you understand why your title "this riddled with errors" struck me as very expressive?
      • Here;sid=2001/6/19/05641/7357
        • Perhaps better

          Microsoft Windows 3.1x
          TCP/IP Stack Configurations

          Your link is about NT and TCP/IP, no?!
          Anton Philidor
          • it's not about whether tcp/ip could be used, it's about who supplied it

            Sun's NFS (1984!) included tcp/ip for a PC with the right card running MS-dos.

            The issue here is about when it became possible to run it on MS without third party software (like spider's or Sun's.)
          • Plus

            the automount worked from the PC! Unbelievable! M$ STILL can't do that!
            Roger Ramjet
          • The Microsoft Way

            Microsoft relies on third party software in ways that may seem inefficient to you. Microsoft also provides capabilities which you might consider better supplied by third parties.

            But aren't those different issues from when a capability became available on Microsoft software?

            If Sun supplied the capability of TCP/IP and it worked, wouldn't you be a purist to say the capability was still not available because Microsoft hadn't supplied it?

            That would be a significant issue if everyone with Microsoft software needed the capability. But here Bill Gates was years away from saying the internet was essential to Microsoft's future. At that point the company could safely ignore assuring the capability was universally available.
            Anton Philidor
        • umm, thanks, but..

          1) it says that the first MS tcp/ip was 3.5, not 3.1 - and 3.5 was a pilot release for 3.51.

          2) that the writer doesn't seem to know how streams fits into the picture casts some doubt on his other statements.

          3) I'll review the docs I have here and see if this needs changing.
      • Re: NT 3.1


        Why is the onus on me (or anyone in this forum) to prove to you that NT 3.1 had a TCP/IP stack in it?

        Since you are writing the book, shouldn't you already have documentation/reference for everything you state (unless you intend your book to be a loose bundle of more-or-less-facts)?

        Serializing it to find errors is one thing - using that as a means to do fact-finding is yet another thing. Being the science-based-computing-guy, one would expect you to have done the hard work of getting the facts right.

        Your modus operandi seems to be:
        a) State outrageous things as facts.
        b) If no one refutes it, hammer it in.
        c) Anyone refutes, push them back with curt responses.
        d) If they push back, go silent.

        How is your research into threads coming along?
        tick tock
        • Love the comment about threads

          on which you're wrong, of course.

          And in being that you illustrate why I'm asking for input - because I can be wrong too. I know, shocking, almost inconceivable, but true.

          One of the fun things here is that many sources are definitive - and differ from each other - so it's hard to get things right.

          You, for example, might want to find out what a LWP is outside the wintel world. - :)
          • A Thread of Truth?

            Sun and Linux differed on how to manage threads. I believe that Sun finally broke down and accepted the Linux thread scenario. My memory is hazy . . .
            Roger Ramjet
          • Try again Murph

            You just illustrate his point.
            He's not wrong. You should do some research on the windows side and on the solaris side.

            The best answer was posted by bob kerns, but typically you don't reply when you are shown to be wrong.
  • Whose fault?

    You wrote:

    By mid 1988 Computerworld and most of commentators outside the PC press were agreed that IBM's forthcoming OS/2 signified the end of Unix and would own the desktop by 1990. When OS/2 release 1.1 shipped in December of 1988, it proved to be a superior product, but most of Microsoft's applications wouldn't run on it, and most of the clone makers turned out to have ironclad contracts with Microsoft preventing them from adopting it.

    As a result IBM was left without a market for OS/2 outside its own loyalists in the mainframe community.

    [End quote with "t" in first Microsoft added.]

    Microsoft discovered the network effect quickly. That is: applications are written first (and maybe only) for Windows because everyone has Windows, and everyone has Windows because of the applications.
    (Judge Jackson sourly called this the Applications barrier to entry.)

    Wouldn't IBM know that OS/2 had to run applications written for Windows/DOS in order to obtain a substantial market share?

    Windows 3.0 didn't ship until May, 1990 so IBM had months competing against a product the company had partly developed.

    So if IBM chose to ignore compatibility, isn't IBM to blame for OS/2's start? And can the clone-makers be blamed for continuing with the operating system which ran the software people wanted to use?

    Microsoft had iron-clad contracts because the clone-makers wanted to sign up with Microsoft. Because of the applications. Hard to see a non-compatible OS/2 as a reasonable alternative as you describe it.

    (Semi OT)
    Historical note from 2001 encountered:

    I remember in 1997 when we were looking at the OS/2 revenue sales and realizing that NT 4.0 had killed OS/2. When Windows NT 4.0 came out, that pretty much did in OS/2, people migrated from OS/2 to NT incredibly fast. I don't think it would be an exaggeration to say that about half of the active individual OS/2 user base switched from OS/2 to Windows NT 4.0 within 6 months of its introduction. And IBM, unbeknownst to any of us, had decided to kill OS/2 before OS/2 Warp 4. Warp 4 was in the pipeline already. Gerstner, feeling betrayed by PSP (Personal System Products, a division of IBM) for the PowerPC debacle had ordered PSP eliminated and its assets split up amongst the other divisions, none of which particularly cared about OS/2.
    And finally, my last point is that from a software developer standpoint, if you want to stay in the desktop market, Windows really and truly is the only market still. I get email regularly from OS/2 and Linux users urging us to get off of Windows. Whenever there is a story about Microsoft lifting one of our features and putting it into the next version of Windows (no matter how minor) the email flows in telling us we need to get onto Linux (or come back to OS/2). The reality is, there just isn't enough of a market. From an ISV standpoint, the only real options are either Windows, PocketPC, or Palm and Palm isn't exactly innovating right now. We write for Windows because there's not that much choice and because there are great market opportunities for developers who create things to "decrease the suckitude of the OS".
    Anton Philidor