
From Chapter Three: The Windows Culture

Although Windows technologies haven't really evolved since the introduction of Windows NT, Windows management methods have, mostly through the development of approaches aimed at coping with server and staffer sprawl and through the gradual adoption of ideas from the predecessor data processing culture.
Written by Paul Murphy, Contributor

This is the 29th excerpt from the second book in the Defen series: BIT: Business Information Technology: Foundations, Infrastructure, and Culture

Roots (continued)

What has changed most substantially since 1992 is the general approach to managing servers. Originally users managed their own Windows for Workgroups servers but, as server functions became more complex and started to affect more people, server operations became the responsibility of Windows professionals, generally meaning either experienced managers from other computer cultures or people who had grown into the job as Windows technicians but had little or no actual technical education.

The release of Windows NT led to immediate and dramatic server proliferation as people tried to use small machines to run complex application code. A 1997 Pentium II running at 233MHz with 192MB of RAM could barely handle both NT 4.0 Server and a simple application like the Windows email server, Microsoft Exchange, for perhaps 20 to 30 users. In a company with 5,000 employees, that usually meant 150 or so small servers just for email.

Outlook services, Web services, firewalls, file and print services, SQL Server, and related functionality all required similar server investments, and companies quickly developed enormous server rooms with hundreds, and often thousands, of these little machines.

At a ratio of about one support technician for every 20 NT servers, PC staffing exploded along with server populations.
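
To make the arithmetic behind those numbers explicit, here is a minimal back-of-the-envelope sketch in Python. The users-per-server and technicians-per-server ratios come from the text, and the company size is the one used above; the list of additional services and the assumption that each needs a comparably sized farm are purely illustrative.

```python
import math

# Back-of-the-envelope sizing for NT 4.0 era server sprawl.
# The user counts and support ratio come from the text above; the service
# list and the "one farm per service" scaling are illustrative assumptions.

EMPLOYEES = 5_000        # company size used in the example above
USERS_PER_SERVER = 30    # upper end of the 20 to 30 users per Exchange server range
SERVICES = ["Exchange", "file/print", "SQL Server", "web", "firewall"]
SERVERS_PER_TECH = 20    # roughly one support technician per 20 NT servers

email_servers = math.ceil(EMPLOYEES / USERS_PER_SERVER)   # about 167
total_servers = email_servers * len(SERVICES)             # assumes comparable farms per service
technicians = math.ceil(total_servers / SERVERS_PER_TECH)

print(f"email servers:     {email_servers}")
print(f"total servers:     {total_servers}")
print(f"support headcount: {technicians}")
```

On those assumptions a single mid-sized company ends up with several hundred small servers and a support headcount in the dozens, which is consistent with the server rooms and staffing growth described above.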

As processors got faster and memory became cheaper, it turned out that NT 4.0 still wouldn't really let you run multiple applications concurrently on the same machine, largely because of two problems:

  1. It is virtually impossible to install two or more applications without creating a situation in which a failure in one can plausibly be blamed on the other; and,

  2. Since the most usual remedial action in response to a software failure is first a reboot and then a re-install, putting two or more applications on the same machine either inconvenienced a larger number of users, or caused the same users more frequent annoyance, or both, compared with shutting down just one application.

In combination with scale issues, these problems led to the evolution of a one-application-per-box rule, which exacerbated both server and staff proliferation.

The first solution to be widely adopted was the rackmount. Rackmounts initially let a systems group put up to eight PC servers in a single rack roughly as big as an oversize refrigerator. For obvious reasons rackmounts quickly gained in popularity, and consequently in density, reaching a typical 42 machines per rack by late 2002.

Since then the rackmount has evolved into the blade server, in which a single rack is modified to resemble a single chassis and machines, now called blades and consisting of little more than CPUs, memory, boot ROMs, and network connections, are placed in the rack and centrally administered. Blade servers now routinely put 256 PC equivalents in a rack, and higher densities are being advertised.

Rack and blade servers have limited room for disk drives, so a separate solution was needed for the storage problems associated with server proliferation. That solution started out as the Storage Area Network, or SAN.

Originally, companies like EMC made stand-alone disk packs that could be connected to mainframe, mid-range, or Unix systems via local cabling. These packs generally had their own CPUs and memory and acted to emulate simple physical disk drives while internally combining the speed benefit of extensive local caching with the data-protection benefits of RAID storage.
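
The speed benefit of that caching is simple expected-value arithmetic: if most requests are served from the pack's RAM, average access time collapses toward RAM speed. A minimal sketch, with purely illustrative hit-rate and latency figures:

```python
# Toy model of why a cache-fronted disk pack can respond faster than a
# direct-attached drive. Hit-rate and latency figures are assumptions.

def effective_latency_ms(hit_rate: float, cache_ms: float, disk_ms: float) -> float:
    """Expected access time when a fraction hit_rate of requests hits the cache."""
    return hit_rate * cache_ms + (1.0 - hit_rate) * disk_ms

internal_disk = effective_latency_ms(hit_rate=0.0, cache_ms=0.1, disk_ms=12.0)
cached_pack = effective_latency_ms(hit_rate=0.9, cache_ms=0.1, disk_ms=12.0)

print(f"bare internal disk: ~{internal_disk:.1f} ms per access")
print(f"cache-fronted pack: ~{cached_pack:.1f} ms per access")
```

Even a modest hit rate makes the cached pack look close to an order of magnitude faster than a bare drive, which is the first of the three factors listed below.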

SANs were particularly attractive for PC server environments because of the interaction of three factors:

  1. Most PC servers have cheap, and therefore slow, buses connecting disk drives and memory. As a result, a SAN usually offered faster disk response than an internal disk;

  2. One of the biggest problems facing PC server management is backup. Simply tracking internal disks in thousands of rack-mounted machines is difficult; ensuring that all are backed up to tape or to alternative run-time machines is nearly impossible; and,

  3. Most PC staffers are not deeply technical, and hiring people qualified to manage large server farms is correspondingly difficult. SANs offered the opportunity to reduce staffing levels and work complexity by aggregating backup and OS reloads.

In general SANs were widely accepted and effective, but it turned out that the investment in skill and effort needed to run them was compounded by both PC clustering (using several PC servers to handle one load) and the 1:1 links between SAN devices and servers.

That made the next step obvious: turn the disk pack into a disk server with its own CPU, memory, and OS, and use a network connection in lieu of a bus connection.

With the right software, other computers could then access this machine as if it were a real disk drive, and managers could set these disk servers up to serve as many machines as there were network connections for, thereby nicely replicating the original Novell NetWare of the early 1980s for the Windows 2000 and later server environments.
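
From the client's point of view the result is simply a path that happens to live on someone else's hardware. Here is a minimal Python sketch of the idea; the drive letter, host name, and share name are hypothetical, and the network path will only resolve on a Windows machine that can actually reach such a share.

```python
from pathlib import Path

# To application code, storage exported by the disk server looks like an
# ordinary file system path. Host, share, and drive names are hypothetical.

local_file = Path(r"D:\data\report.txt")                   # internal disk
network_file = Path(r"\\diskserver01\shared\report.txt")   # exported by the disk server

for f in (local_file, network_file):
    try:
        f.write_text("same code path, different physical storage\n")
        print(f, "->", f.read_text().strip())
    except OSError as err:   # directory, drive, or share not reachable here
        print(f, "->", err)
```

The same illusion is what the NAS architecture later standardized over TCP/IP, as described next.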

Originally all such connections were proprietary to the companies that sold them, but eventually they morphed into a new form, called the Network Attached Storage, or NAS, architecture, in which the proprietary software is layered over a standard TCP/IP connection and an upgraded switch.

By late 2001 most larger companies had so many NT or Windows 2000 servers in rackmounts and wiring closets scattered across their offices and other working premises that simply keeping track of them became difficult. At that time, too, Microsoft introduced a new licensing model (Licensing 6.0) that combined harsher penalties for failure to upgrade to new system releases as they became available with higher overall costs and considerably more emphasis on finding and punishing unlicensed users.

The initial result was a strong move to server consolidation that peaked in late 2004 before giving way to today's fad (starting in late 2006): virtualization. Thus there now seem to be three working solutions:

  1. Eliminate the problem by converting to Unix servers and applications in order to consolidate many applications to a small number of machines;

  2. Convert to larger Intel-based PC servers running Windows Advanced Server and consolidate some user loads or several applications onto each remaining machine; or,

  3. Adopt virtual machine technology with Windows 2003/XP and later releases to isolate loads from each other (a rough sizing sketch follows this list).
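
The arithmetic behind the second and third options is the same: many lightly loaded one-application boxes can share a few larger hosts. In the sketch below the utilization, capacity, and headroom figures are illustrative assumptions, not measurements.

```python
import math

# Rough consolidation arithmetic for options 2 and 3 above.
# All figures are illustrative assumptions.

small_boxes = 150        # one-application-per-box servers to be retired
avg_utilization = 0.08   # fraction of each small box actually in use
host_capacity = 8.0      # one larger (or VM) host ~ eight small boxes of raw capacity
headroom = 0.6           # plan to load each host to only 60% of its capacity

demand = small_boxes * avg_utilization                        # in small-box equivalents
hosts_needed = math.ceil(demand / (host_capacity * headroom))

print(f"aggregate demand: {demand:.1f} small-box equivalents")
print(f"hosts needed:     {hosts_needed}")
```

On those assumptions roughly 150 "pizza boxes" collapse onto a handful of larger machines, which is essentially the trade described in the Conseco example below.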

Of these, the first is generally unacceptable to Windows people, and the third is now popular but was originally essentially limited to people from the mainframe culture. Consider, for example, this from an article by J. Vijayan:

Until recently, Conseco Finance Corp., like many other organizations, was struggling to find a way to control the rapid proliferation of small Intel-based servers across the company.

The systems, which were being purchased at the rate of about one per week, were being used to run a variety of very small Windows-based applications, such as domain controllers and encryption and antivirus services.

In many cases, the Intel servers that were being purchased were severely underutilized because the applications required only a fraction of the available resources, says Rod Lucero, chief IT architect at St. Paul, Minn.-based Conseco Finance.

But because of the technical challenges involved in running more than one application on a single instance of the Windows operating system, Conseco had to buy individual servers for each new application, says Lucero. "We really wanted to find a technology that would not require us to go out and buy little pizza boxes every week," Lucero says.

So nine months ago, Conseco started using virtualization technology from VMware Inc. in Palo Alto, Calif. VMware sells software-based partitioning products that allow users to take a single Intel box and carve it up into multiple smaller "virtual" servers, each of which can run a separate Windows or Linux application. (Computerworld, Nov. 25, 2002)

In reality the licensing cost for VMware's GSX Server, about $3,000 per instance at the time, easily exceeded the cost of buying a separate machine; but Conseco was an IBM reference site, the discussion took place in an IBM xSeries sales context, and Computerworld caters to the IBM community.

Some notes:

  1. These excerpts don't (usually) include footnotes, and most illustrations have been dropped as simply too hard to insert correctly. (The WordPress HTML "editor" as used here allows only a limited HTML subset and imposes frustrations reminiscent of the CP/M line delimiters carried into MS-DOS.)

  2. The feedback I'm looking for is what you guys do best: call me on mistakes, add thoughts/corrections on stuff I've missed or gotten wrong, and generally help make the thing better.

    Notice that getting the facts right is particularly important for BIT, and that the length of the thing plus the complexity of the terminology and ideas introduced suggest that any explanatory anecdotes anyone may want to contribute could be valuable.

  3. When I make changes suggested in the comments, I make those changes only in the original, not in the excerpts reproduced here.
