
How and why to use Sun Ray

Written by Paul Murphy, Contributor

I've just moved from Ontario back to my home province of Alberta - where the Sun shines, the air is clean, and the economy is booming.

During the move we ran a series of eight chapter summaries, or notes to myself, for a possible book on how and why to use Sun's Sun Ray thin clients and smart displays along with a guest blog written by frequent contributor Roger Ramjet.

The point of doing this was to request comment, guidance, and alternative ideas - and I got some interesting stuff, but I know more people have more to say and so this is being posted to askbloggie in a kind of reversal: I'm asking you for help, instead of having you ask me for help.

What I'm hoping you'll do is show me the errors of my ways - the counter arguments and as many of the things I've missed as possible. You do not have to be kind; what I'm looking for is reasoned argument, not necessarily support.

The biggest problem I want to address arises from the fact that corporate IT management is dominated by data processing people and their Windows successors - meaning that their instinct is to use smart displays as thin clients in an attempt to recreate the 1970s mainframe systems and therefore that users will fight tooth and nail to keep their PCs instead.

So the subtext on every page, in every paragraph, and in every sentence is going to be that things don't have to go that way - that the right way to implement the Sun Ray in business, and by extension any smart display architecture, is to start with the centralized processing characteristic of the thin client approach but then adopt more flexible software and radically decentralize control to get to the full smart display architecture.

So every chapter will come in two main sections: one on just using Sun Rays as thin clients and the other on going ahead and benefiting from the other two parts of the smart display architecture: the power of Unix, and the devolution of IT control to users.

As a result this book, if it gets done at all, will be about making both transitions happen: the thin client one and the management one; about how to plan for this, how to implement it, how to justify it, and how to make sure users grab and hold control when it shifts to them - because otherwise the next guy hired will promptly take the whole organization back to the same 1970s data processing mode Wintel is evolving toward.

 

Chapter 1: Introduction

This chapter is intended to define and describe the thin client and smart display architectures as implemented using Sun Ray technology and either Linux or Solaris on either x86 or SPARC.

The key points to be made include:

 

  • Sun's hardware and software combination (servers plus Sun Rays) can be used in most existing organizations without making management or directional change. The Secure Global Desktop, for example, can be applied in a Windows environment as nothing more than a better, cheaper, Citrix. Doing this produces the usual thin client benefits - enhanced manageability, better auditability, and reduced heat and noise generation on the desktop.

     

  • There are Sun Ray laptops for use in WiFi environments.

     

  • The most common application for the Sun Ray is in providing multiple service access in high security environments. The combination of Solaris zones on the server side with the inability of the user to corrupt the display device makes it a winner where national security is at stake and trusted computing is an absolute requirement.

    The Sun Ray 2FS extends this by offering fibre to the desktop - making it much harder to disrupt or tap communications between the server and the user. This capability has a civilian use too: in production floor or other harsh environments where electro-magnetic fields can have unpredictable consequences for both wire based and wireless networking.

     

  • The "eco-friendly" conformant environment may provide an emerging market opportunity for the Sun Ray SMP/CMT combination.

    Energy companies, for example, might want showcase executive offices with high end desktop support at a small percentage (10 - 15%?) of traditional energy requirements and therefore costs - e.g. one Sun Pod with a T2000, an 8-way AMD, and a disk pack plus several hundred Sun Ray LCD desktops vs several hundred PCs and their servers.

    Companies that own and rent office towers may be an even better market - because for every dollar their tenants spend on PC power, they spend another fifty cents cooling the building. Thus a building manager who's looking at daylighting as a cost cutting technology might want to piggyback a Sun Ray deal for tenants - creating a shared smart display system for the building in which tenants take ownership of their pieces.

     

  • The smart display architecture has three layers: hardware including networking gear, software, and management. Thus thin clients are about cheap and reliable service delivery; but once you have that in place, going to the next step is about decentralizing IT decision making and empowering your sysadmins to act directly on user requirements.

     

  • From a hardware perspective the key issue is over-kill capacity. You want a lot of standby computing power available to provide the fastest possible response to user requests - and you do not want systems interruptions or resource limitations constraining what users can do or when they can do it.

     

  • From a software perspective you want to be able to maintain virtually perfect reliability, essentially eliminate barriers to user experimentation with existing or new software and eliminate spam, viruses, phishing, and related security issues as user concerns. That means Unix SMP, it means migrating application support into the user community, and it means allowing open source code into production environments.

    Notice that the absence of both the desktop PC and the networking complexities that go with it makes it possible to entirely eliminate help desk functions by moving first level applications support into the user community. Your sysadmins will get, and field, home user and related questions but you should generally consider those as part of the user relationship, not as an IT organizational function.

     

  • From a desktop provisioning perspective the key thing about the Sun Ray hardware is that it uses almost no desk-space while producing no noise and little heat. An extreme example: where power rates are sufficiently different between business hours and the midnight demand trough hours, you could charge a typical PC sized UPS every night, and run your Sun Ray on it all day.

     

  • From a user productivity perspective the key thing is an odd interaction between reliability and larger screen sizes. What happens is that users unlearn the distrust the PC evokes in terms of reliability, and start to take advantage of the larger screens to do things like park monitoring applications in their personal visual spaces, start to do more cut and paste between applications they can see, or just start to make more effective use of the available information simply because they can see more of it.

     

  • The central executive issue here is that the smart display architecture requires the CIO to provide leadership - getting more brains focused on a job - not management: co-ordinating more hands on a job.

     

  • Success can be a career killer - because success equals corporate invisibility.

     

    • What happens here is that squeaky wheels get the grease - or at least the executive face time, the budget increases, and the staffing appropriations. Succeed with smart displays and you become invisible to senior management: no more IT crisis meetings means no more face time; no cost overages means no more budget increases, user control means no more span of control increases, and so on.

       

    • What often happens when the system works is that senior management assumes it's easy - and the next guy they hire as CIO will destroy the structure - and ultimately lead senior management to out-sourcing when budgets run amok and users revolt against the newly dysfunctional IT department.

     

  • The career enhancing response is to drive IT into the revenue side of the business. Once you achieve systems reliability and get your sysadmins acting as user advocates to drive day to day IT control out to user management, you become invisible to higher management - because IT no longer squeaks. But, you also get the opportunity to work with those user managers to find new ways to drive revenue - and that enhances your corporate profile in ways the traditional business of balancing on the thin edge of continuing failure cannot.

 

Chapter 2: Costs Relative to Client-Server

This section introduces the three cost scenarios to be used throughout the text and explains the idea that a big organization, like an FAA or a Fortune 5,000 company, consists of islands of IT opportunity that can't simply be forced into a single mold but have to be transitioned at different rates and different times depending on the people in place at the beginning.

Thus the example scenarios involve:

 

  1. a 500 user faculty system;

     

  2. a 5,000 user manufacturing company with three major sites; and,

     

  3. a 30,000 user international professional firm with about 500 offices.

Each cost comparison looks at cost issues relative to the Microsoft client-server alternative with respect to capital, maintenance, support, networking, telecom, HVAC, and business risk.

In addition each scenario will be discussed with respect to:

 

  • Minimum sensible networking options including telecom integration and appropriate administration tools;

     

  • Server sizing - AMD, USIV+, and T1 now; USV/APL and T2 next year.

     

  • Specifics of the Sun Ray 2[FS] and 270 displays. Resolution, power use, noise, reliability, wiring options, theft/vandalism issues and the importance of screen sizes and types with respect to readability, comprehension, and retention;

     

  • IT staffing and organizational charts for both the thin client stage and the full smart display architectures;

    The emphasis here is on the staffing consequences of cross training and job expansion with the smart display approach. The key difference is that the traditional IT organization is largely about what staffers cannot do: they have roles they cannot be allowed to go beyond. Thus someone with Solaris administration, RDBMS, and networking skills will be frustrated every day by the limitations imposed by the job description.

    The smart approach reverses this: using cross training within teams and ad hoc role assignments to ensure that all available skills are tapped for the common goal - with two key results:

     

    • greatly reduced turnover; and,

       

    • the virtual elimination of mistakes and omissions.

     

  • Organizational change - change processes for the pre-existing wintel organization as it adapts first to thin clients and then to smart displays.

     

  • Key implementation issues

     

    • for a 500 user academic system in a single location:

       

      • Windows support for visiting and change resistant faculty and students;

         

      • Windows support for Wingot proxy licensed services (like Lexis/Nexis access);

         

      • Home user integration, xDSL opportunities, and off-campus computing;

         

      • Campus wiring (high threat ==> use optical where possible)

         

      • Faculty or group specific zones with assigned primary/secondary admin responsibility for each one;

         

      • DHCP/wireless access issues;

         

      • Unusual functionality comparisons - e.g. on the fly compilation in the classroom and session continuation across locations.

       

    • for a 5,000 user integrated manufacturing business with three major locations and a half dozen marketing offices:

       

      • no down time tolerable ==> redundant data centers (one at each site) and cross trained teams.

         

      • no down time tolerable ==> use of N1 to map resources to demands;

         

      • SOX auditors trying to re-impose organizational controls from the 1920s;

         

      • conflicting user agendas can lead to IT working against itself: solution is to use Solaris zones to isolate groups if and where necessary, support significant process variation between groups but maintain database consistency.

         

      • data center distribution, cross training requirements, and open source support as the answer to the odd problem that the better the staff get, the less work they'll have to do.

         

      • balance production users vs wintel support in finance (may require additional staff);

         

      • assume Oracle financials/ERP for costing and sizing purposes; assume 10% custom screens added internally over first three years.

       

    • for a 30,000 user world-wide professional services firm with 500 offices:

       

      • system sizing and layout: the company is big, the offices are small; but national groupings face locally unique professional practice and data control issues. Hence mix of small and large servers; rigid client/firm data differentiation; mix of Sun Ray device types to support portability; mix of data centers to support service continuity without forcing data across jurisdictional boundaries.

         

      • Extensive Wintel integration issues: no wireless, some DHCP, rigorous document and data security - emphasis on Sun Ray physical security for data - stolen devices become a non issue.

         

      • Extensive remote access issues (from client sites and other offices) including legal complications related to allowable data movement and the imposition of local procedural rules on remotely accessed data. E.g. Access from Canada to data in Germany proceeds under EU data access rules, not Canadian ones.

       

    • Draw summary conclusions applicable to smart display vs client-server in all three groups - look particularly at user productivity, risk-to-data issues (transfer of responsibility for meeting both legislated and actual responsibilities for client data from professional staff to system).

    Bottom line for thin client: cheaper, safer, faster.

     

    Chapter 3: Why change (IT perspective)?

    This section extends the three organizational scenarios from the previous piece to discuss the risks and benefits associated with the decision to adopt a smart display architecture.

    Part one looks at what happens if you use Sun Rays as thin clients - i.e. change your desktops but don't change your management style or software. In effect this becomes a recital of thin client benefits including:

     

    • data stays at the server no matter where the employee is - important for transnationals who need to comply with local information control legislation and for professional firms who need to ensure client data security.

       

    • lower support, power, and hassle (replacement, breakdown, space, cooling, noise) costs on the desktop;

       

    • auditability and process control (mainframe virtues without the cost)

       

    • multiple source service delivery
      • concurrent or easily switchable Unix and Wintel application access;

         

      • cheap large screen support - e.g. dual head Sun Ray;

       

    • ability to access both CICS/COBOL and Unix apps eases mainframe to Unix transition;

       

    • least likely to fail in nasty, noisy, EM field polluted production environments;

    Part two looks at what happens if you adopt the full architecture. In particular the focus is on what this does for the user and therefore on what it looks like to senior, middle, and line management.

    The difference is this: using the Sun Ray as a thin client generally nets out as making IT more productive while using it in a smart display architecture generally nets out as making users more productive.

    User productivity trumps all non-security, non-corporate-risk issues - and it doesn't take much to make a big difference: a 1% user productivity gain trumps a 50% IT cost decrease at most companies.

    The productivity benefits to thin clients come from easily measured things like reliability and systems performance - those from smart display use tend to be much larger but also much more nebulous. For example:

     

    • user ownership "now" signals user commitment "going forward".

       

    • system wide "unbreakability" and the absence of social costs for users needing help leads to experimentation with software - reversing the usual ERP/SCM decline in which incoming employees use ever shrinking fractions of total available functionality.

       

    • the most basic thin client benefit is that the absence of the PC OS and related networking complexities removes most of the ambiguities from desktop issues. Hardware fails, rarely but obviously; cables come loose, and some software changes have unexpected consequences - but when these things happen the source of the problem is usually obvious: no reboots, no desktop software, no network tracing, no issues with indirect server dependencies.

      And all this has a smart display corollary: it should quickly become obvious to everyone that IT's job is to deliver a working application, but it's the user's job to use that application.

      Within the smart display architecture, therefore, you move first line application support out of IT entirely and make that a user community responsibility. That reduces IT cost but also has the far more valuable effect of making more of the application known to more users more of the time - because help comes from peer group domain experts and social barriers to learning are reversed.

       

    • The use of large scale SMP provides very fast response no matter what the user throws at the system. Fast response breeds confidence - or, more accurately, systems which don't always produce results quickly lose user confidence.

       

    • the IT/user relationship leads to throw-away applications as people try to implement new ideas either with new software or by using existing software differently.

      For example, the adoption of open source software, along with active IT management support for open source effort, gives the organization easy access to thousands of niche applications, many of which fit, or can be quickly made to fit, organizational niches.

      The key counter-argument, the traditional burgeoning support load, doesn't apply because this stuff is throw-away: i.e. if external support fails or change demands accumulate beyond what the package can easily be stretched to do, you look for a replacement, convert the data, and throw the original away.

       

    • The reward structure for the IT organization changes from a focus on delivering an acceptable service at the lowest possible cost to one of partnering with users to explore and deliver revenue generating ideas and services.

    The key to the change is that combining unified applications with the distributed management architecture turns IT into an "always on" shared information resource - and that, in turn, removes the social and organizational impediments to productivity that go with Wintel client-server.

    Bottom line for smart display: improved user satisfaction, greater application usage effectiveness; improved effectiveness per IT dollar; and, the potential for IT driven revenue growth.

     

     

    Alternative View: Distributed Computing - The promise unbroken

    By Roger Ramjet

    Note: I asked frequent contributor Roger Ramjet to write a guest blog presenting and defending his idea that distributed Unix makes more sense than my smart display approach. Here's his contribution - read it carefully and I think you'll see we're not so far apart even if he is right to describe distributed computing as the real client-server model.

    ---

     

    In the beginning, there were expensive mainframes with dumb terminals. Then evolution produced not-quite-so-expensive UNIX computers and mini and micro "frames", and in the end inexpensive PCs crawled out upon the land. Companies purchased these computers depending on their needs, and were always looking to lower their costs. What they ended up with were islands of functionality where mainframes, UNIX and PCs each had their own exclusive domains.

    Then came the network. It was a way to connect all of the islands together and leverage the strengths of all computers connected to it. It had great promise and companies like Sun recognized this fact in their "The Network IS the computer" mantra.

    UNIX took to the network very quickly, while mainframes and PCs lagged WAY behind. UNIX computers began popping up in all shapes and sizes (and prices). The mainframe-like UNIX server "box" was soon joined by smaller desktop/deskside units. These UNIX "workstations" had all of the abilities of their larger server brethren albeit with less capacity. So an idea was born where processing would be done across the network, where UNIX servers "on the platform" would do the heavy lifting, while the UNIX workstations would manage the requests and post-process the results. This is the genesis of the client/server philosophy and the distributed computing model.

    In my own "perfect world" this reality would have been joined by both the mainframe and PC camps, where you would be able to mix and match which computers to use for whatever purpose (see DCE). But that never happened. Fishkill had exclusive control over the mainframes, and their networking offerings really sucked. Redmond had exclusive control over the PC world and they took a long time to boost capability. Meanwhile, shared 10BaseT networks, UNIX forking/infighting and $75,000/seat 40Mhz UNIX workstations ($30,000 for a 21", 40Mhz, 16MB, SPARCII+ with 2 x 105MB disks - Murph) diminished the allure of client/server.

    Today we have 10gigE networks and a "free" Linux/Solaris (common) OS that runs on cheap PCs (and huge servers) yet client/server is all but dead. Just when the philosophy makes the most sense, people are sick and tired of waiting. Mounting file systems over the network via NFS certainly sucked 10 years ago with shared 10BaseT, so we went and created a whole new storage network (SAN). Never mind that NFSv4 over gigE today is reliable and secure and FREE (built-in) as opposed to building a whole new infrastructure of fiber, switches, HBAs and high-priced storage (not to mention additional cost software to handle things like dual-pathing and fancy filesystems).
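    For anyone who hasn't looked at this since the 10BaseT days, here's roughly what that "built-in" alternative looks like - a minimal sketch with placeholder hostnames and paths, assuming Solaris 10 on the server and either Solaris or Linux on the client:

        # On the server: make sure the NFS service is running, then export a directory
        # (NFSv4 is what Solaris 10 offers by default; add it to /etc/dfs/dfstab to persist)
        svcadm enable network/nfs/server
        share -F nfs -o rw,nosuid /export/projects

        # On a Solaris client: mount it over the ordinary gigE LAN -
        # no HBAs, no fabric, no extra-cost multipathing software
        mkdir -p /projects
        mount -F nfs -o vers=4 bigserver:/export/projects /projects

        # The same share from a Linux client:
        #   mount -t nfs4 bigserver:/export/projects /projects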

    The way it should be

    Murphy is a strong proponent of utilizing thin client/server architecture to create the next IT paradigm. I have reservations about doing this as it can lead to:

    • Vendor lock-in - SunRays require Sun software and Solaris to work. Everyone else has their own thin clients and restrictions.
    • Difficult capacity planning - The old N+1 problem, how many SunRays per server (20) and what do you do when you need one more SunRay connected (#21)?
    • No off-line capabilities - When the network goes down, so does your SunRay.
    • Monoculture - It's not apparent to me how you would use this architecture in a heterogeneous environment with PCs and mainframes. It looks like an island to me.
    • Proprietary configurations - Modifying server failover software requires expertise.

    My own philosophy on the next IT paradigm starts with strong rules (standards, processes, procedures) with few exceptions. I have seen too many instances where people start getting accustomed to going around the system to get things done (read "cowboy" sysadmins/app developers) - which leads to a host of problems. With the proper standard configuration and Namespace standards, each *NIX machine in a distributed environment can be set up in the same way. AIX, HP-UX, Solaris and Linux (Unicos, IRIX, Tru64, OSF1, Ultrix, ...) machines can co-exist - and client/server can become a reality.

    Once this paradigm is implemented, if you need a high-powered UNIX workstation to do CAD/CAM/CAE at your desk - get one. If you only do some e-mail and document writing, use a cheap PC running Linux. If you need to create a 20 billion record database and need high performance - buy the big iron. But you log in the same way, have the same locations for files, leverage tools that work on every box, have a consistent backup policy, etc. etc. etc (the list of advantages is long).

    By utilizing the system automounter, you can have load-balanced, highly-available (served) applications and single user account directories (served) that follow you to where you log in - on any machine on the network. By caching your apps locally, you can do your work even when the network is down (although you would need a local user account - a design decision). In order to master this whole thing, you would need to do what I did - read the automount man page. This was the promise of client/server and distributed environments - and it can be realized. All it takes is people that are willing to follow the program. No new fancy technology required.
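    To make that concrete, here's a minimal sketch of the maps involved - server names and paths are placeholders, and it's the replicated read-only entries that provide the load balancing and failover:

        # /etc/auto_master - the automounter's top-level map
        /home    auto_home    -nobrowse
        /apps    auto_apps    -ro,nosuid

        # /etc/auto_home - every user's home directory follows the login
        # to whatever machine it happens on
        *        homeserver:/export/home/&

        # /etc/auto_apps - replicated, read-only application trees; the
        # automounter picks an available server and fails over if it dies
        eda      srv1,srv2,srv3:/export/apps/eda
        office   srv1,srv2:/export/apps/office

    Local caching of those application trees (CacheFS on Solaris) is one way to cover the "work when the network is down" case described above.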

     

    Chapter 4: The Sun Ray Server

    The longest single section in the book, by page count, will be the technical reference chapter dealing with the specifics of connecting, configuring, and managing Sun Rays. Drawn mainly from Sun's own documentation and with the usual absurd levels of redundancy that go with covering this material separately for Linux on x86 and Solaris on SPARC (with asides for Linux on SPARC and Solaris on x86), this will get pretty boring.

    It's also essential to this kind of guide because the book's intended to sell to sysadmins who then get their bosses to read other bits of it....

    My problem, of course, is that just including the obvious isn't going to be enough. So if I give you the list below as stuff I've thought about including, can you tell me what I've missed or should emphasize?

     

    1. Stand alone server set up and admin issues including capacity guidelines;

       

    2. desktop display power and networking requirements, sparing recommendations, and related issues.

       

    3. N1 and alternative auto-failover server set up and admin issues including tools for log review;

       

    4. Sun Ray Server (software) integration with identity management and the Sun Enterprise Suite;

       

    5. RDP, Citrix, Secure Global Desktop, bandwidth, WiFi use, and related Wintel integration issues;

       

    6. Use of Solaris zones/containers to give ad hoc and organizational groups their own (virtual) computers (see the configuration sketch after this list);

       

    7. Smart card uses, programming, and global distribution;

       

    8. Issues affecting home ADSL/Cable user support.
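    To make item 6 concrete, here's a minimal sketch - zone name, address, and network interface are placeholders - of what giving a workgroup its own virtual machine looks like on a Solaris 10 server:

        # define the zone and its network presence
        zonecfg -z marketing 'create; set zonepath=/zones/marketing; set autoboot=true; add net; set physical=bge0; set address=192.168.10.50/24; end; verify; commit'

        # install a root file system into it and boot it
        zoneadm -z marketing install
        zoneadm -z marketing boot

        # attach to the zone console to answer the initial system identification questions
        zlogin -C marketing

    Each group gets what looks like its own machine - its own root password, its own application set, its own reboot schedule - while the hardware, patching, and backups stay consolidated on the Sun Ray server.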

    In addition this chapter is the place to put a "Tips" section that provides answers to a bunch of FAQs - like how to do follow-me printing, sidestep DHCP, or manage multiple color spaces on the same device - as well as links to places people can get help - user groups, developer blogs, BigAdmin, and so on.

    It's not clear as I write this whether Sun's recently announced re-entry into the blade/rack business offers an easy way to order up a 'Sun Pod' - a rack with the UPS, network gear, AMD, and T1 servers all nicely integrated. One way or the other, however, this chapter will have a subsection on setting up that type of pod.

     

    Chapter 5: Managing the Sun Ray business architecture

    There's an old saying to the effect that a fish rots from the head, but the reverse is also true: as a CIO you can't drive smart display adoption without first adopting key ideas yourself. This chapter is about that change.

    That makes it the hardest chapter to write because I'll be dealing with intangibles and challenging the beliefs people have built their careers on.

     

    • There's an introduction:

      The big thing here, the killer issue, the one big step most people never make, is the changeover from management to leadership. Think of management in terms of getting more hands on a job and then co-ordinating their activity: want a dozen workmen to lift a beam into place? That requires management: someone to assemble the workmen, tell them what to do, and then yell "heave" at just the right moment.

      Leadership is very different from that: it's about getting more brains focused on a job, not about co-ordinating activity. If you're managing a group of workers they'll never get that beam in place without someone yelling "heave" - but a team leader doesn't have to be there to know that if a beam is needed, his people will get it in place.

      In fact a good leader is like a good sysadmin: someone everybody knows, but who seems to do nothing - except that things work after he's sat around doing nothing for a while.

      Someone whose management style reflects a Wintel or data processing heritage can successfully implement a thin client solution for some or even a majority of desktops - but you cannot get to the smart display environment without changing the way IT works and therefore the way you work.

       

    • There are the thin client organization benefits:

      "Going thin client" has significant organizational consequences for the CIO: you'll spend less time worrying about licensing conformance, auditability, or what users are sneaking into the office. More significantly, at least from a budget perspective, the people who repair or replace PCs can go away along with a good chunk of your help desk staff - and if you're big enough to have management infrastructures in both groups you can simplify some of those right out the door.

      The whole PC management, DHCP, and authorized load-at-boot business goes away.

      Identity management can be standardized and SOX auditor concerns can be more easily and naturally addressed -i.e. without pretending you know or can control what users do on, or with, their laptops.

      Downstream you can also look, again depending on how big your IT operation is, at simplifying networking - because PC to PC local communication needs are gone and the mantra that faster is better translates in networking terms to simpler is better.

      Basically these are all standard thin client benefits - you reduce your risk, your system complexity, and therefore your front line staff and can ultimately thin out the lower level management ranks accordingly. You do not, however, change the way you do business: rollouts go away, but software change processes don't actually change except at the trivial end of physical desktop testing - and neither does your reporting structure or your relationship with the corporate IT steering committee.

      Think of a thin client transition as a way of meeting your fundamental services provision mandate at lower total cost and you've got it: the desktop changes, a few people go away, but services don't change and neither do you.

       

    • and then there's the change

      Things change dramatically, however, when user response to the IT controls that go with thin client and traditional thinking pushes you to adopt the smart display architecture - i.e. to counterbalance processing centralization with control decentralization.

      Right now, if you're a typical CIO your primary stance is that of the resource custodian or manager: discharging a corporate responsibility to provide a service at a contained cost. With a full smart display system that changes as you become a partner on the revenue generation side and cede directional control on the services side to user management. Basically this turns the traditional IT department inside out: instead of facing inwards as guardians of data and processing resources, you face outwards and push cycles and support at anything with a glimmer of user management support.

      Thus the nature of the CIO role changes: from custodian to evangelist; from management to leadership. When that happens your organization changes too: most of the people you have to manage now simply go away - no help desk, no PC networking, no PC repair or upgrade hassles; your middle management structure changes from one based on blocks of people filling narrow roles like "help desk manager" to one based on individuals working directly with other individuals. That changes your job too: you exchange "reports" for responsibilities as the people who substitute for you in direct employee supervision go away and your sysadmins morph into team leaders - meaning that your role becomes that of the facilitator smoothing their interactions with each other, with people selling you stuff, and with user management.

       

    • Basically it's a whole different ball game and everything you already know is probably wrong:

       

      • with respect to server sizing and positioning (low utilization is good!)

         

      • with respect to recruitment, cross training, and team building (you want to eliminate job boundaries)

         

      • in terms of disaster prevention and organizational controls (e.g. service level agreements and disaster recovery plans make no sense, but you can generate substitutes on the fly when these are needed to satisfy external auditors);

         

      • and in terms of appropriate software and run time models (application code becomes a throwaway good but everything comes down to the database);

      This is the "break it" chapter for most people - because getting users to control IT is counter to everything "we" know for sure - especially if we were raised in the data processing tradition or learned the IT management trade through struggles with Windows. It's the IT role that changes first, the methods change as a consequence of that - thus ideas like helping users fill niche functions with niche software are absurd in the traditional organization, but an everyday thing in the smart display world.

    One major issue that probably should get introduced in this chapter involves the role of the external auditor as a drag on IT change. Auditors, particularly those with IT certification based on data processing standards, tend not to know much about IT delivery or technology and so try to impose processes that are decades out of date. Thin clients make it easier to meet these kinds of process expectations, but doing so sets up long term conflicts between IT and the user community and is incompatible with smart display objectives.

     

    Chapter 6: Evolution, risks, controls, and strategies

    The thin client transition offers considerable benefits in terms of processing risk reduction, auditability and the imposition of relatively low level usage controls. Indeed the biggest organizational risk incurred by this transition results from the fact that it speeds up the move to centralized computing and allows the IT group to implement virtually the entire panoply of mainframe community developed, and so data processing centric, CoBIT controls.

    That's a big plus if you're a traditional data processing manager, a SOX compliance officer, or a forward looking Windows manager who sees that this is the way Microsoft's client-server is heading. On the other hand, it's a complete disaster from a user management perspective because it centralizes IT control subject to Finance, distances user management from IT decision making through multiple layers of committee and budget processes, and reduces the credibility of the threat to "do it ourselves".

    The informed CIO's goal, therefore, in directing the thin client transition should be to prepare the way for the smart display world while ensuring that a reversion to data processing ideas doesn't happen. Among other things that means staging the transition - adopting thin clients for particular departmental areas or functions, letting decision making devolve to those users and starting that team down the path to revenue partnership before adjusting IT staffing and moving on to the next group.

    A big part of the problem here arises because accounting ideas are extremely stable over time while IT ideas are not. Thus our idea of a system wide control is fifty or more years ahead of theirs - from their perspective a control is simply a policy guaranteeing the predictability of some business process and therefore dependent on the organization chart and management action rather than technology.

    The narrow job roles in data processing started out, for example, in the 1920s as organizational design (read: org chart) ideas on making it hard for data processing to provide executives with falsified reports - something that happened as recently as the late 1980s when a leading Canadian business bank was driven into bankruptcy largely because its own people covered up a continuing systems failure.

    The primary and therefore most fundamental CoBIT controls today are purely paperwork based: the service level agreement, the disaster recovery plan, the waterfall documents on applications, and so on, can often pass all audit checks even though IT is completely out of control and within months of bringing the entire organization to a standstill - auditors, for example, were unable to see anything of significance going wrong as Canadian federal bureaucrats and their high powered consultants spent an estimated two billion dollars - including an estimated $300 million on software development - on a fundamentally trivial gun registry.

    Thus the thin client edge in accounting controls appears mainly at the secondary, or derived, control level - things like process logging, reporting hierarchies, and server operator credentials.

    In reality, however, that's not where the security advantage is: it's in the fact that the Solaris/SPARC servers you mostly see are largely unaffected by external attacks and, for the paranoid, there's the benefit that it's simply impossible for a user to do something on a Sun Ray which can't be logged, tracked, or tied to an alert of some kind - something that only makes sense in a national security context because actually doing this in businesses operating at normal security levels tends to be severely dysfunctional.

    Thus the most important internal control in reality is that most OS errors and almost all attacks are eliminated from consideration and IT can know, with certainty, that the only people accessing applications or data are the people who are authorized to do so.

    Unfortunately, while it's true that the CoBIT controls tend to be dysfunctional, you will nevertheless need to have something from which the appropriate paperwork can be drawn up to keep the auditors happy. For this, your number one tool consists of public performance metrics. Thus you can't actually have a meaningful service level agreement where the IT/User split doesn't exist, but you can maintain a web site giving information like the current average load time for E-Mail and draw your SLA from that on demand.

    Similarly, you will not normally have a formal disaster recovery plan because the entire technology base is implemented as a disaster avoidance plan, but you will be able to provide one when asked by comparing desired performance (i.e. no effects on users) against the actual results of drills in which machines are unexpectedly shut down or people diverted.

    Basically what happens across the board with the concerns underlying the CoBIT controls is that the smart display approach is tied so closely to user management that IT simply can't get away with the document based control conceptualization taught to today's auditors - but the reality of real time, user visible, metrics has the nice consequence that it's easy to boilerplate the expected documentation when asked.

    Unfortunately getting there from some "here" can be very difficult. The technology change itself is easy - rolling out Sun Rays and servers is a lot simpler than rolling out PCs, but the technology by itself combines with data processing tradition to exert a centralizing force - and that generally produces benefits to IT, but at a cost in flexibility and control to the user community.

    That's counter-productive at the corporate level because the big dollars are in making users more effective, not in reducing IT costs. As a result you need to do the opposite thing: decentralize IT management despite adopting centralized processing - and that means planning this transition from the start; it means educating, coaxing, or removing middle and line management; it means lining up IT behind service delivery, not budget management - and unfortunately it also means finessing 1920s auditor requirements.

    This chapter, therefore, is to be about looking at what happens in each scenario when the appearance of thin clients tilts the technology balance in favor of those with control agendas - and SOX arguments.

     

    • The SOX argument, of course, doesn't apply to the 500 user faculty system - but even that has applications subject to external scrutiny and direct or indirect control by Finance traditionalists. The key to success, therefore, is likely to be the separation of internal (education, research, student use - stuff like moodle) and external (Finance, registration) applications. With that separation in place, the day to day operational systems can become more and more responsive to user needs as PC support and networking problems go away and increasingly more senior, and more technical, staff therefore become available to work directly on solving user problems.

      The external systems, meanwhile, can appear unaffected when seen from that outside perspective - external interfaces continue to operate as before, and external officials continue whatever access they had before. In this situation whether the school's budget officer accesses the centralized registration receipts system via a PC or a Sun Ray makes absolutely no difference - and sleeping dogs can continue sleeping essentially forever or until significant staffing change occurs.

       

    • None of this is true for the 5,000 user manufacturing business. SOX compliance is going to be the number one driver for IT change via the formal budget process and the right strategy, therefore, is to embrace the concept wholeheartedly while delivering on it via non traditional controls.

      In particular CoBIT controls like rigid job separation and formal qualification by role simply don't apply in situations where your sysadmins work with users and become jacks of all trades through cross training within your service delivery teams. Thus the right control here isn't based on inputs (roles and qualifications), but on outputs: service delivery measured in terms of reliability and change in response to user needs.

      The odd thing about this for the large public companies affected by SOX is that the smart display approach is probably the only one in which it is truly possible for a senior executive to testify, with confidence, to the integrity of the systems producing the financial numbers reported to the public - because it's the only business systems architecture in which information about systems integrity gets to the senior people without being filtered through a protective systems management lens first.

       

    • The control issues affecting the professional firm's CIO are easily the most complex of the group. The traditional route: setting up a controls and compliance regime within each national practice is terribly inefficient and almost equally untrustworthy - the lawsuits have been mostly American, but control failures among "big five" accounting firms and their predecessors are legendary with every one of the current survivors having had to essentially write off entire national practices to systems and controls failures.

      The KPMG consulting group's inability to file full financials when spun out as BearingPoint illustrates the problem - every traditional control known to man, but a complete disdain among the senior people in an IT management consulting firm for their own IT problems produced a multi-year disaster.

      The probable right approach, therefore, is to first pioneer the thin client approach and then the devolution to user managed services in specific practice areas or regions where the CIO has people in place who seem likely to succeed - and then gradually spread the resulting infrastructure and systems management ideas to other practices, other regions.

     

    Chapter 7: User management view

    From a user management perspective an IT initiative to implement thin clients in your area is a threat, but it's usually smarter to respond to this positively in hopes of pushing IT in the smart display direction than to fight it. There are two main reasons for this:

     

    • If this is being done for SOX compliance or other reasons, like cash savings in IT (typically supported by Finance), you're likely to lose and whether that happens in the first go-round or the twenty-third doesn't matter: you don't want to be pigeonholed as the loser who opposed change; and,

       

    • if you welcome the change, you get to build working relationships with the people involved and thus gain at least the opportunity to both direct the process and start the control shift heading your way instead of IT's way.

    What you need to remember throughout all this is first that smart display is not about technology, it's about who controls the technology and how that control is expressed - and second that working through the process of explaining benefits to your subordinates is the best way to ensure that you develop a deep understanding of what the benefits can be and how to realize them.

    Start with one certainty: there are real benefits to thin clients, and the stronger your working relationships with IT, the better positioned you'll be to realize those benefits - there are situations, in other words, in which you should initiate thin client discussions with your IT group.

    Once serious discussions have started you will almost always, however, become the subject of a counter campaign by PC people in your own organization who fancy themselves technology experts and act as focal points and spokespeople for dissatisfaction. What they'll do more than anything else is spread disinformation to colleagues who have every reason to believe them and no independent sources of information.

    I don't know what the right answer is or even if one exists, but three clear steps in the right direction are:

     

    • set up, as early as possible in the process and preferably before news of the initiative leaks out, a small Sun Ray server system with one screen on your desk and several others in highly accessible places where people can see and try them.

      In your employee briefing on what's going on provide several anonymous accounts for people to try the Sun Rays and provide frequent, public, updates on the state of the process so no one is surprised as changes happen.

       

    • have someone come in, preferably from outside your own organization and IT, to provide both a demonstration and a discussion using your organization's mission critical software - and then make the discussion materials available to your employees.

       

    • Ensure that IT, with input from your people, creates and maintains a metrics website with actual, current or very close to it, performance information; room for untraced comment; and a request management area.

    In particular you will want to make it clear to users that:

     

    • the myths are myths: a Sun Ray is not a 327X terminal; access is not dead slow; IT will not be monitoring every keystroke; the driver here is efficiency and software access, not cost; they won't be doing Unix command line programming; departmental applications like spreadsheets or Access databases will continue to work; a Sun Ray System is more resilient than an MS client-server system; company policies on things like e-mail monitoring will not change; whatever home or laptop use is currently supported will continue (subject to security and data control issues); and so on.

       

    • there are benefits in cost, reliability, data security; and freedom from viruses and other attacks. In particular:

       

      • optical networking (if applicable to you) protects against both random electrical fields (i.e. in a steel plant) and purposefully generated electrical fields (i.e. for denial of service or data theft purposes);

         

      • spyware, viruses, and simple client software failures, simply go away as issues;

         

      • server based files are almost always fully recoverable even if trashed by the software - meaning that data or text re-entry due to PC hardware or software failure essentially goes away as an issue;

         

      • server room backup power and related emergency systems and procedures are more effective and designed to last longer than those used on the desktop. As a result minor outages or brownouts will no longer produce the risk of work or data loss and users can rely on the existence and execution of appropriate daily and weekly back-up procedures.

         

      • "security" in the SOX sense becomes easier in that server based computing is easy to audit, subject to easily defined and managed controls, and not at all subject to accidental data exposure of the kind associated with laptop loss or theft.

    Less obvious, but equally important, mid management drivers arise from cost and presence issues. Making things easier and cheaper for users is important to user management, but the structure has to make it clear that user management, not IT, is in charge of what the system does, for whom, and when. This subsection, therefore, will look at the usual user management concerns and discuss intelligent responses including:

     

    • consider asking IT to site the servers in your area - at least during the transition - so people don't lose the feeling of comfort that goes with having physical control of the processors;

       

    • building internal support, but ensuring that it is relationship based - meaning that IT/Finance cannot easily change the rules under which the service is delivered;

       

    • in the (unlikely) event that one of the internal PC experts is open minded enough and capable enough to support the role, start building a counter-balancing expertise inside your own group (otherwise, consider biting the bullet and shifting that person's, or persons', responsibilities - potentially right out of the organization - and bringing someone else in);

       

    • if you use an enterprise wide application suite, like an ERP/SCP combination, start getting key users into a position where they can be formally recognized as education team leaders - i.e. find a way to pay them a bit more, send them to user conferences, ask them to set up advanced usage seminars for co-workers, etc. Basically: build toward the replacement of the PC help desk with formal peer support.

     

    Chapter 8: Futures and Alternatives

    The Sun Ray's big strength: server based everything, is also its biggest weakness. For example, home use of Sun Rays works beautifully for most business applications because the bandwidth and server computing loads for those are relatively minor. What happens, however, when the consumer wants to watch a movie is that both bandwidth and processor requirements peak - meaning that consumer bandwidth costs quickly become prohibitive.

    The reason for this is that the Sun Ray really is just a remote console to processes happening on the server. This is great for high security applications and nicely facilitates Sun Ray's ability to preserve and resume sessions independently of what happens to the display, but makes it unsuitable for entertainment and related graphics intensive uses.

    The same problem applies to desktop jobs like 3D automation and modelling: bandwidth and processing burdens increase dramatically because almost everything has to be done on the server.

    The older smart displays, machines like NCD's X-terminals or Plan9's gnots, didn't have this weakness because display processing was done on the display, not the server.

    Sun has two technologies that could enable a new generation of 3D, and consumer video, capable gear: NeWS and MAJC. The Network extensible Window System [NeWS] downloaded PostScript to the terminal which then did the final processing - meaning that displays based on this approach could use a local array processor (i.e. a GPU) to handle home video or 3D display for engineering software used in the office.

    The MAJC CPU was intended specifically for desktop use with extensive GPU style scalability and multi-media instructions.

    I believe NeWS fell victim to Adobe's licensing demands on PostScript, and MAJC fell to a perceived lack of market demand for a Sun desktop.

    However, there are three cost factors justifying a second look:

     

    1. Adobe is feeling increasing revenue pressure from Microsoft and may now be amenable to a market expanding BSD style license on PostScript I and II;

       

    2. the MAJC CPU is essentially a free good for Sun developers because its costs have been fully written off; and,

       

    3. The SMP/CMT technologies now on the market as the T1 "Niagara" have demonstrated Sun's ability to offer dramatically improved user service at lower costs than ever before.

    As a result benefits like the ability to provide 3D engineering and home video user support with relatively low bandwidth and server utilization via a low security Sun Ray variant may now be "low hanging fruit."

    Since Sun hasn't yet announced anything of the kind, this section looks at the market and competition for such a product - including PCs, diskless workstation solutions like the LTSP, and specialized desktop appliances like those from Wyse and HP.

    This leads to an interesting hypothesis about chickens and eggs:

     

    • the thin client transformation is "in the bag" in the sense that doing it doesn't require much technical skill, imposes a minimum of organizational change, positions the organization for more positive change, increases both reliability and auditability, and saves money.

       

    • the missing factor in pushing forward to widespread smart display adoption is leadership - and pressure for change by user management. If, however, the thin client approach were widely adopted, we should see at least some organizations evolve toward the smart display approach.

      In other words, if thin client gets you started, and some people will go the rest of the way by themselves, then the way to widespread smart display adoption is through widespread thin client adoption.

       

    • whether Sun does it or someone else does, development of a consumer oriented, but less secure, smart display for use by telcos and other home internet services suppliers seems inevitable.

       

    • and once that happens, users who get thin client benefits at home are going to drive experimentation at work.