Key implementation issues
- for a 500 user academic system in a single location:
- Windows support for visiting and change-resistant faculty and students;
- Windows support for Wingot proxy licensed services (like Lexis/Nexis access);
- Home user integration, xDSL opportunities, and off-campus computing;
- Campus wiring (high threat ==> use optical where possible)
- Faculty or group specific zones with assigned primary/secondary admin responsibility for each one;
- DHCP/wireless access issues;
- Unusual functionality comparisons - e.g. on-the-fly compilation in the classroom and session continuation across locations.
- for a 5,000 user integrated manufacturing business with three major locations and a half dozen marketing offices:
- no downtime tolerable ==> redundant data centers (one at each site) and cross-trained teams.
- no downtime tolerable ==> use of N1 to map resources to demands;
- SOX auditors trying to re-impose organizational controls from the 1920s;
- conflicting user agendas can lead to IT working against itself: solution is to use Solaris zones to isolate groups if and where necessary, support significant process variation between groups but maintain database consistency.
- data center distribution, cross training requirements, and open source support as the answer to the odd problem that the better the staff get, the less work they'll have to do.
- balance production users vs wintel support in finance (may require additional staff);
- assume Oracle financials/ERP for costing and sizing purposes; assume 10% custom screens added internally over first three years.
- for a 30,000 user world-wide professional services firm with 500 offices:
- system sizing and layout: the company is big, the offices are small; but national groupings face locally unique professional practice and data control issues. Hence mix of small and large servers; rigid client/firm data differentiation; mix of Sun Ray device types to support portability; mix of data centers to support service continuity without forcing data across jurisdictional boundaries.
- Extensive Wintel integration issues: no wireless, some DHCP, rigorous document and data security - emphasis on Sun Ray physical security for data - stolen devices become a non-issue.
- Extensive remote access issues (from client sites and other offices) including legal complications related to allowable data movement and the imposition of local procedural rules on remotely accessed data. E.g. Access from Canada to data in Germany proceeds under EU data access rules, not Canadian ones.
- Draw summary conclusions applicable to smart display vs client-server in all three groups - look particularly at user productivity and risk-to-data issues (transfer of responsibility for meeting both legislated and actual responsibilities for client data from professional staff to system).
Bottom line for thin client: cheaper, safer, faster.
Chapter 3: Why change (IT perspective)?
This section extends the three organizational scenarios from the previous piece to discuss the risks and benefits associated with the decision to adopt a smart display architecture.
Part one looks at what happens if you use Sun Rays as thin clients - i.e. change your desktops but don't change your management style or software. In effect this becomes a recital of thin client benefits including:
- data stays at the server no matter where the employee is - important for transnationals who need to comply with local information control legislation and for professional firms who need to ensure client data security.
- lower support, power, and hassle (replacement, breakdown, space, cooling, noise) costs on the desktop;
- auditability and process control (mainframe virtues without the cost);
- multiple source service delivery;
- concurrent or easily switchable Unix and Wintel application access;
- cheap large screen support - e.g. dual head Sun Ray;
- ability to access both CICS/COBOL and Unix apps eases mainframe to Unix transition;
- least likely to fail in nasty, noisy, EM field polluted production environments;
Part two looks at what happens if you adopt the full architecture. In particular the focus is on what this does for the user and therefore on what it looks like to senior, middle, and line management.
The difference is this: using the Sun Ray as a thin client generally nets out as making IT more productive while using it in a smart display architecture generally nets out as making users more productive.
User productivity trumps all non-security, non-corporate-risk issues - and it doesn't take much to make a big difference: a 1% user productivity gain trumps a 50% IT cost decrease at most companies.
The productivity benefits to thin clients come from easily measured things like reliability and systems performance - those from smart display use tend to be much larger but also much more nebulous. For example:
- user ownership "now" signals user commitment "going forward".
- system wide "unbreakability" and the absence of social costs for users needing help leads to experimentation with software - reversing the usual ERP/SCM decline in which incoming employees use ever shrinking fractions of total available functionality.
- the most basic thin client benefit is that the absence of the PC OS and related networking complexities removes most of the ambiguities from desktop issues. Hardware fails, rarely but obviously; cables come loose, and some software changes have unexpected consequences - but when these things happen the source of the problem is usually obvious: no reboots, no desktop software, no network tracing, no issues with indirect server dependencies.
And all this has a smart display corollary: it should quickly become obvious to everyone that IT's job is to deliver a working application, but it's the user's job to use that application.
Within the smart display architecture, therefore, you move first line application support out of IT entirely and make that a user community responsibility. That reduces IT cost but also has the far more valuable effect of making more of the application known to more users more of the time - because help comes from peer group domain experts and social barriers to learning are reversed.
- The use of large scale SMP provides very fast response no matter what the user throws at the system. Fast response breeds confidence - or, more accurately, systems which don't always produce results quickly lose user confidence.
- the IT/user relationship leads to throw-away applications as people try to implement new ideas either with new software or by using existing software differently.
For example, the adoption of open source software, along with active IT management support for open source effort, gives the organization easy access to thousands of niche applications, many of which fit, or can be quickly made to fit, organizational niches.
The key counter-argument, the traditional burgeoning support load, doesn't apply because this stuff is throw-away: i.e. if external support fails or change demands accumulate beyond what the package can easily be stretched to do, you look for a replacement, convert the data, and throw the original away.
- The reward structure for the IT organization changes from a focus on delivering an acceptable service at the lowest possible cost to one of partnering with users to explore and deliver revenue generating ideas and services.
The key to the change is that combining unified applications with the distributed management architecture turns IT into an "always on" shared information resource - and that, in turn, removes the social and organizational impediments to productivity that go with Wintel client-server.
Bottom line for smart display: improved user satisfaction, greater application usage effectiveness; improved effectiveness per IT dollar; and, the potential for IT driven revenue growth.
Alternative View: Distributed Computing - The promise unbroken
By Roger Ramjet
Note: I asked frequent contributor Roger Ramjet to write a guest blog presenting and defending his idea that distributed Unix makes more sense than my smart display approach. Here's his contribution - read it carefully and I think you'll see we're not so far apart even if he is right to describe distributed computing as the real client-server model.
In the beginning, there were expensive mainframes with dumb terminals. Then evolution produced not-quite-so-expensive UNIX computers and mini and micro "frames", and in the end inexpensive PCs crawled out upon the land. Companies purchased these computers depending on their needs, and were always looking to lower their costs. What they ended up with were islands of functionality where mainframes, UNIX and PCs each had their own exclusive domains.
Then came the network. It was a way to connect all of the islands together and leverage the strengths of all computers connected to it. It had great promise and companies like Sun recognized this fact in their "The Network IS the computer" mantra.
UNIX took to the network very quickly, while mainframes and PCs lagged WAY behind. UNIX computers began popping up in all shapes and sizes (and prices). The mainframe-like UNIX server "box" was soon joined by smaller desktop/deskside units. These UNIX "workstations" had all of the abilities of their larger server brethren albeit with less capacity. So an idea was born where processing would be done across the network, where UNIX servers "on the platform" would do the heavy lifting, while the UNIX workstations would manage the requests and post-process the results. This is the genesis of the client/server philosophy and the distributed computing model.
In my own "perfect world" this reality would have been joined by both the mainframe and PC camps, where you would be able to mix and match which computers to use for whatever purpose (see DCE). But that never happened. Fishkill had exclusive control over the mainframes, and their networking offerings really sucked. Redmond had exclusive control over the PC world and they took a long time to boost capability. Meanwhile, shared 10BaseT networks, UNIX forking/infighting and $75,000/seat 40MHz UNIX workstations ($30,000 for a 21", 40MHz, 16MB, SPARCII+ with 2 x 105MB disks - Murph) diminished the allure of client/server.
Today we have 10gigE networks and a "free" Linux/Solaris (common) OS that runs on cheap PCs (and huge servers) yet client/server is all but dead. Just when the philosophy makes the most sense, people are sick and tired of waiting. Mounting file systems over the network via NFS certainly sucked 10 years ago with shared 10BaseT, so we went and created a whole new storage network (SAN). Never mind that NFSv4 over gigE today is reliable and secure and FREE (built-in) as opposed to building a whole new infrastructure of fiber, switches, HBAs and high-priced storage (not to mention additional cost software to handle things like dual-pathing and fancy filesystems).
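The NFSv4 point is easy to make concrete. A minimal sketch, assuming a hypothetical server named bigiron exporting /export/data (both names invented for illustration): on a current Linux client the "free, built-in" storage network amounts to one mount command or one fstab line, with no HBAs, fiber switches, or extra-cost multipathing software.

```shell
# Hedged sketch - 'bigiron' and both paths are assumed names, not
# taken from the text. Mount an NFSv4 export over ordinary gigE:
mount -t nfs4 bigiron:/export/data /mnt/data

# Or make it permanent with an /etc/fstab entry:
# bigiron:/export/data  /mnt/data  nfs4  _netdev,hard  0  0
```

The design point is that the security and reliability Roger mentions come from the protocol version and the existing IP network, not from new infrastructure.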
The way it should be
Murphy is a strong proponent of utilizing thin client/server architecture to create the next IT paradigm. I have reservations about doing this as it can lead to:
- Vendor lock-in - SunRays require Sun software and Solaris to work. Everyone else has their own thin clients and restrictions.
- Difficult capacity planning - the old N+1 problem: how many SunRays per server (20), and what do you do when you need one more SunRay connected (#21)?
- No off-line capabilities - When the network goes down, so does your SunRay.
- Monoculture - It's not apparent to me how you would use this architecture in a heterogeneous environment with PCs and mainframes. It looks like an island to me.
- Proprietary configurations - Modifying server failover software requires expertise.
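The capacity-planning objection can be sketched in a few lines of shell. The 20-sessions-per-server figure comes from the list above; the 105-user population is an invented example. With strict N+1 provisioning, user #21 forces a whole new server:

```shell
# Illustrative only: per_server comes from the objection above;
# the user count is an invented example.
users=105
per_server=20
# ceiling division for the base server count, plus one spare (N+1)
servers=$(( (users + per_server - 1) / per_server + 1 ))
echo "servers needed for ${users} users: ${servers}"
```

At 100 users this yields 6 servers; the 105th user pushes it to 7 - the step-function cost Roger is pointing at.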
My own philosophy on the next IT paradigm starts with strong rules (standards, processes, procedures) with few exceptions. I have seen too many instances where people start getting accustomed to going around the system to get things done (read "cowboy" sysadmins/app developers) - which leads to a host of problems. With the proper standard configuration and Namespace standards, each *NIX machine in a distributed environment can be set up in the same way. AIX, HP-UX, Solaris and Linux (Unicos, IRIX, Tru64, OSF1, Ultrix, ...) machines can co-exist - and client/server can become a reality.
Once this paradigm is implemented, if you need a high-powered UNIX workstation to do CAD/CAM/CAE at your desk - get one. If you only do some e-mail and document writing, use a cheap PC running Linux. If you need to create a 20 billion record database and need high performance - buy the big iron. But you log in the same way, have the same locations for files, leverage tools that work on every box, have a consistent backup policy, etc. etc. etc (the list of advantages is long).
By utilizing the system automounter, you can have load-balanced, highly-available (served) applications and single user account directories (served) that follow you to where you log in - on any machine on the network. By caching your apps locally, you can do your work even when the network is down (although you would need a local user account - a design decision). In order to master this whole thing, you would need to do what I did - read the automount man page. This was the promise of client/server and distributed environments - and it can be realized. All it takes is people that are willing to follow the program. No new fancy technology required.
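The automounter setup described above comes down to a few map entries. This is standard autofs map syntax; the server names and paths are hypothetical:

```shell
# /etc/auto_master (fragment) - point the automounter at two indirect maps
/home   auto_home
/apps   auto_apps

# /etc/auto_home - home directories follow the user to any machine;
# '&' substitutes the lookup key (the username) into the server path,
# so a login anywhere mounts homesrv:/export/home/<user> at /home/<user>
*       homesrv:/export/home/&

# /etc/auto_apps - replicated servers: the automounter mounts from
# whichever responds, giving simple load balancing and failover
tools   appsrv1,appsrv2:/export/tools
```

The replicated-server line is what provides the "load-balanced, highly-available" application serving Roger describes - no new technology, just map entries.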
Chapter 4: The Sun Ray Server
The longest single section in the book, by page count, will be the technical reference chapter dealing with the specifics of connecting, configuring, and managing Sun Rays. Drawn mainly from Sun's own documentation and with the usual absurd levels of redundancy that go with covering this material separately for Linux on x86 and Solaris on SPARC (with asides for Linux on SPARC and Solaris on x86), this will get pretty boring.
It's also essential to this kind of guide because the book's intended to sell to sysadmins who then get their bosses to read other bits of it....
My problem, of course, is that just including the obvious isn't going to be enough. So if I give you the list below as stuff I've thought about including, can you tell me what I've missed or should emphasize?
- Stand alone server set up and admin issues including capacity guidelines;
- desktop display power and networking requirements, sparing recommendations, and related issues.
- N1 and alternative auto-failover server set up and admin issues including tools for log review;
- Sun Ray Server (software) integration with identity management and the Sun Enterprise Suite;
- RDP, Citrix, Secure Global Desktop, bandwidth, WiFi use, and related Wintel integration issues;
- Use of Solaris zones/containers to give ad hoc and organizational groups their own (virtual) computers;
- Smart card uses, programming, and global distribution;
- Issues affecting home ADSL/Cable user support.
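The zones/containers item above can be sketched briefly. Assuming Solaris 10, giving a group its own virtual computer takes only a few commands; the zone name ("design") and paths here are invented for illustration:

```shell
# Hedged sketch - zone name and zonepath are assumptions.
# Define the zone and where its root filesystem lives:
zonecfg -z design 'create; set zonepath=/zones/design; set autoboot=true'

# Install and boot it; the group now has what looks like its own machine:
zoneadm -z design install
zoneadm -z design boot

# Attach to the new zone's console to finish first-boot configuration:
zlogin -C design
</imports stripped>
```

In practice you would also add a network interface and resource controls, but even this minimal sequence shows why per-group zones are cheap enough to hand out freely.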
In addition this chapter is the place to put a "Tips" section that provides answers to a bunch of FAQs - like how to do follow-me printing, sidestep DHCP, or manage multiple color spaces on the same device - as well as links to places people can get help - user groups, developer blogs, bigadmin, and so on.
It's not clear as I write this whether Sun's recently announced re-entry into the blade/rack business offers an easy way to order up a 'Sun Pod' - a rack with UPS, network gear, and AMD and T1 servers all nicely integrated. One way or the other, however, this chapter will have a subsection on setting up that type of pod.
Chapter 5: Managing the Sun Ray business architecture
There's an old saying to the effect that a fish rots from the head, but the reverse is also true: as a CIO you can't drive smart display adoption without first adopting key ideas yourself. This chapter is about that change.
That makes it the hardest chapter to write because I'll be dealing with intangibles and challenging the beliefs people have built their careers on.
- There's an introduction:
The big thing here, the killer issue, the one big step most people never make, is the changeover from management to leadership. Think of management in terms of getting more hands on a job and then co-ordinating their activity: want a dozen workmen to lift a beam into place? that requires management: someone to assemble the workmen, tell them what to do, and then yell "heave" at just the right moment.
Leadership is very different from that, it's about getting more brains focused on a job, and not at all about co-ordinating activity. If you're managing a group of workers they'll never get that beam in place without someone yelling "heave" - but a team leader doesn't have to be there to know that if a beam is needed, his people will get it in place.
In fact a good leader is like a good sysadmin: someone everybody knows, but who seems to do nothing - except that things work after he's sat around doing nothing for a while.
Someone whose management style reflects a Wintel or data processing heritage can successfully implement a thin client solution for some or even a majority of desktops - but you cannot get to the smart display environment without changing the way IT works and therefore the way you work.
- There are the thin client organization benefits:
"Going thin client" has significant organizational consequences for the CIO: you'll spend less time worrying about licensing conformance, auditability, or what users are sneaking into the office. More significantly, at least from a budget perspective, the people who repair or replace PCs can go away along with a good chunk of your help desk staff - and if you're big enough to have management infrastructures in both groups you can simplify some of those right out the door.
The whole PC management, DHCP, and authorized load-at-boot business goes away.
Identity management can be standardized and SOX auditor concerns can be more easily and naturally addressed - i.e. without pretending you know or can control what users do on, or with, their laptops.
Downstream you can also look, again depending on how big your IT operation is, at simplifying networking - because PC to PC local communication needs are gone and the mantra that faster is better translates in networking terms to simpler is better.
Basically these are all standard thin client benefits - you reduce your risk, your system complexity, and therefore your front line staff and can ultimately thin out the lower level management ranks accordingly. You do not, however, change the way you do business: rollouts go away, but software change processes don't actually change except at the trivial end of physical desktop testing - and neither does your reporting structure or your relationship with the corporate IT steering committee.
Think of a thin client transition as a way of meeting your fundamental services provision mandate at lower total cost and you've got it: the desktop changes, a few people go away, but services don't change and neither do you.
- and then there's the change
Things change dramatically, however, when user response to the IT controls that go with thin client and traditional thinking pushes you to adopt the smart display architecture - i.e. to counterbalance processing centralization with control decentralization.
Right now, if you're a typical CIO your primary stance is that of the resource custodian or manager: discharging a corporate responsibility to provide a service at a contained cost. With a full smart display system that changes as you become a partner on the revenue generation side and cede directional control on the services side to user management. Basically this turns the traditional IT department inside out: instead of facing inwards as guardians of data and processing resources, you face outwards and push cycles and support at anything with a glimmer of user management support.
Thus the nature of the CIO role changes: from custodian to evangelist; from management to leadership. When that happens your organization changes too: most of the people you have to manage now simply go away - no help desk, no PC networking, no PC repair or upgrade hassles; your middle management structure changes from one based on blocks of people filling narrow roles like "help desk manager" to one based on individuals working directly with other individuals. That changes your job too: you exchange "reports" for responsibilities as the people who substitute for you in direct employee supervision go away and your sysadmins morph into team leaders - meaning that your role becomes that of the facilitator smoothing their interactions with each other, with people selling you stuff, and with user management.
- Basically it's a whole different ball game and everything you already know is probably wrong:
- with respect to server sizing and positioning (low utilization is good!)
- with respect to recruitment, cross training, and team building (you want to eliminate job boundaries)
- in terms of disaster prevention and organizational controls (e.g. service level agreements and disaster recovery plans make no sense, but you can generate substitutes on the fly when these are needed to satisfy external auditors);
- and in terms of appropriate software and run time models (application code becomes a throwaway good but everything comes down to the database);
This is the "break it" chapter for most people - because getting users to control IT is counter to everything "we" know for sure - especially if we were raised in the data processing tradition or learned the IT management trade through struggles with Windows. It's the IT role that changes first, the methods change as a consequence of that - thus ideas like helping users fill niche functions with niche software are absurd in the traditional organization, but an everyday thing in the smart display world.
One major issue that probably should get introduced in this chapter involves the role of the external auditor as a drag on IT change. Auditors, particularly those with IT certification based on data processing standards, tend not to know much about IT delivery or technology and so try to impose processes that are decades out of date. Thin clients make it easier to meet these kinds of process expectations, but doing so sets up long term conflicts between IT and the user community and is incompatible with smart display objectives.
Chapter 6: Evolution, risks, controls, and strategies
The thin client transition offers considerable benefits in terms of processing risk reduction, auditability, and the imposition of relatively low level usage controls. Indeed the biggest organizational risk incurred by this transition results from the fact that it speeds up the move to centralized computing and allows the IT group to implement virtually the entire panoply of mainframe community developed, and so data processing centric, CoBIT controls.
That's a big plus if you're a traditional data processing manager, a SOX compliance officer, or a forward looking Windows manager who sees that this is the way Microsoft's client-server is heading. On the other hand, it's a complete disaster from a user management perspective because it centralizes IT control subject to Finance, distances user management from IT decision making through multiple layers of committee and budget processes, and reduces the credibility of the threat to "do it ourselves".
The informed CIO's goal, therefore, in directing the thin client transition should be to prepare the way for the smart display world while ensuring that a reversion to data processing ideas doesn't happen. Among other things that means staging the transition - adopting thin clients for particular departmental areas or functions, letting decision making devolve to those users and starting that team down the path to revenue partnership before adjusting IT staffing and moving on to the next group.
A big part of the problem here arises because accounting ideas are extremely stable over time while IT ideas are not. Thus our idea of a system wide control is fifty or more years ahead of theirs - from their perspective a control is simply a policy guaranteeing the predictability of some business process, and therefore dependent on the organization chart and management action rather than technology.
The narrow job roles in data processing started out, for example, in the 1920s as organizational design (read: org chart) ideas on making it hard for data processing to provide executives with falsified reports - something that happened as recently as the late 1980s when a leading Canadian business bank was driven into bankruptcy largely because its own people covered up a continuing systems failure.
The primary and therefore most fundamental CoBIT controls today are purely paperwork based: the service level agreement, the disaster recovery plan, the waterfall documents on applications, and so on, can often pass all audit checks even though IT is completely out of control and within months of bringing the entire organization to a standstill - auditors, for example, were unable to see anything of significance going wrong as Canadian federal bureaucrats and their high-powered consultants spent an estimated two billion dollars - including an estimated $300 million on software development - on a fundamentally trivial gun registry.
Thus the thin client edge in accounting controls appears mainly at the secondary, or derived, control level - things like process logging, reporting hierarchies, and server operator credentials.
In reality, however, that's not where the security advantage is: it's in the fact that the Solaris/SPARC servers you mostly see are largely unaffected by external attacks and, for the paranoid, there's the benefit that it's simply impossible for a user to do something on a Sun Ray which can't be logged, tracked, or tied to an alert of some kind - something that only makes sense in a national security context, because actually doing this in businesses operating at normal security levels tends to be severely dysfunctional.
Thus the most important internal control in reality is that most OS errors and almost all attacks are eliminated from consideration and IT can know, with certainty, that the only people accessing applications or data are the people who are authorized to do so.
Unfortunately, while it's true that the CoBIT controls tend to be dysfunctional, you will nevertheless need something from which the appropriate paperwork can be drawn up to keep the auditors happy. For this, your number one tool is public performance metrics. Thus you can't actually have a meaningful service level agreement where the IT/user split doesn't exist, but you can maintain a web site giving information like the current average load time for e-mail and draw your SLA from that on demand.
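The metrics feed behind such a page can be trivially simple. A hypothetical sketch: if each mail-client launch appends its load time in seconds to a log (the log path and format are invented for illustration), the published figure is one awk one-liner:

```shell
# Invented example data - in practice this log would be appended to
# by the mail client's launch wrapper on the server.
cat > /tmp/email_load.log <<'EOF'
2.1
1.9
2.0
EOF

# Average load time, as published on the public metrics page:
avg=$(awk '{ s += $1; n++ } END { printf "%.1f", s / n }' /tmp/email_load.log)
echo "average e-mail load time: ${avg}s"
```

The point is not the arithmetic but the visibility: a number computed from real, continuously collected data can be turned into SLA paperwork on demand, while the reverse is not true.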
Similarly, you will not normally have a formal disaster recovery plan because the entire technology base is implemented as a disaster avoidance plan, but you will be able to provide one when asked by comparing desired performance (i.e. no effects on users) against the actual results of drills in which machines are unexpectedly shut down or people diverted.
Basically what happens across the board with the concerns underlying the CoBIT controls is that the smart display approach is tied so closely to user management that IT simply can't get away with the document-based control conceptualization taught to today's auditors - but the reality of real-time, user-visible metrics has the nice consequence that it's easy to boilerplate the expected documentation when asked.
Unfortunately getting there from some "here" can be very difficult. The technology change itself is easy - rolling out Sun Rays and servers is a lot simpler than rolling out PCs, but the technology by itself combines with data processing tradition to exert a centralizing force - and that generally produces benefits to IT, but at a cost in flexibility and control to the user community.
That's counter-productive at the corporate level because the big dollars are in making users more effective, not in reducing IT costs. As a result you need to do the opposite thing: decentralize IT management despite adopting centralized processing - and that means planning this transition from the start; it means educating, coaxing, or removing middle and line management; it means lining up IT behind service delivery, not budget management - and unfortunately it also means finessing 1920s auditor requirements.
This chapter, therefore, is to be about looking at what happens in each scenario when the appearance of thin clients tilts the technology balance in favor of those with control agendas - and SOX arguments.
- The SOX argument, of course, doesn't apply to the 500 user faculty system - but even that has applications subject to external scrutiny and direct or indirect control by Finance traditionalists. The key to success, therefore, is likely to be the separation of internal (education, research, student use - stuff like Moodle) and external (Finance, registration) applications. With that separation in place, the day to day operational systems can become more and more responsive to user needs as PC support and networking problems go away and increasingly senior, and more technical, staff therefore become available to work directly on solving user problems.
The external systems, meanwhile, can appear unaffected when seen from that outside perspective - external interfaces continue to operate as before, and external officials continue whatever access they had before. In this situation whether the school's budget officer accesses the centralized registration receipts system via a PC or a Sun Ray makes absolutely no difference - and sleeping dogs can continue sleeping essentially forever or until significant staffing change occurs.
- None of this is true for the 5,000 user manufacturing business. SOX compliance is going to be the number one driver for IT change via the formal budget process and the right strategy, therefore, is to embrace the concept wholeheartedly while delivering on it via non traditional controls.
In particular CoBIT controls like rigid job separation and formal qualification by role simply don't apply in situations where your sysadmins work with users and become jacks of all trades through cross training within your service delivery teams. Thus the right control here isn't based on inputs (roles and qualifications), but on outputs: service delivery measured in terms of reliability and change in response to user needs.
The odd thing about this for the large public companies affected by SOX is that the smart display approach is probably the only one in which it is truly possible for a senior executive to testify, with confidence, to the integrity of the systems producing the financial numbers reported to the public - because it's the only business systems architecture in which information about systems integrity gets to the senior people without being filtered through a protective systems management lens first.
- The control issues affecting the professional firm's CIO are easily the most complex of the group. The traditional route - setting up a controls and compliance regime within each national practice - is terribly inefficient and almost equally untrustworthy: the lawsuits have been mostly American, but control failures among "big five" accounting firms and their predecessors are legendary, with every one of the current survivors having had to essentially write off entire national practices to systems and controls failures.
The KPMG consulting group's inability to file full financials when spun out as BearingPoint illustrates the problem - every traditional control known to man, but a complete disdain among the senior people in an IT management consulting firm for their own IT problems, produced a multi-year disaster.
The probable right approach, therefore, is to first pioneer the thin client approach and then the devolution to user managed services in specific practice areas or regions where the CIO has people in place who seem likely to succeed - and then gradually spread the resulting infrastructure and systems management ideas to other practices, other regions.
Chapter 7: User management view
From a user management perspective an IT initiative to implement thin clients in your area is a threat, but it's usually smarter to respond to this positively in hopes of pushing IT in the smart display direction than to fight it. There are two main reasons for this:
- If this is being done for SOX compliance or other reasons, like cash savings in IT (typically supported by Finance), you're likely to lose - and whether that happens in the first go-round or the twenty-third doesn't matter: you don't want to be pigeonholed as the loser who opposed change; and,
- if you welcome the change, you get to build working relationships with the people involved and thus gain at least the opportunity to both direct the process and start the control shift heading your way instead of IT's way.
What you need to remember throughout all this is first that smart display is not about technology, it's about who controls the technology and how that control is expressed - and second that working through the process of explaining benefits to your subordinates is the best way to ensure that you develop a deep understanding of what the benefits can be and how to realize them.
Start with one certainty: there are real benefits to thin clients, and the stronger your working relationships with IT, the better positioned you'll be to realize those benefit opportunities - there are situations, in other words, in which you should initiate thin client discussions with your IT group.
Once serious discussions have started you will almost always, however, become the subject of a counter campaign by PC people in your own organization who fancy themselves technology experts and act as focal points and spokespeople for dissatisfaction. What they'll do more than anything else is spread disinformation to colleagues who have every reason to believe them and no independent sources of information.
I don't know what the right answer is or even if one exists, but three clear steps in the right direction are:
- set up, as early as possible in the process and preferably before news of the initiative leaks out, a small Sun Ray server system with one screen on your desk and several others in highly accessible places where people can see and try them.
In your employee briefing on what's going on, set up several anonymous accounts for people to try the Sun Rays, and post frequent, public updates on the state of the process so no one is surprised as changes happen.
- have someone come in, preferably from outside your own organization and IT, to provide both a demonstration and a discussion using your organization's mission critical software - and then make the discussion materials available to your employees.
- Ensure that IT, with input from your people, creates and maintains a metrics website with actual performance information (current, or very close to it); room for anonymous comment; and a request management area.
In particular you will want to make it clear to users that:
- the myths are myths: a Sun Ray is not a 327X terminal; access is not dead slow; IT will not be monitoring every keystroke; the driver here is efficiency and software access, not cost; they won't be doing Unix command line programming; departmental applications like spreadsheets or Access databases will continue to work; a Sun Ray System is more resilient than an MS client-server system; company policies on things like e-mail monitoring will not change; whatever home or laptop use is currently supported will continue (subject to security and data control issues); and so on.
- there are benefits in cost, reliability, data security, and freedom from viruses and other attacks. In particular:
- optical networking (if applicable to you) protects against both random electrical fields (i.e. in a steel plant) and purposefully generated electrical fields (i.e. for denial of service or data theft purposes);
- spyware, viruses, and simple client software failures, simply go away as issues;
- server based files are almost always fully recoverable even if trashed by the software - meaning that data or text re-entry due to PC hardware or software failure essentially goes away as an issue;
- server room backup power and related emergency systems and procedures are more effective and designed to last longer than those used on the desktop. As a result minor outages or brownouts will no longer produce the risk of work or data loss and users can rely on the existence and execution of appropriate daily and weekly back-up procedures.
- "security" in the SOX sense becomes easier in that server based computing is easy to audit, subject to easily defined and managed controls, and not at all subject to accidental data exposure of the kind associated with laptop loss or theft.
Less obvious, but equally important, mid-management drivers arise from cost and presence issues. Making things easier and cheaper for users is important to user management, but the structure has to make it clear that user management, not IT, is in charge of what the system does, for whom, and when. This subsection, therefore, will look at the usual user management concerns and discuss intelligent responses including:
- consider asking IT to site the servers in your area - at least during the transition - so people don't lose the feeling of comfort that goes with having physical control of the processors;
- building internal support, but ensuring that it is relationship based - meaning that IT/Finance cannot easily change the rules under which the service is delivered;
- in the (unlikely) event that one of the internal PC experts is open minded enough, and capable enough, to support the role, start building a counter-balancing expertise inside your own group (otherwise, consider biting the bullet: shift the person, or persons, to other responsibilities - potentially right out of the organization - and bring someone else in);
- if you use an enterprise wide application suite, like an ERP/SCP combination, start getting key users into a position where they can be formally recognized as education team leaders - i.e. find a way to pay them a bit more, send them to user conferences, ask them to set up advanced usage seminars for co-workers, etc. Basically: build toward the replacement of the PC help desk with formal peer support.
Chapter 8: Futures and Alternatives
The Sun Ray's big strength - server based everything - is also its biggest weakness. For example, home use of Sun Rays works beautifully for most business applications because the bandwidth and server computing loads for those are relatively minor. What happens, however, when the consumer wants to watch a movie is that both bandwidth and processor requirements peak - meaning that consumer bandwidth costs quickly become prohibitive.
The reason for this is that the Sun Ray really is just a remote console to processes happening on the server. This is great for high security applications and nicely facilitates Sun Ray's ability to preserve and resume sessions independently of what happens to the display, but makes it unsuitable for entertainment and related graphics intensive uses.
The same problem applies to desktop jobs like 3D automation and modelling: bandwidth and processing burdens increase dramatically because almost everything has to be done on the server.
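The bandwidth asymmetry is easy to quantify with rough numbers. Everything below is an illustrative assumption - screen size, update rates, and compression ratios - rather than measured Sun Ray protocol figures:

```python
# Rough bandwidth comparison: remote display of office work vs.
# full-motion video shipped as rendered frames from the server.
# All rates and ratios are illustrative assumptions.

def frame_stream_mbps(width: int, height: int, bytes_per_pixel: int,
                      fps: float, compression_ratio: float) -> float:
    """Bandwidth in Mbit/s needed to ship rendered frames to a display."""
    raw_bits_per_second = width * height * bytes_per_pixel * 8 * fps
    return raw_bits_per_second / compression_ratio / 1e6

# Office work: mostly still screens, small damaged regions, so high
# effective compression - assume one full-screen update/s at 50:1.
office = frame_stream_mbps(1280, 1024, 3, 1.0, 50.0)

# Full-motion video: 24 frames/s, and a generic display protocol
# compresses far worse than a purpose-built codec - assume 10:1.
movie = frame_stream_mbps(1280, 1024, 3, 24.0, 10.0)

print(f"office work: ~{office:.2f} Mbit/s")
print(f"video:       ~{movie:.1f} Mbit/s")
```

Under these assumptions video demands on the order of a hundred times the bandwidth of ordinary office use - which is why pushing rendered frames across a consumer xDSL link fails exactly where doing display processing locally, as the older smart displays did, would succeed.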
The older smart displays, machines like NCD's X-terminals or Plan9's gnots, didn't have this weakness because display processing was done on the display, not the server.
Sun has two technologies that could enable a new generation of 3D, and consumer video, capable gear: NeWS and MAJC. The Network extensible Window System [NeWS] downloaded PostScript to the terminal, which then did the final processing - meaning that displays based on this approach could use a local array processor (i.e. a GPU) to handle home video or 3D display for engineering software used in the office.
The MAJC CPU was intended specifically for desktop use with extensive GPU style scalability and multi-media instructions.
I believe NeWS fell victim to Adobe's licensing demands on PostScript, and MAJC fell to a perceived lack of market demand for a Sun desktop.
However, there are three cost factors justifying a second look:
- Adobe is feeling increasing revenue pressure from Microsoft and may now be amenable to a market expanding BSD style license on PostScript I and II;
- the MAJC CPU is essentially a free good for Sun developers because its costs have been fully written off; and,
- The SMP/CMT technologies now on the market as the T1 "Niagara" have demonstrated Sun's ability to offer dramatically improved user service at lower costs than ever before.
As a result benefits like the ability to provide 3D engineering and home video user support with relatively low bandwidth and server utilization via a low security Sun Ray variant may now be "low hanging fruit."
Since Sun hasn't yet announced anything of the kind, this section looks at the market and competition for such a product - including PCs, diskless workstation solutions like the LTSP, and specialized desktop appliances like those from Wyse and HP.
This leads to an interesting hypothesis about chickens and eggs:
- the thin client transformation is "in the bag" in the sense that doing it doesn't require much technical skill, imposes a minimum of organizational change, positions the organization for more positive change, increases both reliability and auditability, and saves money.
- the missing factor in pushing forward to widespread smart display adoption is leadership - and pressure for change by user management. If, however, the thin client approach were widely adopted, we should see at least some organizations evolve toward the smart display approach.
In other words, if thin client gets you started, and some people will go the rest of the way by themselves, then the way to widespread smart display adoption is through widespread thin client adoption.
- whether Sun does it or someone else does, development of a consumer oriented, but less secure, smart display for use by telcos and other home internet services suppliers seems inevitable.
- and once that happens, users who get thin client benefits at home, are going to drive experimentation at work.