The thin client transition offers considerable benefits in terms of processing risk reduction, auditability, and the imposition of relatively low-level usage controls. Indeed the biggest organizational risk incurred by this transition results from the fact that it speeds up the move to centralized computing and allows the IT group to implement virtually the entire panoply of mainframe community developed, and so data processing centric, CoBIT controls.
That's a big plus if you're a traditional data processing manager, a SOX compliance officer, or a forward-looking Windows manager who sees that this is the way Microsoft's client-server architecture is heading. On the other hand, it's a complete disaster from a user management perspective because it centralizes IT control under Finance, distances user management from IT decision making through multiple layers of committee and budget processes, and reduces the credibility of the threat to "do it ourselves".
The informed CIO's goal, therefore, in directing the thin client transition should be to prepare the way for the smart display world while ensuring that a reversion to data processing ideas doesn't happen. Among other things that means staging the transition - adopting thin clients for particular departmental areas or functions, letting decision making devolve to those users and starting that team down the path to revenue partnership before adjusting IT staffing and moving on to the next group.
A big part of the problem here arises because accounting ideas are extremely stable over time while IT ideas are not. Thus our idea of a system-wide control is fifty or more years ahead of theirs - from their perspective a control is simply a policy guaranteeing the predictability of some business process, and is therefore dependent on the organization chart and management action rather than technology.
The narrow job roles in data processing started out, for example, in the 1920s as organizational design (read: org chart) ideas intended to make it hard for data processing to provide executives with falsified reports - something that happened as recently as the late 1980s, when a leading Canadian business bank was driven into bankruptcy largely because its own people covered up a continuing systems failure.
The primary, and therefore most fundamental, CoBIT controls today are purely paperwork based: the service level agreement, the disaster recovery plan, the waterfall documents on applications, and so on, can often pass all audit checks even though IT is completely out of control and within months of bringing the entire organization to a standstill. Auditors, for example, were unable to see anything of significance going wrong as Canadian federal bureaucrats and their high powered consultants spent an estimated $2 billion - including an estimated $300 million on software development - on a fundamentally trivial gun registry.
Thus the thin client edge in accounting controls appears mainly at the secondary, or derived, control level - things like process logging, reporting hierarchies, and server operator credentials.
In reality, however, that's not where the security advantage is: it's in the fact that the Solaris/SPARC servers you mostly see are largely unaffected by external attacks and, for the paranoid, there's the benefit that it's simply impossible for a user to do something on a Sun Ray which can't be logged, tracked, or tied to an alert of some kind - something that only makes sense in a national security context, because actually doing this in businesses operating at normal security levels tends to be severely dysfunctional.
Thus the most important internal control in reality is that most OS errors and almost all attacks are eliminated from consideration and IT can know, with certainty, that the only people accessing applications or data are the people who are authorized to do so.
Unfortunately, while it's true that the CoBIT controls tend to be dysfunctional, you will nevertheless need something from which the appropriate paperwork can be drawn up to keep the auditors happy. For this, your number one tool is public performance metrics. Thus you can't actually have a meaningful service level agreement where the IT/user split doesn't exist, but you can maintain a web site giving information like the current average load time for email and draw your SLA from that on demand.
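To make the idea concrete, here is a minimal sketch of drawing SLA-style numbers from raw performance samples. The sample data, metric names, and the two-second target are all invented for illustration - the point is only that the public metrics page holds the raw observations and the SLA document is generated from them on demand.

```python
from statistics import mean, quantiles

# Hypothetical sample: email load times in seconds, as a public
# metrics page might record them over some reporting window.
load_times = [1.2, 0.9, 1.5, 1.1, 2.4, 1.0, 1.3, 0.8, 1.7, 1.1]

def sla_summary(samples, target=2.0):
    """Summarize observed performance against a stated target -
    the raw material from which an SLA document can be drawn."""
    p95 = quantiles(samples, n=20)[18]  # 19 cut points; index 18 is the 95th percentile
    within = sum(1 for s in samples if s <= target) / len(samples)
    return {
        "average_s": round(mean(samples), 2),
        "p95_s": round(p95, 2),
        "target_s": target,
        "fraction_within_target": within,
    }

print(sla_summary(load_times))
```

The design choice matters more than the code: because the numbers are continuously published rather than written into a static agreement, the "SLA" is just a report format over live data.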
Similarly, you will not normally have a formal disaster recovery plan because the entire technology base is implemented as a disaster avoidance plan, but you will be able to provide one when asked by comparing desired performance (i.e. no effects on users) against the actual results of drills in which machines are unexpectedly shut down or people diverted.
Basically what happens across the board with the concerns underlying the CoBIT controls is that the smart display approach is tied so closely to user management that IT simply can't get away with the document-based control conceptualization taught to today's auditors - but the reality of real time, user visible metrics has the nice consequence that it's easy to boilerplate the expected documentation when asked.
Unfortunately getting there from some "here" can be very difficult. The technology change itself is easy - rolling out Sun Rays and servers is a lot simpler than rolling out PCs - but the technology by itself combines with data processing tradition to exert a centralizing force, and that generally produces benefits to IT at a cost in flexibility and control to the user community.
That's counter-productive at the corporate level because the big dollars are in making users more effective, not in reducing IT costs. As a result you need to do the opposite thing: decentralize IT management despite adopting centralized processing - and that means planning this transition from the start; it means educating, coaxing, or removing middle and line management; it means lining up IT behind service delivery, not budget management - and unfortunately it also means finessing 1920s auditor requirements.
This chapter, therefore, looks at what happens in each scenario when the appearance of thin clients tilts the technology balance in favor of those with control agendas - and SOX arguments.
- The SOX argument, of course, doesn't apply to the 500 user faculty system - but even that has applications subject to external scrutiny and direct or indirect control by Finance traditionalists. The key to success, therefore, is likely to be the separation of internal applications (education, research, student use - things like Moodle) from external ones (Finance, registration). With that separation in place, the day to day operational systems can become more and more responsive to user needs as PC support and networking problems go away and increasingly senior, and more technical, staff therefore become available to work directly on solving user problems.
The external systems, meanwhile, can appear unaffected when seen from that outside perspective - external interfaces continue to operate as before, and external officials continue whatever access they had before. In this situation whether the school's budget officer accesses the centralized registration receipts system via a PC or a Sun Ray makes absolutely no difference - and sleeping dogs can continue sleeping essentially forever or until significant staffing change occurs.
- None of this is true for the 5,000 user manufacturing business. SOX compliance is going to be the number one driver for IT change via the formal budget process, and the right strategy, therefore, is to embrace the concept wholeheartedly while delivering on it via non-traditional controls.
In particular CoBIT controls like rigid job separation and formal qualification by role simply don't apply in situations where your sysadmins work with users and become jacks of all trades through cross training within your service delivery teams. Thus the right control here isn't based on inputs (roles and qualifications), but on outputs: service delivery measured in terms of reliability and change in response to user needs.
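An output-based control of the kind described above can be sketched as a simple computation over an incident log. The incident records, service period, and thresholds below are all invented for the sketch - the substance is that the control measures what the team delivered (availability, responsiveness), not who was qualified to deliver it.

```python
from datetime import datetime, timedelta

# Illustrative incident log for one service delivery team:
# (start time, outage duration). All values are hypothetical.
incidents = [
    (datetime(2006, 3, 2, 9, 0), timedelta(minutes=12)),
    (datetime(2006, 3, 17, 14, 30), timedelta(minutes=5)),
]

def availability(incidents, period_days=30):
    """Fraction of the reporting period the service was up -
    an output-side control metric, in contrast to input-side
    controls like role definitions and formal qualifications."""
    period = timedelta(days=period_days)
    downtime = sum((d for _, d in incidents), timedelta())
    return 1 - downtime / period

print(f"availability: {availability(incidents):.4%}")
```

Published continuously, a number like this serves the same audit purpose as a role matrix, while actually reflecting delivered service.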
The odd thing about this for the large public companies affected by SOX is that the smart display approach is probably the only one in which it is truly possible for a senior executive to testify, with confidence, to the integrity of the systems producing the financial numbers reported to the public - because it's the only business systems architecture in which information about systems integrity gets to the senior people without being filtered through a protective systems management lens first.
- The control issues affecting the professional firm's CIO are easily the most complex of the group. The traditional route - setting up a controls and compliance regime within each national practice - is terribly inefficient and almost equally untrustworthy. The lawsuits have been mostly American, but control failures among "big five" accounting firms and their predecessors are legendary, with every one of the current survivors having had to essentially write off entire national practices to systems and controls failures.
The KPMG consulting group's inability to file full financials when spun out as BearingPoint illustrates the problem - every traditional control known to man, combined with a complete disdain among the senior people in an IT management consulting firm for their own IT problems, produced a multi-year disaster.
The probable right approach, therefore, is to first pioneer the thin client approach and then the devolution to user managed services in specific practice areas or regions where the CIO has people in place who seem likely to succeed - and then gradually spread the resulting infrastructure and systems management ideas to other practices, other regions.
[Editor's note: Although Paul Murphy is offline until August 7, he filed a series of chapter summaries for a book in progress on Sun Rays and the Smart Display Architecture.]