IT, and the enterprises that host it, have undergone monumental changes over the past decade. Through it all, CIOs have kept a strategic eye on 'next thing' technologies. But with relatively flat IT budgets, they have also had to identify which IT investments are in decline and ready to be retired.
Some of these technology fadeouts are internal approaches to IT and general business operations and management that simply no longer work well. Others involve a particular technology solution that has seen its day. In both cases, the end results will have a dramatic impact on the technology choices that businesses make. What are the likely technology fadeouts?
Historical functions like computer operations and low-level systems support, still a staple in many large data centers, will continue to wane. In the most progressive data centers, even a mission-critical call like a disaster recovery and failover decision will be automated, although most CIOs today are still reluctant to let go of pressing the button themselves.
We've already seen major strides in robotics and in the physical automation of storage cabinets and media to complement what is possible today on the software side -- automating nightly batch processing so on-site human 'babysitting' is no longer necessary. To adopt this new automation, many sites will have to modify and test their batch runtime code and operational procedures.
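To make the idea concrete, here is a minimal sketch of unattended nightly batch processing of the kind described above. Everything in it is illustrative -- the step names, retry count, and alert callback are assumptions, not any particular scheduler's API. The point is the control flow: transient failures are retried automatically, and a human is paged only when retries are exhausted, which is what removes the need for on-site 'babysitting'.

```python
from typing import Callable, Iterable, Tuple

def run_step(step: Callable[[], bool], retries: int = 2) -> bool:
    """Run one batch step, retrying transient failures before giving up."""
    for _ in range(retries + 1):
        if step():
            return True
    return False

def run_batch(steps: Iterable[Tuple[str, Callable[[], bool]]],
              alert: Callable[[str], None]) -> bool:
    """Run the nightly batch unattended; page on-call staff only when a
    step has exhausted its retries."""
    for name, step in steps:
        if not run_step(step):
            alert(f"batch step '{name}' failed after retries")
            return False
    return True
```

Adopting even this simple pattern is why sites must revisit their batch runtime code: each step has to become safely re-runnable before retries can replace a human operator.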
Virtual desktop computing, a plethora of cloud-based offerings, and business users' preference for tablets, mobile phones, and other agile devices will further diminish the presence of top-heavy desktops with large resident storage and processing capabilities.
The progress of hardware and software self-healing mechanisms, and the ability of software and hardware to collaborate in automating systems, system health checks, and fixes, could reduce system monitoring by the IT staff to occasional responses to system-generated alerts and periodic status checks -- with automation performing much of the work of junior network analysts.
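The self-healing loop described above can be sketched in a few lines. This is a toy model, not a real monitoring product's API -- the check names and fix functions are hypothetical. It shows the division of labor the paragraph predicts: the system tries an automated fix first and re-verifies health, and IT staff hear about a problem only when self-healing fails.

```python
from typing import Callable, Dict

def monitor(checks: Dict[str, Callable[[], bool]],
            fixes: Dict[str, Callable[[], bool]],
            alert: Callable[[str], None]) -> None:
    """Run health checks; attempt automated remediation first, and alert
    IT staff only when self-healing fails."""
    for name, check in checks.items():
        if check():
            continue  # component healthy, nothing to do
        fix = fixes.get(name)
        if fix is not None and fix() and check():
            continue  # self-healed and re-verified; no human needed
        alert(f"{name}: automated remediation failed; manual attention needed")
```

In this arrangement, the work of a junior network analyst shrinks to responding to the alerts that fall through.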
The need for application developers to have a solid understanding of the underlying system software, database calls and commits, and the integration between application and system code routines -- and the need to pre-size hardware resources for an application -- will decline as more sophisticated DevOps tools hit the market. With DevOps technology, application developers can focus on meeting the needs of the business faster because they are freed from configuring supporting infrastructure for the applications they create. DevOps tools orchestrate best-practice mixes of applications, supporting system software, supporting database(s), supporting communications software -- and the provisioning of hardware needed to run it all.
It is already possible for developers to use cloud-based DevOps tools that enable them to click on an icon and have the cloud auto-configure an entire system infrastructure for their applications. This eliminates the need for developers to acquire nuts-and-bolts knowledge of the systems and platforms they develop applications for -- as well as many of the traditional application utility and debugging tools that developers have used in the past.
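A rough sketch of what such a tool does behind the icon click: expand a developer's minimal statement of intent into a full stack description. The field names, defaults, and sizing rule below are all invented for illustration -- no real provisioning product works exactly this way -- but the shape is the point: the developer supplies almost nothing, and the tool chooses runtime, database, and hardware sizing.

```python
from typing import Any, Dict

def provision(app_spec: Dict[str, Any]) -> Dict[str, Any]:
    """Expand a minimal application spec into a full stack description,
    the way a cloud DevOps tool might. All defaults and the sizing rule
    are illustrative assumptions."""
    expected_rps = app_spec.get("expected_rps", 100)
    return {
        "app": app_spec["name"],
        "runtime": app_spec.get("runtime", "python3"),
        "database": app_spec.get("database", "postgres"),
        "load_balancer": expected_rps > 500,
        "instances": max(2, expected_rps // 500),  # crude capacity rule
    }
```

A developer who writes only `{"name": "orders", "expected_rps": 2000}` never touches the nuts and bolts -- the tool decides everything else.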
End business users are going directly to technology vendors to address their business problems, often bypassing IT. New cloud-based technology and the growth of mobile devices, social media, and web applications have empowered these users to seek out solutions on their own. End users getting involved in technology decision making isn't necessarily a bad thing for IT. But IT needs to jettison outdated strategies that presumed its ultimate control over technology adoption and budgets. Too many shops haven't done this yet.
The popularity of DevOps and the growing importance of network and wide-area network performance as a part of overall application success are forcing many IT departments to rethink how they are structured and whether it makes more sense to redeploy personnel so that combinations of network, operations, and applications staff work in units that support specific groups of business systems. Bolstering this trend are system tools that use a common repository of data with the option of customizing views of that data to accommodate the needs of network, application, database, and system personnel. These technical disciplines used to work with their own toolsets and often obtained different results when troubleshooting a common system problem. By working with the same set of tools and data, there is less confusion over differing results and conclusions, which reduces the time to resolution for system problems. The net result of this? Older troubleshooting tools targeted to only one IT discipline (networks, for example) are falling out of favor.
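The 'common repository with customized views' idea can be illustrated with a small sketch. The event fields and view definitions here are hypothetical, but they show the mechanism: network and application staff each get a projection tailored to their discipline, yet both projections are computed from the same underlying records, so troubleshooting starts from shared facts rather than toolset-specific ones.

```python
from typing import Dict, List

# One shared repository of troubleshooting events, tagged with fields
# that different disciplines care about (field names are illustrative).
EVENTS: List[Dict] = [
    {"ts": 1, "host": "web01", "latency_ms": 900,
     "service": "checkout", "error": None},
    {"ts": 2, "host": "db01", "latency_ms": 40,
     "service": "checkout", "error": "deadlock"},
]

# Each discipline sees only the fields relevant to its work.
VIEWS = {
    "network":     ("ts", "host", "latency_ms"),
    "application": ("ts", "service", "error"),
}

def view(events: List[Dict], discipline: str) -> List[Dict]:
    """Project the common event store into a discipline-specific view."""
    fields = VIEWS[discipline]
    return [{f: e[f] for f in fields} for e in events]
```

Because both views are projections of identical events, two teams can no longer reach contradictory conclusions from divergent data.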
The end-user and end-customer experience will loom larger than ever in IT service delivery. Some leading-edge IT departments are already developing career paths in QA, service, and support that extend to the vice-president level and stand on equal footing with areas like applications. In other cases, business users are demanding that IT adopt service-level agreements (SLAs) that measure service quality and technical excellence as well as application delivery. Many of these IT departments use what QA and the help desk learn about ease of use as direct input for improving how new applications work. The result will be a change in IT's internal politics and in perceptions of which functions matter most.
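Measuring an SLA of the kind business users are demanding can be as simple as computing attainment against a target. The 500 ms response-time threshold below is an arbitrary example, not a recommended standard -- real SLAs would cover availability, resolution times, and more.

```python
from typing import Sequence

def sla_attainment(response_times_ms: Sequence[float],
                   threshold_ms: float = 500.0) -> float:
    """Fraction of requests meeting a response-time SLA target.
    The threshold is an illustrative assumption."""
    if not response_times_ms:
        return 1.0  # no traffic, no violations
    met = sum(1 for t in response_times_ms if t <= threshold_ms)
    return met / len(response_times_ms)
```

An SLA might then require, say, attainment of at least 0.99 over a reporting period -- turning 'service excellence' from a sentiment into a number IT is held to.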