Microsoft's new genetic code

Written by Dan Farber

When you think about enterprise systems management, companies like Computer Associates, Hewlett-Packard and IBM come to mind. They offer comprehensive suites for monitoring and managing the behavior of the broad array of IT components that keep an enterprise afloat. You don't necessarily consider Microsoft in the same vein, but with its characteristic resolve, the company is attempting to make comprehensive systems management part of the genetic material of Windows.

Bob Muglia, senior vice president, Windows Server Division, states the problem as follows: "The vast majority of software isn't built from the ground up with manageability and security in mind. As a result, we are in a quagmire, with software vulnerabilities and complexities that make management of networks and IT assets a costly nightmare. It's true for Windows and other platforms as well, and we have to change the way people develop software and make it manageable from the beginning."

Solving this problem is part of what I called "instrumenting the enterprise" in a previous column. Rather than integrating management into enterprise solutions as a costly, complex afterthought, it should be baked in from the beginning. In an ideal world, a distributed collection of IT resources harnessed for a specific task should be able to adapt and thrive even if conditions in the environment change. This capability requires high-level linking of the application design and the deployment environment so that systems at any level--operating systems, applications, network, servers--are instrumented to automatically maintain an optimal state. It should be as simple and elegant as a plant seeking sunlight, in which photosynthesis is part of the genetic code.

Given the complexity of IT environments, the notion of an IT equivalent to photosynthesis is a nice fantasy. There is no magic potion for automating the management of enterprise IT environments. But Microsoft's lengthy quest to revamp its software architecture and to bake in operational controls has significant implications for enterprises, especially those that are Windows-centric.

At the core of what Microsoft calls its Dynamic Systems Initiative is the System Definition Model (SDM). Using XML as an underlying data format, the SDM is designed to capture the basic structure and definitions of hardware and software, including information such as configuration schemas, "health" models and operational policies.
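Microsoft hasn't published the SDM schema itself, but to make the idea concrete, here is a rough, hypothetical sketch in Python (using the standard xml.etree.ElementTree module) of the kind of XML definition the SDM is meant to capture. The element and attribute names are invented for illustration; they are not taken from the actual specification.

```python
# Hypothetical sketch of an SDM-style component definition.
# The element and attribute names are invented for illustration;
# they are not the actual SDM schema.
import xml.etree.ElementTree as ET

system = ET.Element("systemDefinition", name="OrderService")

# Configuration schema: the settings the component exposes.
config = ET.SubElement(system, "configurationSchema")
ET.SubElement(config, "setting", name="maxConnections", type="int", default="200")

# Health model: what "healthy" means for this component.
health = ET.SubElement(system, "healthModel")
ET.SubElement(health, "indicator", name="cpuUtilization", healthyBelow="0.80")

# Operational policy: what to do when the indicator degrades.
policy = ET.SubElement(system, "operationalPolicy")
ET.SubElement(policy, "action", when="cpuUtilization>=0.80", do="addServerInstance")

print(ET.tostring(system, encoding="unicode"))
```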

"The SDM is going to be way for us to express the desired state of individual components and the enterprise as a whole," Kirill Tatarinov, Microsoft corporate vice president of enterprise management division, said. "Through the SDM, developers have access to all of the components, and they are linked. This dynamic linkage is Microsoft's differentiation. Applications are dynamically aware and reconfigurable, but most others are doing it [reconfiguration] manually. The dynamic linkage between high level definition of an enterprise and topology, and linkage to lower level components is a key differentiation of the DSI."

The structured SDM approach allows the primary architects of an application or service to more accurately define the requirements and desired outcomes in schemas that are more easily parsed by developers. In addition, according to Tatarinov, the various layers (applications, network topologies, operating systems and servers) encoded with SDM information must be orchestrated. Conflicts among policies or constraints applied to the different layers could be resolved during the development process, rather than during testing or deployment phases.

Web services and various languages could also be used to describe and capture operational and administrative tasks and policies in a data center, imprinting the logic of otherwise manual tasks in software. The SDM can provide the information about the IT environment necessary for running those operational policies. Rather than bolting on intelligence, or operational awareness, it is integrated into the hardware and software resources during the development phase. If a parameter changes, such as network speed or server utilization, the components impacted by the change automatically know what procedure to follow based on the baked-in definitions and policies.
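To make that idea concrete, here is a minimal sketch, in Python, of what a baked-in policy might look like: a component carries its own table of procedures keyed to the parameters it cares about, so a change in, say, server utilization triggers a predefined response rather than a call to an administrator. The names and thresholds are invented for illustration and are not part of any Microsoft specification.

```python
# Illustrative sketch only: a component whose reaction to environment
# changes is defined up front rather than bolted on by external tooling.
# Names and thresholds are invented for illustration.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class ManagedComponent:
    name: str
    # Baked-in policies: parameter -> procedure to follow when it changes.
    policies: Dict[str, Callable[[float], None]] = field(default_factory=dict)

    def on_change(self, parameter: str, value: float) -> None:
        """Apply the predefined procedure for a changed parameter, if any."""
        handler = self.policies.get(parameter)
        if handler:
            handler(value)

def throttle_if_saturated(utilization: float) -> None:
    if utilization > 0.9:
        print("Server utilization high: shedding low-priority work")

web_tier = ManagedComponent(
    name="web-tier",
    policies={"server_utilization": throttle_if_saturated},
)
web_tier.on_change("server_utilization", 0.95)
```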

Basically, this concept is Microsoft's new plumbing for enabling utility or on-demand computing.

It's a compelling concept--the DSI and SDM sound like photosynthesis for IT, at least in the Windows world. But, it's mostly theoretical at this point, and theory is most often the end point of ambitious schemes to wrestle IT environments to the ground.

I was curious to understand precisely how the theory translates into the real world and see the roadmap for getting to the promised land of instrumenting IT resources to be "manageable from the beginning."

According to David Hamilton, director of Microsoft's Enterprise Management Division, the company's quest to move the intelligence from management software into the application itself will occur in several stages.

"Moving intelligence into applications comes in three stages," said Hamilton. "The first one is management products that come after a product release. All the intelligence is in the management software, and that has been the state of the industry. In this environment the level of application management is basic; you can manage Exchange through tools that look at the flow of data in and out of the box, but don't look inside the application or understand its context."

In stage 2, vendors provide management information about their applications, which can be used by systems management software, Hamilton said. He gave the example of the Microsoft Operations Manager (MOM) Management Pack (MP) for Exchange. "The MOM MP for Exchange can interpret the behavior and events generated by the application, creating the appropriate alerts, tracking thresholds, and delivering context-sensitive reports," Hamilton said. A net effect of going from stage 1 to stage 2 technology was a significant reduction in the alert-to-ticket ratio, Hamilton noted.
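As a rough illustration of the stage 2 pattern, the toy sketch below shows an external piece of management code interpreting events emitted by an application against known thresholds and turning them into alerts. This is not MOM's actual model or API; the metrics and limits are invented. Only events that cross a threshold become alerts, which is how the alert-to-ticket ratio comes down.

```python
# Toy illustration of stage 2: an external management pack interprets
# application events against known thresholds and raises alerts.
# Not MOM's actual model or API; metrics and limits are invented.
from typing import Dict, List

THRESHOLDS = {"queue_length": 1000, "delivery_latency_ms": 5000}

def evaluate(events: List[Dict]) -> List[str]:
    """Turn raw application events into threshold-aware alerts."""
    alerts = []
    for event in events:
        limit = THRESHOLDS.get(event["metric"])
        if limit is not None and event["value"] > limit:
            alerts.append(
                f"ALERT {event['source']}: {event['metric']}={event['value']} exceeds {limit}"
            )
    return alerts

events = [
    {"source": "mailbox-server-01", "metric": "queue_length", "value": 1500},
    {"source": "mailbox-server-01", "metric": "delivery_latency_ms", "value": 1200},
]
for alert in evaluate(events):
    print(alert)  # only the queue_length event becomes an alert
```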

Stage 3, the last of the three, is where the majority of the management infrastructure becomes part of the platform and is shifted into the development process. No more add-on management packs. Operational data can be fed into the development process, and the behavior of an application is stored according to application-specific management schemas. The major benefit to IT organizations of stage 3 compared to stage 2 is that cost and complexity are reduced, and manageable applications are more easily created, according to Hamilton. "This allows a true design for operations," Hamilton said.

Stage 3 doesn't mean that applications are self-healing or that human-to-machine interaction to manage IT infrastructure is a vestige of the past. "To make an application self-healing it needs to be designed as self-healing; otherwise it's after the fact and like a black box," Tatarinov said.

I asked for an example of stage 3, in which management is baked into an application from its inception, but the executives came up short. Hamilton cited the Management Pack for Microsoft SQL Server. "The database software includes code that tracks events at a deep level. Using that data, we can get ahead of the curve and make changes before any problems occur. It's still more stage 2 at this point, however."

Muglia told me that the timetable for reaching a mature stage 3 is about a decade. "We'll see substantive improvements in the next two to three years, and major improvements in five years. It will take another five years to roll it out because that's how long it takes for companies to roll it out."

A major forthcoming step is the release later this year of the next version of Visual Studio .NET, code-named Whidbey. It will include an application modeling tool, code-named Whitehorse, that lets developers provide information in SDM format to help with the administration of applications once they are deployed. The expectation is that applications will advertise their management data and methods as Web services, leaving them open to XML-based access and, by extension, to a choice of management consoles. Thus Microsoft's management software, as well as other consoles, will be able to interpret and act upon SDM information gleaned from an application.
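To illustrate what advertising management data as a Web service might look like, here is a minimal Python sketch of an application exposing an XML management endpoint that any SDM-aware console could poll. The endpoint path and payload are invented for illustration and are not Microsoft's actual interfaces.

```python
# Minimal sketch, assuming (as the article suggests) that an application
# advertises its management data over an XML/Web-services interface.
# The endpoint path and payload format are invented for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer

MANAGEMENT_XML = """<managementData application="OrderService">
  <metric name="requestsPerSecond">412</metric>
  <metric name="errorRate">0.002</metric>
</managementData>"""

class ManagementHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/management":
            body = MANAGEMENT_XML.encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/xml")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Any XML-aware management console could poll this endpoint and act on it.
    HTTPServer(("localhost", 8080), ManagementHandler).serve_forever()
```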

According to Hamilton, the SDM goes further than UML (Unified Modeling Language) by linking application design with infrastructure design and validation as part of the DSI. "With the SDM, an infrastructure architect can specify what the data center looks like, an application architect can design a service-oriented application, and a tester/operations manager will ensure that the designed application will actually work in the specified data center," Hamilton said. "Reducing the confusion between architects and operations managers by enabling them to communicate will save organizations time and money in deploying their mission-critical services."
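As a toy illustration of that validation step (not Whitehorse's actual mechanics), the sketch below checks an application's declared requirements against a description of the target data center and reports conflicts before deployment. The field names and figures are invented.

```python
# Toy illustration of design-time validation: does the application, as
# designed, fit the data center as specified? Not Whitehorse's actual
# mechanics; field names and figures are invented for illustration.
from typing import Dict, List

def validate(app_requirements: Dict[str, float],
             datacenter_limits: Dict[str, float]) -> List[str]:
    """Return conflicts between application needs and data center limits."""
    conflicts = []
    for requirement, needed in app_requirements.items():
        available = datacenter_limits.get(requirement)
        if available is None:
            conflicts.append(f"data center does not define '{requirement}'")
        elif needed > available:
            conflicts.append(f"{requirement}: needs {needed}, data center offers {available}")
    return conflicts

app = {"memory_gb": 16, "bandwidth_mbps": 200}
datacenter = {"memory_gb": 8, "bandwidth_mbps": 1000}

for conflict in validate(app, datacenter):
    print("Conflict:", conflict)  # caught during design, not at deployment
```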

This effort to get all the stakeholders, across infrastructure, application design and operations, on the same page is a critical step toward reducing the failure rate of IT projects, but it won't make code more reliable. Too often the original intent of the application or service is lost or corrupted as the various participants collaborate on a solution. Charles Simonyi, a former Microsoft executive and creator of seminal applications such as Word, is developing tools to help maintain the intent specified in the design throughout the development process. His company, Intentional Software, is leveraging software development innovations, such as aspect-oriented programming, to create a tighter linkage between the software design and the program code.

Standards bodies, such as the Distributed Management Task Force (DMTF), are working on standard specifications for management. Hamilton noted that Microsoft is working with the DMTF, but characterized the SDM as more focused on the complete lifecycle.

"We are working closely with the standards body to build and evolve the DMTF's Common Information Model (CIM) specification to deliver a platform-independent, industry standard server hardware management architecture across diverse IT environments in the data center," Hamilton said. "The SDM participates at a layer above the work; instead of focusing on the individual hardware components, the SDM takes a distributed system view and is used to describe how all of the various components of that IT system come together to function as one complete system and what the relationships are between those various components."

Of course, most enterprises have millions of lines of legacy code. Hamilton said that the SDM can be used to model existing systems, retrofitting them to fit alongside systems developed natively with the SDM. However, a substantial amount of legacy and custom code is poorly documented, which would make building SDM schemas a serious challenge. Regarding non-Windows environments, Hamilton said that Microsoft would work closely with industry partners. "Technically, there is nothing Windows-specific about the SDM--the schema can be used to describe IT systems that are non-Windows. While we will primarily focus on developing products and solutions for the Windows platform, we will work with those partners to ensure the creation of solutions for non-Windows environments that interoperate with Microsoft's SDM-based products and solutions for Windows."

I asked Muglia whether Microsoft would build a cross-platform management suite to compete with Unicenter, OpenView and Tivoli. "We have partners who have worked with MOM to do J2EE management on Linux, for example. Our goals are very different; our objectives are different from IBM or HP, and that's the reason we can partner with them. We just want to make sure customers can manage Windows very effectively."

I asked Andrew Mulholland, chief technology officer at Cap Gemini Ernst & Young, for his opinion on Microsoft's DSI and SDM efforts. "Microsoft has started to think about its architecture in a very different mode," Mulholland said. That different mode makes a great deal of sense in theory. In practice, however, the company has a long way to go to prove that its dynamic management schemas will work effectively across Windows and non-Windows platforms.

For now, Microsoft is playing catch-up with utility computing initiatives, such as HP's Adaptive Enterprise concept, IBM's On Demand, and Sun's N1. In addition, Linux is now viewed as a legitimate alternative to Windows in the data center. IBM is moving in similar directions with its autonomic computing initiatives and WebSphere platform. And, IBM and Cisco are promoting a common reporting format for correlating application failures in the disparate parts of a corporate data center.

Nonetheless, Microsoft's tenacity, bankroll, history, focus on development tools and talent indicate that the DSI and SDM will play a critical role in the future of enterprise Windows, and perhaps bridge the gap in heterogeneous environments.

You can write to me at dan.farber@cnet.com. If you're looking for my commentaries on other IT topics, check the archives.
