
A Deeper Approach to Transaction Performance Management

Written by Adrian Bridgwater, Contributor

As a journalist, I always take a slightly guilty glance over my shoulder when I have to look up a new word. But when working on a performance management editorial project this week, I had to make sure I knew, in no uncertain terms, how the term “substrate” was being used.

I had been looking at transaction performance management paradigms and the differing means of data communication as they have evolved over the years.

In this context, a ‘substrate’ refers to a platform, if you like: a bedrock foundation for a means of data exchange, essentially something that is acted upon by the data that forms business transactions. So defining and identifying the differing substrates is a prerequisite if you are going to extend upwards and examine, refine and improve performance management of the applications in the IT ecosystem.

As substrates have been developed they have, in some cases, evolved to become branded stand-alone transaction systems such as FTP, DBMS Remote Procedure Calls, standard ‘request-response’ protocols such as IBM’s DRDA or Sybase’s TDS, and commercial replication mechanisms. On a higher level, these were accompanied by CRM/ERP software packages such as Siebel, PeopleSoft, JD Edwards and SAP.

Some argue that as corporate IT shops move towards a single data distribution mechanism in order to accommodate growth, they must now look for a unified way to execute transactions along a single pipeline. A unified platform (or substrate) offers, so the argument goes, the ability to both merge and separate disparate office systems in a more natural way.

Companies that talk about transaction performance management (and it’s hard to find many other than CA and IBM Tivoli) say that it is replacing the costly ‘Build/Deploy/Tear-Down/Rebuild’ model that was common in the late 80s and early 90s.

Experience has shown, we’re told, that it is more desirable to break the cycle of re-designing architecture to fit the current business model and instead develop a common re-usable infrastructure.

I asked a contact of mine, who works for a database company that operates with this type of technology, for some guiding words on the subject, and here's what I got.

“Self-managing transaction environments are a great idea, but to some extent these are still a fantasy. What is needed in modern data environments today is the ability to see into the inner workings of transactions so that we can track down and correct performance problems at a glance,” said Patrick Enright, director, Sybase 365.
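That kind of visibility usually starts at the application layer. As a rough illustration (my own sketch, not anything Sybase ships), the snippet below wraps each database transaction in a timer and flags anything slower than an arbitrary threshold; the table, labels and threshold are invented purely for demonstration.

```python
import sqlite3
import time
from contextlib import contextmanager

SLOW_THRESHOLD_SECONDS = 0.5  # illustrative cut-off, not a recommendation

@contextmanager
def timed_transaction(conn, label):
    """Run a block of statements as one transaction and report how long it took."""
    start = time.perf_counter()
    try:
        with conn:  # the sqlite3 connection commits on success, rolls back on error
            yield conn
    finally:
        elapsed = time.perf_counter() - start
        if elapsed > SLOW_THRESHOLD_SECONDS:
            print(f"[slow transaction] {label}: {elapsed:.3f}s")

# Wrap each business transaction so outliers show up at a glance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

with timed_transaction(conn, "insert order batch"):
    conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                     [(i * 9.99,) for i in range(1000)])
```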

DBMS vendors are reportedly building more ‘smarts’ into their database engines, and Forrester Group predicts total DBMS automation, such as seamless tuning of SQL statements at runtime (Forrester Group, April 2008).
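To make that prediction concrete, the tuning step is still largely manual today. Below is a hedged sketch (using SQLite simply because it ships with Python, not because it is what these vendors sell) of the human workflow that ‘seamless tuning at runtime’ would automate: ask the engine for its query plan, notice a full table scan, and add an index by hand.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)")

query = "SELECT SUM(amount) FROM orders WHERE customer_id = ?"

# Ask the engine how it intends to execute the statement.
for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print(row)  # e.g. a 'SCAN orders' detail line: a full table scan

# Today a human reacts to the plan; 'total automation' would mean the engine does this itself.
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)):
    print(row)  # now a 'SEARCH ... USING INDEX idx_orders_customer' plan
```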

However, we’re a long way from self-tuning and self-managing transaction systems. Until that time arrives, it doesn’t seem unreasonable to suggest that IT teams need to manage and tune their respective environments as effectively as possible, with the above transaction performance issues in mind, does it?
