Worth noting: The Open Group (you remember them) this week spurred on work to add needed standardization to the interoperability of different but semantically equivalent data. The San Francisco-based, vendor-neutral organization is working to create a registry that holds descriptions and identifiers from the venerable Universal Data Element Framework (UDEF). A forum has been set up to chew this all over and put it into practice. The more, the merrier.
By using structured unique identifiers, UDEF will enable enterprises to categorize similar information in a standard way, thus easing the process of data transformation. Making translations between different data description standards will itself be standardized. At least that's the plan.
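To make the idea concrete, here's a minimal sketch of how shared identifiers ease translation. The field names, record shapes, and dotted identifier strings below are all illustrative assumptions, not actual UDEF registry entries -- the point is only that two systems naming the same data element differently can be matched through a common identifier instead of a bespoke field-by-field mapping.

```python
# Hypothetical UDEF-style identifiers: two systems call the same data
# element by different names, but both tag it with one shared ID.
CRM_SCHEMA = {"cust_nm": "a.5_12.35.8"}       # "customer name" in system A
ERP_SCHEMA = {"CustomerName": "a.5_12.35.8"}  # same concept in system B

def build_mapping(src_schema, dst_schema):
    """Map source field names to destination field names via shared IDs."""
    by_id = {uid: field for field, uid in dst_schema.items()}
    return {field: by_id[uid]
            for field, uid in src_schema.items()
            if uid in by_id}

def translate(record, src_schema, dst_schema):
    """Rename a record's fields using the identifier-derived mapping."""
    mapping = build_mapping(src_schema, dst_schema)
    return {mapping[k]: v for k, v in record.items() if k in mapping}

print(translate({"cust_nm": "Acme Corp"}, CRM_SCHEMA, ERP_SCHEMA))
# {'CustomerName': 'Acme Corp'}
```

The translator itself knows nothing about either system; everything it needs comes from the registry-style schemas, which is the "standardized translation" payoff described above.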
Data transformation is a very big deal right now. Many enterprises moving toward service-oriented architectures (SOA) are recognizing that they need to get their content and data acts together. The content, in essence, needs to be abstracted out of its native formats and descriptors. By doing so, enterprises open up more actionable content to more applications and services -- even as those services are themselves abstracted from their native environs and become available for use in standards-based composite applications and advanced business processes.
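What "abstracting content out of native formats" might look like in miniature: the same logical record arrives in two different native encodings, and a thin normalization layer exposes one canonical shape to every downstream service. The formats, field names, and canonical shape here are assumptions for illustration only.

```python
import csv
import io
import xml.etree.ElementTree as ET

def from_xml(text):
    """Pull the canonical record out of a native XML encoding."""
    root = ET.fromstring(text)
    return {"name": root.findtext("name"), "city": root.findtext("city")}

def from_csv(text):
    """Pull the same canonical record out of a native CSV encoding."""
    row = next(csv.DictReader(io.StringIO(text)))
    return {"name": row["name"], "city": row["city"]}

xml_doc = "<customer><name>Acme</name><city>Boston</city></customer>"
csv_doc = "name,city\nAcme,Boston"

# Both native forms collapse to the same canonical record, so services
# downstream never touch the original formats.
assert from_xml(xml_doc) == from_csv(csv_doc)
```

Once content is held in the canonical form, any service -- composite application, business process, analytics job -- can consume it without knowing which native system produced it.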
This is a chicken-and-egg kind of thing -- the apps-as-services need the content freed up to deliver full business value, but there's less reason to do the data transformation if there are no apps to take advantage of it. I'm advising my clients to do the data work at least on par with, and whenever possible in advance of, the infrastructure and applications services-enablement.
IBM has been doing a lot in this space, and recently vowed to open-source (another route to standardization) its UIMA (Unstructured Information Management Architecture) capabilities.