
Commentary: XML on a chip?

Hardware that accelerates at the application layer may sound impossible, but a solution now being mooted by several vendors suggests otherwise.
Written by Larry Seltzer, Contributor
After my "horror of XML" Halloween column, I was contacted by vendors selling a solution of which I had never dreamed: hardware acceleration for XML.

It's an idea that seems to me at once obvious and unnatural. Back when I followed the processor industry closely, I was interested in the debate between those who believed in the future of custom logic and those who believed in general-purpose processors. Almost anything can be done on a general-purpose processor, and some believe it's best to do as much as possible there and optimize the speed of those processors to the benefit of every program on the system.

The practical world has embraced custom acceleration hardware to varying degrees. For instance, even though you can turn a general-purpose computer into a router--and indeed that is how routers began--the industry knows how to make high-performance, economical routers that just do routing, and they're a better deal. Further up the network stack, there are plenty of hardware products on the market that accelerate load balancing and cache site data. Large, modern sites like ZDNet probably couldn't handle their loads without these devices.

But hardware that accelerates at the application layer? Surely this is going too far!

Well, maybe not. There's already precedent for this in the form of SSL acceleration hardware such as the IBM 4758 PCI Cryptographic Coprocessor.

And I've already said it in my "horror" column: In a world based on XML Web services, XML processing will consume enormous amounts of CPU, memory, and network bandwidth. Especially in a transactional environment, the performance burden imposed by this new class of overhead can make applications unusable.

Companies like DataPower are delivering XML processing hardware not only to accelerate time-consuming tasks like XSL transformations and schema validation, but also security-related features like encryption and filtering. There are a few other companies in the business, like Sarvega, but XML acceleration hardware is still sort of under the radar of the big guys.

XSLT (XSL Transformations), a W3C specification for the creation of documents of arbitrary types from XML and style sheets, seems to be a main thrust of the XML hardware business, although there are other products and other opportunities.

There are a few ways that programs running on conventional computers can interface with XML acceleration devices. For example, the devices can accept SOAP messages, most likely over HTTP (SOAP can run over almost any transport, but XML accelerators need to limit the options they support). The box might also sit in front of a Web site as a reverse proxy, responding to Web browsers with XML transformed through XSLT into HTML.
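To make the SOAP-over-HTTP case concrete, here is a minimal sketch--in Java, since JAXP comes up below--of a client posting a SOAP envelope to an accelerator's endpoint. The host name and the getQuote operation are invented for illustration and don't come from any vendor's documentation.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class SoapPost {
        public static void main(String[] args) throws Exception {
            // Hypothetical accelerator endpoint and operation, for illustration only.
            URL url = new URL("http://xml-accelerator.example.com/services/quote");
            String envelope =
                "<?xml version=\"1.0\"?>" +
                "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "<soap:Body><getQuote><symbol>ZD</symbol></getQuote></soap:Body>" +
                "</soap:Envelope>";

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            conn.setRequestProperty("SOAPAction", "\"getQuote\""); // SOAP 1.1 convention

            OutputStream out = conn.getOutputStream();
            out.write(envelope.getBytes("UTF-8"));
            out.close();

            // The device parses, validates, and transforms the message in hardware,
            // then returns an ordinary HTTP response.
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }

From the caller's point of view it's just another Web service; the hardware behind the URL is invisible.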

The DataPower boxes also have what the company calls a coprocessor mode, in which an application server calls into the device via JAXP (Java API for XML Processing) to perform a transformation--for example, to mediate between a Web server and a database that exchanges XML.
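I haven't seen DataPower's own API, so the following is only a generic JAXP sketch of the coprocessor idea: because JAXP lets a vendor supply its own TransformerFactory behind the standard interface, an application server's existing transformation code barely has to change when the work is shipped off to a device. The factory class name in the comment is a made-up placeholder, not a real product API.

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class CoprocessorTransform {
        public static void main(String[] args) throws Exception {
            // Standard JAXP: the concrete factory is pluggable, so a vendor could
            // provide one that hands the work to an external device rather than
            // doing it in-process. The property below is the normal JAXP override
            // hook; the class name is hypothetical.
            // System.setProperty("javax.xml.transform.TransformerFactory",
            //     "com.example.accelerator.RemoteTransformerFactory");

            TransformerFactory factory = TransformerFactory.newInstance();
            Transformer transformer =
                factory.newTransformer(new StreamSource("catalog.xsl"));

            // Apply the stylesheet: XML in, HTML (or any other text format) out.
            transformer.transform(new StreamSource("catalog.xml"),
                                  new StreamResult("catalog.html"));
        }
    }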

The whole point of using hardware like this is to accelerate performance. Of course, as I said last week, theoretical performance arguments are nice, but you need to demonstrate how the hardware performs in the real world. Sarvega has a noteworthy paper with a proposed benchmark for XSLT transformation. The paper includes some interesting and intelligent discussion of the challenges of designing a benchmark for a complex, loosely coupled distributed computing environment such as a Web services application. Sarvega left its own products out of the benchmark, since the results would have skewed the geometric mean used to show relative performance in the chart. Suffice it to say that dedicated hardware far outperforms software solutions running on a moderately priced server. (The fastest software XML processor was Microsoft's MSXML.) Sarvega claims that its boxes operate essentially at wire speed, reducing the overall performance impact of some complex XML applications almost to zero.

The other big performance problem with XML's verbosity is the increase in network bandwidth consumption. According to DataPower, its products can use gzip, a popular open-source compressor/decompressor, in conjunction with HTTP to compress data moving across the wire. The catch is that this isn't exactly a standard, although you could say it's based on standards. Still, it's probably a smart way to address the problem, so I'm curious to see whether other vendors make provisions to implement gzip by default. I doubt it, and I seriously doubt that Microsoft does with its Web services offerings. Sarvega sees this as a future problem, although an inevitable one, and plans to deal with it later.
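Mechanically, at least, the receiving end is straightforward. The sketch below, which uses only the standard java.util.zip classes and a hypothetical URL, shows a client asking for and unpacking a gzip-compressed response over HTTP.

    import java.io.BufferedReader;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.zip.GZIPInputStream;

    public class GzipFetch {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint returning a large XML document.
            URL url = new URL("http://xml-accelerator.example.com/feed.xml");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();

            // Tell the server we can accept a compressed response.
            conn.setRequestProperty("Accept-Encoding", "gzip");

            InputStream in = conn.getInputStream();
            if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
                in = new GZIPInputStream(in); // decompress on the fly
            }

            BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
            for (String line; (line = reader.readLine()) != null; ) {
                // hand the decompressed XML to whatever parser you're using
            }
            reader.close();
        }
    }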

Maybe we should expect hardware acceleration to continue to move further up the application stack, even if it seems at odds with the way we've done things. As Sarvega points out, the company can make a cost-reduction argument to CIOs, and nothing overcomes old habits like the promise of saving money.

Larry has written software and computer articles since 1983. He has worked for software companies and IT departments, and has managed test labs at National Software Testing Labs, PC Week, and PC Magazine.
