I'm always on the lookout for ways to slash spending on application servers. Here's a new one: utility processing for VM-based applications. Azul Systems says that its "unbound compute" appliances can help clear out racks and racks of servers by diverting Java VM calls to the company's proprietary SMP boxes, which are specifically designed and optimized for virtual machine-based applications.
"The historical model of how we deliver computing really needs to go away," the company's president and CEO Stephen DeWitt told me a pre-release briefing last week. "We need to architect away that inefficiency, as we have in networking and storage."
The inefficiency he's talking about is the tendency to over-specify application servers to be sure of handling peak loads. Because each application runs on a separate instance, each server has to be specified to sustain the peak load, even though that means most of the capacity remains idle the rest of the time. To make matters worse, the peak loads are often unknowable in advance. "Capacity and planning around processor and memory has always been a challenge," said DeWitt. "What IT has historically done is they've overprovisioned."
The problem is getting worse with growing adoption of service-oriented architectures, which make it even more difficult to project when and where those peak loads are going to present themselves. At the same time, there's pressure to keep a lid on data center space and reduce running costs. "In order for SOAs to realize their huge financial and business goal impacts, these costs need to be taken away," he said. "People are out of space, they're out of power, they need to be able to do more with less."
The best that companies like Sun and HP can come up with as a solution to this problem is the notion of outsourcing processing to utility computing centers. That may work for batch jobs, but it doesn't address the peak load scenario, which is when you can least afford to have your response times slump because the processing is suddenly taking place in someone else's data center, several router hops away from your own. Server virtualization within the data center gets a lot closer to what's needed, but as DeWitt says, "at the end of the day a partition is a partition." Azul's dedicated processors not only offload the processing, they also speed it up and expand its capabilities: 64-bit hardware delivers a huge 96GB memory heap, along with other performance-enhancing features (explained in more detail on the company's website) such as pauseless garbage collection and optimistic thread concurrency.
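For a sense of what that heap headroom means in practice, here's a hedged sketch using the standard HotSpot-style heap flags (the jar name is a placeholder, and none of this is Azul-specific): a 32-bit JVM caps the heap at roughly 2-4GB, which is exactly the ceiling a 64-bit machine with that much physical memory removes.

```shell
# Illustrative only: "app.jar" is a placeholder application.
# -Xms sets the initial heap, -Xmx the maximum; a 96GB maximum
# is only feasible on a 64-bit VM backed by that much real memory.
java -Xms4g -Xmx96g -jar app.jar
```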
As DeWitt puts it, "The solution is to allow the applications to tap into a basket of unbound compute." Although the Azul compute pool and appliances are proprietary technology, one of the things I like about the solution is that it works by simply declaring a path variable that intercepts calls to the virtual machine and diverts them to the compute pool. In principle, that means you're not locked into Azul if you choose in the future to substitute an alternative from some other vendor.
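As a hedged sketch of why that matters for lock-in (the directory name and mechanism here are my own assumptions for illustration, not Azul's documented setup), redirecting an application server to an alternate JVM can be as light-touch as an environment change:

```shell
# Illustrative only: /opt/azul/jvm is an assumed install path.
# Prepending its bin directory makes its `java` shadow the stock
# launcher, so unmodified app-server startup scripts pick it up.
JAVA_HOME=/opt/azul/jvm
PATH="$JAVA_HOME/bin:$PATH"
echo "${PATH%%:*}"   # → /opt/azul/jvm/bin (first PATH entry wins)
```

Reversing the swap is equally mechanical, which is what makes the no-lock-in argument plausible in principle.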
Azul this week announced new capabilities including utility chargeback and transactional quality of service, designed to fit in with a service provider model. The primary market is among IT organizations that want to allocate costs according to resources consumed (easy when each application runs on a separate server, more difficult in a shared services environment). But there's also a market among managed services providers and on-demand providers -- for those who can afford Azul's $200k- to $800k-per-box price points.
The economics are such that Azul really has no choice but to target the J2EE server market initially (versions 1.4 and 1.5). However, anyone who's keen to trash some of their Microsoft servers will be pleased to learn there's a version in the works for .NET, which also uses a VM architecture and so is equally amenable to the Azul treatment.