
Will a faster BEA JRockit expand J2EE's realtime appeal?

Written by David Berlind


<digression>For the record, J2EE (Java 2 Enterprise Edition) is no longer the acronym that Sun and Java licensees are using to describe the server-side implementation of Java -- otherwise known as a Java-based application server (Java isn't your only choice for an app server; there's .NET too). Going forward, "they" want us to use JEE (simply Java Enterprise Edition) instead.</digression>

As is often the case with middleware -- a type of platform that inserts a special interpretation step into software execution so that applications developed for it can run unchanged across multiple operating systems -- one complaint about Java over the years is that it's slow compared to software that's designed to interface directly with the operating system (e.g., software written in C or C++).  In other words, there's a performance penalty associated with the overhead it takes to make interpretation possible (in realtime).  Over the years, Java's cross-platform appeal and its applicability to Web sites that needed some business logic stitched into their pages made that overhead worth the compromise.  But when IT shops needed more speed, they were largely limited to brute-force techniques such as application server clustering or simply deploying faster systems.

One other one-time improvement they could make, if they weren't running it already, was to swap in BEA's JRockit Java Virtual Machine (JVM) for Sun's JVM.  The JVM is the software component inside implementations of Java that does most of Java's heavy lifting (including interpretation). Fundamentally, the architecture behind the desktop (J2SE) and server (J2EE) Java Runtime Environments (JRE) is the same.  Both can involve the same JVM from Sun but are surrounded by different software accoutrements.  For enterprises looking to improve the performance of their server-side applications, BEA has offered JRockit -- a speedy JVM that's optimized for server-side Java (what Sun often calls "Big Java") and that can be substituted for Sun's JVM.
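If you're curious which JVM your application actually landed on after such a swap, a few standard Java system properties will tell you. Here's a minimal sketch (the class name is my own invention) that prints them; on a JRockit installation, the VM name typically identifies itself as JRockit rather than Sun's HotSpot:

    // Prints which JVM this program is running on -- handy for confirming
    // that the JRockit launcher, not Sun's, is the one being used.
    public class WhichJvm {
        public static void main(String[] args) {
            System.out.println("VM name:    " + System.getProperty("java.vm.name"));
            System.out.println("VM vendor:  " + System.getProperty("java.vm.vendor"));
            System.out.println("VM version: " + System.getProperty("java.vm.version"));
        }
    }

Run it with each vendor's java launcher and compare the output.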

But even after replacing JVMs and throwing more hardware at "the problem," J2EE was still coming up short on performance when it came to applications that have little or no tolerance for software-induced latency.  For companies involved in the Java ecosystem like Sun and BEA, the net net was that J2EE simply wasn't a candidate for certain projects -- particularly in sectors like telecommunications and finance where certain forms of data processing must be realtime, all the time.

In moves that both Sun and BEA officials claim will improve J2EE's viability in those sectors -- to literally go where Java hasn't gone before (due to performance concerns) -- both companies have attacked the problem from their respective strengths.  Earlier this week, Sun rolled out two new servers -- the T1000 and T2000 -- that are based on the company's latest thread monster of a multi-core chip: the UltraSparc T1 (formerly codenamed "Niagara").  I say thread monster because in its highest-end configuration, an 8-core T1 can support 32 simultaneously executing pipelines or "threads" of instructions.  Sun claims the new servers are record breaking and that they're inherently well matched to J2EE applications because of the multi-threaded nature of Java.
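To see why Java and a chip like the T1 are such a natural fit, consider how a typical Java server parcels work out to threads. This sketch (class and method names are mine, not Sun's or BEA's) uses the java.util.concurrent facilities that arrived with Java 5 to size a worker pool to however many hardware threads the machine reports -- 32 on a fully configured T1:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Sketch: size a worker pool to the hardware threads the OS reports
    // (32 on an 8-core UltraSparc T1) and let independent requests run
    // in parallel across them.
    public class ThreadedServerSketch {
        public static void main(String[] args) {
            int hwThreads = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(hwThreads);
            for (int i = 0; i < 1000; i++) {
                final int requestId = i;
                pool.execute(new Runnable() {
                    public void run() {
                        // Each simulated request runs on whichever
                        // hardware thread the scheduler assigns it to.
                        handleRequest(requestId);
                    }
                });
            }
            pool.shutdown();
        }

        private static void handleRequest(int id) {
            // Placeholder for real business logic.
        }
    }

The more hardware threads the chip exposes, the more of those Runnables execute truly simultaneously -- with no changes to the Java code.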

In addition to the gains that could accrue to Java applications by way of Sun's newest servers, BEA this week is claiming a Java performance breakthrough of its own -- having found a way to give its JRockit JVM more thrust -- a way that somehow, until now, has escaped JRockit's engineers.  In my interview with BEA's Developer Relations vice president Franz Aman and its new chief technology officer Rob Levy, the "discovery" seemed akin to suddenly realizing your car had dual exhaust pipes but you'd never taken advantage of them (spreading a car engine's exhaust across two pipes will often improve that engine's performance).  The interview is available as an MP3 that can be downloaded or, if you're already subscribed to ZDNet's IT Matters series of audio podcasts, it will show up on your system or MP3 player automatically. See ZDNet's podcasts: How to tune in.

What BEA claims to have discovered is a more efficient way to keep a system's memory clear of artifacts and vestiges from prior execution runs of software.  The technique is known as garbage collection and, as Levy described the state of affairs prior to the new version of JRockit, there was no telling how long it would take to clean up the mess that was left behind by previous software runs.  Now, BEA claims to have found a way to essentially clean up the mess as it happens, which means that instead of pausing every so often to clean up a mess that could be any size (a process that could introduce unpredictable amounts of latency), garbage collection is done in realtime.  As a result, BEA officials say that not only has fastest gotten faster, it's also more predictable -- an important issue for realtime processing where there's no room for unpredictable fits and starts.
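You don't need BEA's tooling to witness the latency problem the old approach created. This crude sketch (entirely my own, not BEA's technique) allocates garbage in a tight loop and times each iteration; most iterations complete in microseconds, but the outliers that take tens of milliseconds generally line up with stop-the-world collections:

    // Crude pause detector: churn the heap and record the worst-case
    // iteration time. Outliers typically correspond to the collector
    // stopping the application -- the unpredictable latency at issue here.
    public class PauseWatcher {
        public static void main(String[] args) {
            long worst = 0;
            long sink = 0;  // keeps the allocation from being optimized away
            for (int i = 0; i < 1000000; i++) {
                long start = System.nanoTime();
                byte[] garbage = new byte[1024];  // allocate and abandon
                sink += garbage.length;
                long elapsed = System.nanoTime() - start;
                if (elapsed > worst) {
                    worst = elapsed;
                    System.out.println("new worst iteration: " + (elapsed / 1000000) + " ms");
                }
            }
            System.out.println("allocated " + sink + " bytes total");
        }
    }

On a JVM that collects concurrently, the worst-case numbers should stay far smaller and far more consistent than on one that periodically stops the world.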

Here, from the interview, are a couple of noteworthy quotes:

BEA CTO Rob Levy on detecting memory leaks: If you're actually trying to figure out why your application is running slowly, and where you have a memory leak, it can be a pretty difficult task. In production, no one typically allows you to load in additional detection software because it takes a lot of overhead.  And in a test environment, you'll never find a memory leak.  So people typically reboot.  Our memory leak detection software runs at no overhead and it can pinpoint a memory leak down to the line of source code; something that no operating system vendor can provide you.
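To appreciate why pinpointing a line of source code matters, it helps to see what a Java "memory leak" usually looks like. Java's garbage collector only reclaims objects that nothing references anymore, so the classic leak is a long-lived collection that keeps accumulating entries -- a pattern like this hypothetical sketch (names are mine):

    import java.util.ArrayList;
    import java.util.List;

    // The textbook Java leak: objects stay reachable from a static
    // collection, so the collector can never reclaim them. A leak detector
    // that works at the source-code level would point at the add() call.
    public class LeakyCache {
        private static final List<byte[]> CACHE = new ArrayList<byte[]>();

        public static void handle(byte[] payload) {
            CACHE.add(payload);  // entries are added but never removed
        }
    }

Nothing here is an error an operating system could see; the heap simply grows until the application slows down or dies -- which is why, as Levy says, teams resort to rebooting.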

Levy on JRockit's improvements in garbage collection: Think of the performance of a Java application inside the JVM. There's always a need at some point in the underlying engine to sort of stop all processing, pull all garbage out of the JVM, clean it up, and then allow processing to resume. Up until now, that process had been sort of intermittent, based on the amount of garbage that had been collected, and what happens is that it is not that more or less garbage gets collected. It's about when the collection happens; the latency of the time that it takes to go through the process creates an environment where the application stops from 100 up to 150 ms to allow the process to happen. What we've really done is develop a very smart level of algorithm that allows us to continuously pull garbage from underneath the application that's running on the JVM.  So we're deterministically guaranteed that the load that is required to pull garbage out of the machinery is stable, which is where you can start guaranteeing, regardless of the strength of the machine, a stable [predictable] garbage collection environment.
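For readers who want to watch collection behavior on their own systems, the standard java.lang.management API that shipped with Java 5 exposes per-collector statistics on any compliant JVM (this is generic instrumentation, not JRockit's deterministic collector):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.util.List;

    // Reports how many times each collector has run and how much
    // cumulative wall-clock time it has consumed -- a rough measure of
    // the "stop all processing" cost Levy describes.
    public class GcStats {
        public static void main(String[] args) {
            List<GarbageCollectorMXBean> gcs =
                    ManagementFactory.getGarbageCollectorMXBeans();
            for (GarbageCollectorMXBean gc : gcs) {
                System.out.println(gc.getName() + ": "
                        + gc.getCollectionCount() + " collections, "
                        + gc.getCollectionTime() + " ms total");
            }
        }
    }

Sample it periodically from a running application and the deltas tell you how much time the collector is stealing between your samples.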

Near the end of the interview, given that Sun has often spoken of Java's superior garbage collection techniques, I asked why it took so long to discover what seems to me to be something so obvious.  Answer? Listen to the interview!  It really does seem like it was an accidental discovery.
