Scalability? Don't worry. Application complexity? Worry.

Application complexity is something that lots of hardware — whether from the cloud or internal data center — cannot fix.

I remember about a decade ago, as Microsoft’s Windows NT Server system was ascending in the market as a new player for the data center, competing vendors were attacking its scalability. In one presentation, Oracle Chairman Larry Ellison even brought up a Microsoft Word document and typed in the word “scalability,” which got redlined for spell check — he then quipped how Word didn’t even recognize the term. The amount of transactions that could be handled per second was a boasting point for vendors, and the basis for countless benchmarking studies.

Of course, most if not all of today’s systems — including those built on Intel and RISC-based chipsets — can manage pretty big workloads, especially considering all the graphics and video surging through our enterprises and networks.

But perhaps we have reached a point where the juice you have in your data center may not even matter, as scalability may be virtually unlimited, thanks to the proliferation of cloud and grid-based computing.

That’s the point made by Dan Woods in a recent Forbes commentary. To illustrate how far we’ve come, he observes how Twitter is able to command massive resources to manage millions of messages. “If Twitter started today or a year ago, would we have seen a fail whale? I suspect not,” he speculates.

There are two reasons why scalability has exploded in recent times, he explains:

“The first is that the rise of cloud infrastructure has made it possible to massively scale a business with on-demand resources. The second reason is less well-known. The standardization of the cloud and a new crop of systems and application management software is fleshing out … massive cloud-based scalability. The tools needed to grow from viability to scalability are now here.”

While the heavy lifting of scalability appears to be off the table as a pressing concern, Woods points to the next challenge on enterprise agendas: application complexity. This is something that lots of hardware — whether from the cloud or internal data center — cannot fix. “Configuring applications and making changes is something that must be done very carefully. In most cases, there is no shortcut to understanding the moves that are safe to make…. Managing such application complexity is a big part of the undifferentiated heavy lifting.”

Jake Sorofman, chief marketing officer for rPath, and SOA advocate from his Systinet days, weighed in on Woods' post, observing that vendors are pondering this challenge as they move their cloud formations forward into the enterprise. "I would take his story one important step further," he relates. "These economies aren’t just about new innovations and making startups more capital efficient; they’re also the catalyst for the fundamental transformation of enterprise IT that’s happening today."

He adds that the IT automation movement is gaining momentum. "Why? Because the world has changed in profound and fundamental ways." For example, he says, "system scale is compounding by orders of magnitude, IT is under pressure to become rapidly responsive, and op ex budgets are adjusting down to the 'new normal.'"

This is fueling efforts to adopt IT automation "to manage the complexity of software—to abstract away from the muck of enabling infrastructure and focus on applications and business services that deliver real and measurable value.... IT is being forced to change—to rise from the muck, once and for all."