Dell World 2014 has been an interesting experience. I've had the opportunity to speak with a number of Dell executives about the company's efforts in cloud computing, big data and analytics, end-user computing support and the topic of "converged computing solutions."
During the event, Dell launched the PowerEdge FX, part of the company's 13th generation of PowerEdge systems. Dell's goal for these systems is to offer an architecture that makes in-box growth simple and also offers customers a great deal of configuration flexibility. Here's how the company describes its new offering:
The next-generation PowerEdge FX architecture is a 2U enclosure with six new PowerEdge server, storage and network IOA sleds built specifically to fit into the FX2 chassis and support varying workloads. Designed with integrated management capabilities, the FX architecture enables customers to easily configure, manage and add capacity to complete workload-specific blocks of IT resources.
This converged infrastructure approach affords efficiencies of shared power, I/O and management, integrated switching, and unsurpassed overall density capabilities at up to 40 2-socket servers in 10U. Building on Dell’s recent PowerEdge 13th generation server portfolio announcement, the next-generation PowerEdge FX2 also includes advanced systems management capabilities to reduce operational complexity and simplify datacenter management.
This system appears to offer a great deal of power and flexibility at a reasonable price.
The hyper-converged catchphrase
While the systems are indeed impressive, the constant use of the word "converged" during the presentations at Dell World got me thinking about how systems suppliers increasingly bandy about terms such as "hyper-converged" and even "super hyper-converged." What do these terms really mean?
A bit of history
In the early days of commercial computing, all of the functions needed to support workloads, including user interaction, application processing, data management, storage management and network management, were housed in a single enclosure: the mainframe (originally written "main frame"). Because every function lived in one place, these systems were straightforward to manage, and problems could be quickly isolated and resolved.
In the 1970s, companies such as Digital Equipment (acquired by Compaq and now part of HP) and Data General (now part of EMC) developed smaller, much less costly minicomputers. These were sold to smaller companies and to business units of larger companies, both to address local needs and to interact with the organization's mainframes.
Industry standard x86 servers came next
In the late 1980s and early 1990s, industry-standard, x86-based systems emerged, and the monolithic system began to decompose into separate servers, each supporting a specific function.
Functional or appliance servers
Interaction with workload users was handled by systems supporting client/server communication or front-end web services. Application processing was handled by other systems, and data management by still others. Storage management moved to storage servers living on special-purpose storage area networks, and, finally, network management fell to stand-alone routers and other networking equipment.
The complexity is killing us
Over the years, special-purpose appliance servers took on corporate email, security, database management and many other functions. The corporate datacenter became an ever more complex herd of systems, each performing a specific task. This environment grew increasingly difficult to manage, and the pendulum started to swing back toward bringing the components of a "system" together again.
The industry saw many different approaches to combining the flexibility and power of the multi-system approach with the straightforward management and administration of the mainframe. Terms such as "blade server," "unified computing system," and a few others became marketing catchphrases for hardware suppliers.
The newest generation of marketing-speak for mainframe-like systems built from industry-standard components is "converged." Depending upon how many functions have migrated back into the main enclosure, "converged" might be enhanced with modifiers such as "hyper," "super," or even "hyper super converged."
Suppliers such as Cisco, Dell, HP and IBM are all playing the game.
What does this all mean?
Suppliers such as Dell want to offer the simplicity of a mainframe-like system but don't want to be seen as going back to the 1960s for answers. No, they are always looking to the future and want to be seen as offering industry leadership.

So we can expect these systems to function more and more like the mainframe of the 1960s and 1970s, while being described with ever more adjectives such as "hyper" or "super" converged.
All in all, the trend makes sense. Converged systems can be easier to purchase, install and manage than addressing the same business requirements with an ever-growing herd of appliance servers.
Dell's PowerEdge FX series appears to demonstrate what is best about this trend.