In the first part of this article, we explained some HPC fundamentals and described how HPC is expanding beyond research and academia into mainstream businesses. In this second part, we examine the nature and future of the HPC industry, and look at HPC applications in selected business scenarios.
From laboratory to industry, High Performance Computing (HPC) clusters of commercial off-the-shelf (COTS) components - originally developed by researchers - are now finding their way into business and industry for mainstream technical computing. Powerful and scalable clusters are making inroads into industries like manufacturing, energy, digital content creation, healthcare and financial services, amongst others. New processor capacity, the freedom of open industry standards, and the opportunity to make fundamental advances in science and technology have produced an industry-wide focus on HPC development.
Thriving in a framework of open and vigorous competition, the vast community of equipment manufacturers, software developers, systems integrators and service providers invest, innovate and collectively provide a wide range of increasingly sophisticated and affordable solutions.
A key contributor to the extraordinary power of COTS clusters has been the relentless increase in compute capacity. The impact of these advances has been nothing short of dramatic. In the "old" days when supercomputers came customized, a GFlop (one billion floating-point operations per second) of computing capacity could cost up to US$5 million; with the advent of SMP systems, this fell to about US$200,000 per GFlop. Today, you can build a TFlop system (one trillion floating-point operations per second, or 1,000 GFlops) for less than US$500,000.
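The scale of this price/performance improvement can be made concrete with a quick calculation using the figures quoted above (the dollar amounts are the article's, not independently sourced):

```python
# Cost per GFlop at three stages of HPC hardware (figures from the article).
custom_supercomputer = 5_000_000        # US$ per GFlop, customized supercomputers
smp_system = 200_000                    # US$ per GFlop, SMP-based systems
cots_cluster = 500_000 / 1_000          # US$500,000 per TFlop = US$500 per GFlop

# Improvement factor from custom supercomputers to today's COTS clusters.
improvement = custom_supercomputer / cots_cluster

print(f"COTS cluster cost: ${cots_cluster:,.0f} per GFlop")
print(f"Improvement over custom supercomputers: {improvement:,.0f}x")
```

In other words, taking the article's figures at face value, the cost per GFlop has fallen by roughly four orders of magnitude, from US$5 million to about US$500.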
Companies such as Intel are working to advance the price/performance of HPC clusters through active participation in industry efforts to develop open standards and essential middleware. Intel fellows and scientists are deeply involved in the ongoing work of the OpenMP Architecture Review Board, the Open Cluster Group, the MPI Forum and the Global Grid Forum. On the performance front, Intel-powered systems presently make up one-third of the world's top 500 supercomputers.
From a utilization standpoint, there are essentially three broad adoption "paradigms" or stages of growth: early adopters, high-end research and engineering, and commercial business adoption.
Let's take a look at a few commercial scenarios.
HPC in Energy
HPC systems are essential in the energy industry, playing an important part in improving economic returns by reducing cycle times during the exploration and appraisal phases.
HPC in Life Sciences
Life Science is an adolescent industry that is maturing and expanding with breathtaking speed from its origin at the confluence of biology and high technology. To understand the pace and urgency of investment, consider just one fact in the pharmaceuticals industry: it currently takes twelve years and US$800 million to bring a new drug to market. If applied bioscience can reduce that cost, and accelerate the pace of new product introduction, the returns on investment will be formidable.
Whether in basic science, biotechnology or pharmaceutical applications, these fields share a deep reliance on computational techniques and infrastructure to store, manage and process vast amounts of data. Life Science research is now inseparable from large-scale data management, analysis and numerical modeling. Any growth in this field will inevitably drive an increased demand for IT infrastructure products.
HPC in Digital Content Creation
The making of a major movie is a vast undertaking that can cost millions of dollars to produce. Apart from creating winning storylines, these movies consume a massive amount of compute time to render and deliver the 90 or more minutes of movie magic that increasingly sophisticated audiences expect.
As video, film, print, and web technologies continue to merge, content is distributed in ever-increasing volumes and new formats to growing audiences. In addition, major studios are increasingly outsourcing the rendering of digital effects to "render farms" that use large banks of servers.
Large studios may deploy hundreds or even thousands of processors in high-performance desktops, workstations, and servers. HPC render farms provide massive compute capacity to meet the enormous demand for high-resolution content in a variety of formats.
HPC lies at the heart of groundbreaking, high-risk innovative work that provides the ideas and methods for new disciplinary paradigms. Our greatest needs lie in improving systems, software and algorithm-level support at the high end, and in exploring innovative architectures and devices at the front end. All this is making it possible for research and the rapidly growing commercial industries to deliver premium solutions by pushing the limits of available computing performance.
Side Bar - Daqing: Exploratory Power
The Company: Producing some 50 million tons of crude oil per year, China's Daqing Oilfield Company Ltd. is one of the world's leading companies specializing in the exploration and production of crude oil and natural gas.
The Challenge: Daqing and its competitors needed to find hidden oil and gas reservoirs quickly and accurately. In 2002, Daqing spent some US$180 million on oil and gas exploration.
The System: To upgrade its seismic data processing, the company deployed cutting-edge software from Paradigm on an Intel architecture-based cluster of 129 dual-CPU Xeon systems from Legend.
The Result: Daqing estimates savings of some 70 percent against a proprietary supercomputer solution. In 2002, the discovery of new strata bearing reserves of 30 billion cubic meters of natural gas in northeast China's Heilongjiang Province was the largest ever in the region.
Side Bar - SERC: Frontiers of Science
The Company: The Supercomputing Education Research Centre (SERC) provides state-of-the-art computing facilities to the faculty and students of the Indian Institute of Science (IISc). IISc has more than 2000 active researchers working in almost all frontier areas of science and technology.
The Challenge: SERC researchers are working to further discoveries in the areas of gene sequencing, gene mapping, computational fluid dynamics and other cutting-edge research projects.
The System: A Silicon Graphics (SGI) Altix 3000 system, powered by 32 Intel Itanium 2 processors, with an architecture designed to scale up to 1,024 processors in the future.
The Result: Availability of highly scalable computing resources.
Side Bar - Weta: Visual Magic
The Company: New Zealand's Weta Digital, along with its sister outfit, Weta Workshop, is the special effects powerhouse behind The Lord of the Rings trilogy.
The Challenge: With production on the second movie, The Two Towers, following closely on the heels of the first, the company had to quickly deploy a high-performance, reliable computing environment that could handle demanding digital content creation applications - and millions of hours of rendering.
The System: Most of Weta's magic was done on 2,000 Intel Xeon servers and workstations. Using a balanced computing environment, it distributed the job of rendering visual effects across this cluster.
The Result: Weta's processing farm created the incredible effects that are still unfolding in The Lord of the Rings trilogy.
Vijay Keshav is the Industry Solutions Manager for the Asia Pacific region at Intel®, looking after High Performance Computing and Life Sciences. His key role is to bring together the various strategies and solutions of Intel® and its ecosystem, offering customers a compelling value proposition for deploying such solutions on Intel® Architecture.
An electrical engineer with a master's degree in management, he worked in senior sales, marketing and business development roles with leading IT companies across the Asia Pacific region before joining Intel® Asia Electronics Inc in 2000.