Scalable architectures give supercomputing new life
The end of the Cold War was seen by many as the beginning of the end of the supercomputer in the federal government, but that has not been the case. Instead, sparked by new challenges associated with the end of the nuclear arms race and increasingly complex military confrontations, such as the Persian Gulf War, demand remains strong and steady.
What is changing is the way the government buys its supercomputing power. Traditionally, that has meant investing tens of millions of dollars in high-end, ultrafast systems and the software and tools that came with them. Now the emphasis is on scalable supercomputer architectures that allow agencies to buy in at the inexpensive end and build their systems up over time according to user demands.
The High-Performance Computing and Communications (HPCC) program is the umbrella organization that helps coordinate the high-performance computing research and development efforts of federal agencies. Its estimated fiscal 1996 budget was $1 billion, and President Clinton has requested the same amount for fiscal 1997.
The funding at individual agencies shows much the same kind of stability. Of the 12 agencies participating in the HPCC, six submitted fiscal 1997 budget requests that dropped compared with fiscal 1996, but by less than $5 million in all but one case. For the other half dozen, requests either increased or stayed the same. And in lock step with the funding picture, the main government applications for supercomputers - weather and environmental modeling, energy management, weapons systems development and biomedical imaging - mirror those of recent years.
Cray Research Inc., which introduced the world's first true supercomputer in the mid-1970s, is also at the forefront of the switch to scalable computing. In November it announced its newest high-end supercomputer, the T3E-900. It is the first commercially available system capable of performing 1 trillion calculations per second, and it can scale from just a few processors to 2,048.
The U.S. list price begins at $500,000, and the system gives users an easy upgrade path to add new modules or replace old ones, according to company officials. Cray cites "near linear scalability, offering low entry prices and pay-as-you-go prices, as well as the ability to achieve extreme levels of performance" as one of six attributes that describe ideal supercomputing.
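What "near linear" scaling means in practice is easy to sketch with rough arithmetic. The figures below are assumptions for illustration only - roughly 900 million floating-point operations per second per processor, the number implied by the system's name, and a small, made-up efficiency loss each time the processor count doubles - not Cray specifications:

    # Back-of-the-envelope sketch of near-linear scaling for a T3E-class machine.
    # Assumed figures, not vendor specifications: ~900 MFLOPS peak per processor
    # and a small illustrative efficiency loss each time the machine doubles in size.

    PER_PROCESSOR_MFLOPS = 900   # assumed per-processor peak
    LOSS_PER_DOUBLING = 0.02     # illustrative overhead per doubling

    def aggregate_gflops(processors: int) -> float:
        """Estimate aggregate peak, in GFLOPS, for a given processor count."""
        doublings = max(0, processors.bit_length() - 1)
        efficiency = (1 - LOSS_PER_DOUBLING) ** doublings
        return processors * PER_PROCESSOR_MFLOPS * efficiency / 1000.0

    for n in (16, 128, 1024, 2048):
        print(f"{n:5d} processors -> ~{aggregate_gflops(n):,.0f} GFLOPS")

Even with the made-up overhead, a full 2,048-processor configuration lands comfortably above the trillion-calculations-per-second mark, which is the sense in which adding processors buys nearly proportional performance.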
The T3E-900 is the successor to the company's T3E, which also is scalable and which Cray will install under contracts this year at the Energy Department's Ernest Orlando Lawrence Berkeley National Laboratory in California, Los Alamos National Laboratory in New Mexico and the Pittsburgh Supercomputing Center in Pennsylvania.
At Berkeley, the T3E will be the centerpiece of the lab's National Energy Research Scientific Computing Center, a facility that provides high-performance computing to thousands of DOE researchers worldwide. The $110 million Los Alamos deal was part of DOE's Accelerated Strategic Computing Initiative (ASCI).
Brett Berlin, an industry consultant and president of Virginia-based Brett Berlin & Associates, characterized the transition to scalable architectures as "the age of the user," meaning that users recognize the need for extremely high-speed computing to develop modeling techniques that literally represent the real world.
"Everyone is now busy porting their codes to be able to use some kind of scalable platforms " he said. "We have to be able to represent real phenomenology to the degree that we can be confident we will learn more than if we were doing tests. How do you test missile defense? You certainly don't call your enemy and ask him to lob 20 missiles."
DOE has been one of the main drivers of this move toward scalable systems through ASCI, which began in 1995 and is worth more than $1 billion over 10 years. ASCI will use supercomputers to simulate underground nuclear tests and monitor the aging of the country's nuclear stockpile; it is looking to boost supercomputing speeds to 1,000 times beyond those of any existing system.
One of the ASCI program winners, IBM Corp., was awarded a $93 million contract in July from DOE's Lawrence Livermore National Laboratory, Livermore, Calif., to build a supercomputer capable of processing 3 trillion calculations per second. IBM's RS/6000 SP, a general-purpose scalable parallel computer, will be the system installed there.
At the heart of the IBM RS/6000 architecture is the most powerful microprocessor the company has ever developed, the Power2 Super Chip (P2SC), which contains more than 15 million transistors on a single chip smaller than a postage stamp. The RS/6000 can grow to accommodate as many as 512 processors - the scalability needed to reach the goal of 3 trillion calculations per second.
Michael Henesey, program director of high-performance technical computing for IBM's RS/6000 division, said ASCI program directors consulted with industry and molded the procurement to foster the emerging scalable architecture, which DOE is betting will carry it at least through the next 10 years of supercomputing needs.
"What researchers are looking for is a technology path that they can get on and ride for several years " Henesey said. "The driving factor is the return on investment in designing high-end technology. The core group of high-performance computer users will stay fairly consistent but there are other areas that are starting to look at supercomputers to solve complex problems such as command control and communications defense modeling and simulations."
Such scalable systems also are being employed by the Defense Department as it shifts from the intensive intelligence gathering of the Cold War years to the increasingly complex arena of warfighting. DOD's High-Performance Computing Modernization Office, the consolidated supercomputing research arm of the four armed services, increasingly is relying on the modeling and simulation that supercomputers provide to design, test and evaluate weapons systems.
During the Persian Gulf War, this office used high-performance simulation to develop, in a matter of weeks, an entirely new munition that was used for deep penetration of enemy bunkers. More than 4,000 users at 62 labs and four major shared-research centers are part of the office's high-performance research and development team, said Kay Howell, director of DOD's High-Performance Computing Modernization Program.

"The goal of the program is to modernize DOD's high-performance computing capabilities so that we can use high-performance computing to provide technological advances to the warfighter," Howell said.
"Things that are happening in the battlefield are much more complicated than they used to be " she said. "You never know from day to day who the enemy is. You have to be able to respond very quickly. Simulation allows us to reduce risk reduce cost and reduce the amount of time that's required to deliver a weapons system."
In high-performance computing, researchers always have a larger appetite for performance than is available. Howell's program is no exception. "As soon as something better comes along, you want to take advantage of that to increase the size of the model you're developing," she said. "We're certainly taking advantage of the scalable systems. Typically you don't start out scaling your project up to hundreds of processors. Once you've done the numbers work, then you may look at scaling up."
Howell said the emergence of the new systems has been prompted by the overall small market for very-high-end supercomputers, which always has been a minuscule piece of the computing pie. But the scalable systems do have their limitations, she said, including memory access, input and output of data, and other performance hits caused by communication between processors after scaling up.
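The trade-off Howell describes - more processors but more time lost to coordination - can be sketched with a simple Amdahl's-law-style estimate. The fractions below are purely illustrative, not measurements from any DOD workload:

    # Illustrative Amdahl's-law-style estimate of why adding processors brings
    # diminishing returns once serial work and inter-processor communication
    # start to dominate. All fractions are assumed values, not measured ones.

    SERIAL_FRACTION = 0.05            # assumed share of work that cannot be parallelized
    COMM_COST_PER_PROCESSOR = 0.0005  # assumed communication overhead added per processor

    def speedup(processors: int) -> float:
        """Estimated speedup over a single processor for a hypothetical workload."""
        parallel_time = (1 - SERIAL_FRACTION) / processors
        comm_time = COMM_COST_PER_PROCESSOR * processors
        return 1.0 / (SERIAL_FRACTION + parallel_time + comm_time)

    for n in (1, 8, 64, 256, 512):
        print(f"{n:4d} processors -> speedup of roughly {speedup(n):.1f}x")

In this toy model the payoff flattens and eventually reverses as communication costs pile up, which is why the "numbers work" Howell mentions comes before any commitment to hundreds of processors.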
Several other agencies have begun tapping the high-speed computational power of these emerging scalable systems. For example, researchers at the National Oceanic and Atmospheric Administration (NOAA) are moving to scalable systems to advance weather forecasting through modeling.
The Geophysical Fluid Dynamics Laboratory (GFDL) is at the center of NOAA's efforts to understand the Earth's climate. Researchers at GFDL used the Cray T90 and the Cray T3E to develop the hurricane modeling system that became operational in 1995.
The system - which the National Weather Service uses for its primary hurricane forecasting model - has improved the accuracy of hurricane forecasting by 40 percent and has been an integral part of predicting every hurricane's course during the past couple of seasons.
In addition to the emergence of scalable systems, the high-performance computing market was marked for the first time in 1995 by an increase in sales of midrange high-performance systems. According to a 1996 research report by International Data Corp., several vendors posted midrange system revenue increases from 1994 to 1995 from sales to classified Defense sites.
For example, Silicon Graphics Inc. (SGI) saw revenue from midrange sales to classified Defense sites increase from $54 million in 1994 to $75 million in 1995. Cray's revenue from this segment also jumped, from $12 million in 1994 to $30 million in 1995. The vendor with the third-largest sales in this market, Digital Equipment Corp., saw its revenue increase from $17 million to $23 million during the period.
SGI has made inroads in the federal high-performance computing arena with its servers and visualization products, said Lynne Corddry, the company's manager of federal business development. The four major shared-resource centers in DOD's modernization program use servers from SGI's Origin family of systems. The Origin200 and the Origin2000 are based on scalable shared-memory multiprocessing.
The key component of the Origin2000, which is at the high end of the server family, is again the building-block approach to computing, which allows users to expand to as many as 128 processors, according to company officials.
One of the truly explosive areas of growth for the company, though, has been visualization. High-performance computing projects generate enormous amounts of data, and users are looking for ways to display that information graphically. Corddry said every high-end federal deal for which Cray bids - SGI and Cray merged their operations earlier this year - has users clamoring for visualization products.
Taking satellite and other imagery data and displaying it in real time has proven especially useful to military users, including tank commanders, she said.
"If you're a tank commander you can visually see the environment [in real time] before you go into battle " Corddry said. "You don't have to just sit there and look at your data. You can see your data. The visualization aspect and the simulation aspect are two areas that are making this market grow."As scalable supercomputing takes more of the federal market the whole area could expand to take on a much greater range of capabilities.
Some industry watchers already have dubbed these midrange applications "desktop supercomputers," though for most the definition of a supercomputer is still that of a high-end machine.
But change probably is inevitable. According to Karl Freund, vice president of marketing at Cray, people tend to use whatever level of performance fits and call that a supercomputer. For him, a supercomputer is a "computational system designed to solve a problem that cannot now be solved by any general-purpose workstation." That probably makes it a certainty that in a few years, what is a supercomputer today will be a desktop computer.
At A Glance
Status: Driven by post-Cold War requirements, demand for supercomputing performance, while not increasing, has been remarkably resilient and steady.
Issues: Scalable supercomputers, already a factor in the commercial market, are quickly catching on among government buyers, who are attracted by their "pay-as-you-use" flexibility.
Outlook: Good. While overall demand is unlikely to increase for the foreseeable future, scalable systems and more flexible multiprocessor programming could increase the range of applications for supercomputers.