Feds spark high-end computing resurgence
Interest and money from federal agencies could spark a resurgence in demand for supercomputers.
But rather than just focusing on power and speed, the new group of high-end computers must work with a range of basic applications and data repositories, and incorporate networking functions, according to participants at the Supercomputing 2002 conference held last month in Baltimore.
Rita Colwell, director of the National Science Foundation, outlined the agency's plans for a major cyberinfrastructure initiative that she said will be crucial to 21st-century scientific research and will focus on both traditional supercomputer architectures and grid computing.
NSF, whose budget is set to double in fiscal 2003, already has announced it will fund the TeraGrid, a distributed facility that will give scientists nationwide access to the most advanced computational facilities available. NSF also is focusing on the Grid Physics Network (GriPhyN), which links U.S. and European researchers and could lead to the development of petascale virtual data grids, networks on which data-intensive applications can be shared. The agency also is working on the Network for Earthquake Engineering Simulation, which Colwell described as a "21st century model for collaboration, a laboratory without walls or clocks."
But Colwell stressed that computational power for high-end computing is not enough anymore, echoing one of the conference's major themes. There needs to be at least as much emphasis on research into basic information technology and applications, she said.
"We will need to expand our network capabilities and our large data repositories and develop new computational, analytical and visualization tools," she said.
The agency recently released the second version of free middleware developed under the NSF Middleware Initiative (NMI), and it is drafting a strategic plan for the program to be released in April 2003, according to Alan Blatecky, NMI's program director.
A report from NSF's Advisory Committee on Cyberinfrastructure, to be published soon, will lay out the agency's future direction on this effort, Colwell said.
Following Colwell's lead, Energy Secretary Spencer Abraham announced that his department has awarded IBM Corp. a $290 million contract to build what would be the world's two most powerful computers: a 100-teraflop supercomputer based on a massive cluster of POWER-based IBM eServer and storage systems, and a 360-teraflop machine that will run Linux on 130,000 processors. One teraflop is 1 trillion floating-point operations/sec.
The 100-teraflop ASCI Purple system will be the primary supercomputer for DOE's Advanced Simulation and Computing Initiative and will be used primarily for virtual testing of nuclear weapons. The 360-teraflop Blue Gene/L supercomputer will be based on a new system architecture that IBM and DOE are developing and will be used to simulate complex phenomena such as turbulence, biological processes and the behavior of explosives.
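To put those ratings in perspective, here is a quick back-of-the-envelope calculation, written as a short Python snippet for this story (an illustrative sketch based only on the figures quoted above, not a tool from IBM or DOE):

```python
# Illustrative arithmetic using the figures quoted in this story.
TERAFLOP = 1e12  # 1 teraflop = 1 trillion floating-point operations/sec

purple_peak = 100 * TERAFLOP      # ASCI Purple's quoted peak rating
blue_gene_peak = 360 * TERAFLOP   # Blue Gene/L's quoted peak rating
blue_gene_procs = 130_000         # Blue Gene/L's quoted processor count

# Peak throughput per Blue Gene/L processor
per_proc = blue_gene_peak / blue_gene_procs
print(f"Blue Gene/L per-processor peak: {per_proc / 1e9:.2f} gigaflops")
# -> about 2.77 gigaflops: the design leans on many modest processors
#    rather than a small number of very fast ones

print(f"Blue Gene/L vs. ASCI Purple peak: {blue_gene_peak / purple_peak:.1f}x")
```

The per-processor figure, under 3 gigaflops, underscores that Blue Gene/L's headline number comes from scale rather than individual processor speed.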
The government is also a primary backer of Cray Inc.'s new X1 supercomputer, which has a theoretical peak performance of 52.4 teraflops and a massive ceiling of 65.5 terabytes of memory. Its basic compute element is capable of 12.8 gigaflops, giving a single 64-processor chassis configuration a peak performance of 819 gigaflops.
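Those chassis numbers are consistent with straightforward multiplication, as this short illustrative Python check shows (again derived only from the figures in this story, not from Cray):

```python
# Sanity-check the Cray X1 figures quoted above (illustrative only).
GIGAFLOP = 1e9

element_peak = 12.8 * GIGAFLOP   # quoted peak of one basic compute element
chassis_elements = 64            # processors per chassis, per the article

chassis_peak = element_peak * chassis_elements
print(f"64-processor chassis peak: {chassis_peak / GIGAFLOP:.1f} gigaflops")
# -> 819.2 gigaflops, matching the quoted 819-gigaflop chassis rating

system_ceiling = 52.4e12         # quoted theoretical peak of a full system
print(f"Compute elements at the ceiling: {system_ceiling / element_peak:.0f}")
# -> about 4,094 elements, on the order of 64 fully populated chassis
```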
The Army, Defense Department, DOE and the National Security Agency are reportedly early customers of the Cray X1.
Cray's ultimate target is a system that will run at sustained, not just theoretical, petaflop speeds (1 petaflop is 1,000 teraflops), according to James Rottsolk, the company's chairman, president and chief executive officer.
"Our road map is focused on this commitment," he said.
A current debate in the high-end computing community concerns the applicability of cluster systems vs. traditional, single-architecture designs such as the Cray machine.
The attraction of clusters is evident, given the rise of systems based on Intel Corp. processors in the TOP500 list of supercomputers, which is released annually at the conference. Two of the top 10 systems — at the National Oceanic and Atmospheric Administration and Louisiana State University — are Intel-based clusters. The list includes a total of 56 Intel-based systems vs. two just three years ago.
According to Tom Gibbs, director of industry marketing at Intel's Solutions Market Development Group, it's the processors' cost-effectiveness, coupled with the inherently scalable performance that can be extracted from them, that makes Intel-based cluster systems so attractive. The Aberdeen Group, a Boston-based consulting firm, has predicted that these clusters could make up 80 percent of the total high-performance computing market by 2005.
However, Cray and its backers contend that the tightly coupled architecture its system is based on is better suited to complex scientific and engineering tasks and actually works out to be less expensive when the two approaches are subjected to total life cycle cost analyses.
Defense Advanced Research Projects Agency officials, however, said that cluster computers will not be able to handle the types of national security applications that will arise in the future.
Several conference panels devoted to DARPA's High Productivity Computing Systems program drew large crowds. The program's goal, to develop the next generation of high-performance computers, takes DARPA back to its role during the Cold War, when it was a major influence on the design of U.S. supercomputers.
This time around, DARPA's focus is also on such things as ease of programming, application performance and portability, scalability and reliability, rather than just gigaflop or petaflop ratings.
It's a focus that will define this new resurgence in high-end computing, most industry participants on the panels said.
Without a doubt, "programmability is the payoff," said Burton Smith, Cray's chief scientist.
Robinson is a freelance journalist based in Portland, Ore. He can be reached at hullite@mindspring.com.
***
High-end activity
Growing interest among federal agencies could help spur the demand for supercomputers. Who's doing what:
* National Science Foundation is funding the TeraGrid to give scientists nationwide access to advanced computing facilities, the Grid Physics Network to link U.S. and European researchers, and the Network for Earthquake Engineering Simulation.
* Energy Department awarded IBM Corp. a $290 million contract to build the world's two most powerful computers — a 100-teraflop supercomputer for virtual testing of nuclear weapons and a 360-teraflop computer running Linux to simulate complex phenomena such as turbulence.