DARPA resumes super research role

Contracts target high-performance computing

The Defense Advanced Research Projects Agency recently awarded several contracts in the first phase of a seven-year program created to provide the next generation of high-performance computing deemed essential to national security.

The three-phase High Productivity Computing Systems (HPCS) program also returned DARPA to a leadership role it essentially dropped in the late 1980s, when the end of the Cold War made government involvement in high-end computer development less urgent.

Until then, DARPA had been a primary instigator of research and development in advanced computing, part of the effort to keep the United States ahead in developing major weapons systems, with ultrafast supercomputers always in demand for designing and testing nuclear bombs and other weapons.

But the collapse of the Soviet Union dramatically lessened the threat of conflict, and with the rapidly expanding commercial computing industry seemingly prepared to fill the gap, the bleeding-edge research that DARPA fostered was seen as unnecessary.

The events of Sept. 11, though not the only impetus, have given DARPA a big boost. The sense of urgency is only increased by the perception that trends in commercial high-performance computing, coupled with the limitations of current chip technology, will lead to an inevitable gap in high-end computing that threatens national security applications.

"The program is targeted toward those kinds of security applications that are unlikely to be satisfied by grid computers and by some of the computer clusters," said Robert Graybill, manager of the HPCS program. "They may be satisfying them today, to a degree, but we don't see them as scaling up to support the kinds of efficiencies that will be needed in the future."

Contracts of around $3 million each for the first 12-month phase of the program, which is aimed at developing a range of HPCS concepts and "productivity metrics," were awarded to IBM Corp., Cray Inc., SGI and Sun Microsystems Inc. A three-year second phase will focus on R&D and risk-reduction engineering, and the final four-year phase will be geared toward developing full-fledged production systems.

The goal is to increase the emphasis on, and visibility of, different criteria for high-performance computing, Graybill said, and to address the "time from idea to solution," or the time it takes for a scientist to write a program, run it on the computer and come back with results.

Future high-end computing could eventually be handled by such technologies as quantum computers, but Graybill said there will likely be a "significant" lag between today's laboratory experiments and the development of production systems. Also, quantum computing has the potential to address particular problems, but whether it will replace general-purpose computers is unknown.

It remains to be seen whether the "aggressive" HPCS program develops to the extent that DARPA officials anticipate, said James Rottsolk, Cray chairman and chief executive officer. But the high-performance computing industry should nevertheless be encouraged that DARPA is back in the game.

"There's been very little movement in the industry in the 10 years since DARPA abandoned its program," he said. What incremental progress there has been has come from improvements in the speed and performance of processors, "but it's difficult to build large systems entirely from commodity components. So I think the DARPA push is encouraging."

DARPA's approach also breaks from the past focus on supercomputer capability as measured by how high vendors could ratchet up peak megaflops (millions of floating-point operations per second) and gigaflops performance.

The HPCS program will instead focus on a more holistic approach to high-end computing that examines how such features as ease of programming, applications performance and portability, scalability and reliability of systems, and tamper resistance fit into the high-end computing scenario.

The challenge for the program, DARPA officials say, is to combine the elements of system architecture, programming models, software and hardware into "productive" systems that double in value every 18 months, paralleling Moore's Law. That law stems from Intel Corp. co-founder Gordon Moore's observation that the number of transistors on a chip doubles roughly every 18 months as technology advances.

The goal is to improve the computational performance and efficiency of critical national security applications by a factor of 10 to 40 by the end of the decade.
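As a rough check, compounding a doubling every 18 months over the program's stated timelines lands in the same range as that goal. The short calculation below is a minimal sketch of that arithmetic, assuming only the 18-month doubling period and the seven- to roughly eight-year horizons mentioned in the article; the specific year values are illustrative, not DARPA figures.

# Back-of-the-envelope check: value doubling every 18 months,
# compounded over the HPCS program's stated timelines (illustrative).
DOUBLING_PERIOD_YEARS = 1.5

def compounded_factor(years: float) -> float:
    """Growth factor after `years` of doubling every 1.5 years."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (7, 8):  # seven-year program; roughly eight years to decade's end
    print(f"{years} years -> about {compounded_factor(years):.0f}x")
# Prints roughly 25x and 40x, consistent with the 10-to-40-fold target.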

Major technical hurdles face high-performance computing in the near future, said Steve Miller, chief scientist for SGI. As users push processor counts beyond the roughly 1,024 limit of today's systems, they run into major problems moving data, such as messages, around the machine, because there is so much more data to manage, he said. And they will need to build something to handle the "multi-terabytes of main memory" that high-performance systems will have within the next few years.

"So we are going to have to find ways of dealing with that much memory and with the size of the datasets that will be going through this," he said. "That's what we have to start getting into now."

The DARPA program comes none too soon in Rottsolk's view.

"The effects of Sept. 11 showed that we really don't have the abilities we should have to fuse very large pieces of information" and derive results from that, he said.

Robinson is a freelance journalist based in Portland, Ore. He can be reached at hullite@mindspring.com.

***

From programming to hardware

The High Productivity Computing Systems program aims to create high-end programming environments, software tools, architectures and hardware components. HPCS program objectives cover:

Performance — Improve the real (not peak) performance of critical national security applications by a factor of 10 to 40.

Productivity — Reduce the cost of developing, operating and maintaining application solutions.

Portability — Insulate research and operational application software from system specifics.

Robustness — Develop techniques to improve the reliability of systems by protecting them from outside attacks, hardware faults and programming errors.
