NSF ups scale of supercomputing

The National Science Foundation added the most powerful supercomputing system yet to its offerings for science and engineering researchers nationwide with a $45 million award to the Pittsburgh Supercomputing Center on Thursday. The system will be able to perform up to 6 trillion calculations per second.

The National Science Board, NSF's governing body, approved the three-year Terascale Computing System award following a national competition that started in late 1999. The Pittsburgh Supercomputing Center, a partnership of the University of Pittsburgh and Carnegie Mellon University through MPC Corp., emerged as the winner from five proposals with its system from Compaq Computer Corp.

The Pittsburgh system will join the National Center for Supercomputing Applications at Urbana, Ill., and the San Diego Supercomputer Center as part of NSF's Partnerships for Advanced Computational Infrastructure program.

A PACI peer review team will decide which researchers will be given use of the new system, but NSF officials said access will be limited to projects that use a significant portion of the system's computing power.

"As I have said in the past, IT is a national imperative," NSF director Rita Colwell said at a press briefing to announce the National Science Board's decision. The Pittsburgh proposal also involves researchers at the Energy Department's Oak Ridge and Sandia national laboratories.

The need for additional high-end computing resources for the national research community was among the recommendations of the President's Information Technology Advisory Committee.

The Pittsburgh system's peak performance is six times that of any other machine in the PACI program, said Robert Borchers, director for advanced computational infrastructure and research at NSF. Potential applications for the Terascale system include tornado forecasting, protein-folding research, high-energy physics and turbulence simulation, he said.

Pending negotiations between the Pittsburgh team and NSF, the new system is expected to begin operation in February 2001. Researchers will access the system through high-performance communications networks such as the WorldCom Inc.-managed very high-speed Backbone Network Service (vBNS) and Abilene.

The system's peak performance is expected to reach 6 teraflops, or 6 trillion floating-point operations per second. The award includes $36 million for the Compaq machine and $9 million for its three-year operation.
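As a quick sanity check (illustrative, not from the article), the award components sum to the quoted total, and the performance figure converts as stated:

```python
# Sanity check on the figures quoted above (illustrative only).
machine_cost_musd = 36    # Compaq machine, in millions of dollars
operation_musd = 9        # three years of operation
total_musd = machine_cost_musd + operation_musd
assert total_musd == 45   # matches the $45 million award

peak_teraflops = 6
peak_flops = peak_teraflops * 10**12   # 6 trillion operations per second
print(f"Award: ${total_musd} million; peak: {peak_flops:,} flops")
```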

NSF's budget request for fiscal 2001 includes funding for a second Terascale computing center, but Congress did not finalize the bill before it left for recess.

"I view this as a starter kit," Borchers said. NSF's supercomputing resources are typically oversubscribed by a factor of two or three, he said.

NSF has not decided whether the Pittsburgh team will be permitted to compete for the second award, but the team will likely be discouraged from doing so, Borchers said.

The new system will offer the civilian science and engineering community the capabilities it needs, Colwell said.

The system will feature 2,728 Alpha processors from Compaq. The chips will be organized into 682 four-processor "nodes," each with a gigabyte of random access memory per processor, for roughly 2.7 terabytes of total RAM. The system's hard disk array will feature 50 terabytes of primary storage, with a further 300 terabytes of disk or tape storage available as needed. The system also will have high-performance software for system administration and job scheduling, along with compilers and other tools for programmers. The operating system will be Tru64, Compaq's version of Unix.
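The hardware figures above can be cross-checked with a little arithmetic; note that the per-processor rate below is inferred from the quoted totals, not stated in the article:

```python
# Cross-check of the Terascale system's quoted hardware figures.
processors = 2728
processors_per_node = 4
nodes = processors // processors_per_node
assert nodes == 682  # matches the article's node count

# ~2.7 TB of total RAM works out to roughly 1 GB per processor
# (i.e., 4 GB per four-processor node).
ram_gb_total = processors * 1          # assuming 1 GB per processor
ram_tb_total = ram_gb_total / 1000
print(f"{nodes} nodes, ~{ram_tb_total:.1f} TB RAM")

# A 6-teraflop peak implies ~2.2 gigaflops per Alpha processor
# (inferred; the article gives only the system-wide figure).
peak_flops = 6e12
per_processor_gflops = peak_flops / processors / 1e9
print(f"~{per_processor_gflops:.1f} Gflops per processor")
```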
