Users await results of Tera road test

On the cutting edge of computing, where hardware designers think of their machines as race cars, a new high-performance engine is undergoing a test drive.

The experimental computer architecture created by Tera Computer Co., Seattle, promises to solve the most daunting problem of supercomputing today: ''latency,'' or the amount of time processors spend idle, waiting for data from memory. If it works, Tera's Multithreaded Architecture (MTA) could be a revolutionary tool for running scientific models used in climate research, weapons design and other federal government applications.

Early results from the San Diego Supercomputer Center (SDSC), which is conducting the first government-funded trials of the Tera MTA outside the company's laboratory, show the new technology appears to be competitive with the fastest processors in production today. But in the cost-conscious federal marketplace, buyers and users of the most powerful computers in the world want more proof: not only that a full-fledged system will work as well as or better than what they have, but also that if they invest in one, the company will be around to support them in the future.

''Tera does have the programming paradigm that is becoming 'the quiet bet' of the computing industry,'' said George Lake, project scientist with NASA's High Performance Computing and Communications Earth and Space Science Program. But after watching numerous supercomputer vendors fold or shift their focus to mass-market technologies in the past five years, ''everyone will be waiting to see Tera sell lots of machines to 'everyone else.' ''

They have been waiting for more than a decade already. The Defense Advanced Research Projects Agency (DARPA) has contributed more than $18 million to developing the MTA computer since Tera was founded in 1987. Potential customers first glimpsed its wave-shaped gray chassis in a San Jose, Calif., convention hall in late November, when SDSC computer scientists reported the performance of a single Tera processor on industry benchmark tests.

The results, said Wayne Pfeiffer, deputy director of SDSC, were ''encouraging, but we weren't able to tune all of them to get the optimal performance.'' To date, they have been unable to run complete versions of real-world applications because they have not been able to add the horsepower of multiple chips.

''Everyone is waiting to see when subsequent processors are installed,'' said Steven Wallach, adviser with the venture-capital firm CenterPoint Ventures and member of a presidential advisory committee on high-end computing and networking. ''Then they will be under the microscope.''

''We're simply having difficulty getting production-quality [network] boards from our vendors,'' so scientists in San Diego can add a second processor to the system, explained John Rottsalk, Tera's president. ''They're very, very close,'' he said. Company officials declined to identify specifically who is making the boards, although Tera's World Wide Web site lists several board manufacturers as partners.

The tests are being paid for with a $4.2 million grant from the National Science Foundation and a $1.9 million contract from DARPA, awarded last summer.

Currently, there are two basic types of high-performance computers on the market: vector supercomputers and ''scalable parallel'' systems. A vector supercomputer relies on a few very fast processors, each stepping through a program's instructions one at a time. A scalable parallel machine instead divides the instructions among many processors. With either design, there are physical limits on how fast a processor can retrieve data from memory, so processors spend much of their time waiting.
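To make the latency barrier concrete, here is a toy model in Python. The cycle counts are invented for illustration and describe no real machine; the sketch shows only that dividing work among more processors shortens a job without touching the fraction of time each processor sits idle.

    # Toy model of the memory-latency barrier. All cycle counts are
    # invented for illustration; they describe no real machine.
    COMPUTE_CYCLES = 1    # useful work done per word fetched from memory
    LATENCY_CYCLES = 100  # cycles a processor sits idle per fetch

    def serial_time(n_words):
        """One processor stepping through the work: every fetch stalls it."""
        return n_words * (LATENCY_CYCLES + COMPUTE_CYCLES)

    def parallel_time(n_words, n_procs):
        """Work split among n_procs processors: each still stalls per fetch."""
        return (n_words // n_procs) * (LATENCY_CYCLES + COMPUTE_CYCLES)

    n = 1_000_000
    print(serial_time(n))         # 101000000 cycles
    print(parallel_time(n, 500))  # 202000 cycles: 500 times faster, yet
                                  # each processor still idles ~99% of the time

In this model, the useful fraction of each processor's time is COMPUTE_CYCLES / (COMPUTE_CYCLES + LATENCY_CYCLES), about 1 percent, no matter how many processors are added.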

Over the past few years, agencies have begun replacing their older vector machines with scalable parallel systems because these models scale up more cost-effectively. But they are harder to program, said Robert Borchers, director of Advanced Scientific Computing with NSF. ''We've got parallel systems running in our program with as many as 500 processors, and people have been successful in using them. But on an application-by-application basis, it takes a lot of hard labor and rethinking'' to make the switch, he said.

Here is where Tera executives believe they have made a breakthrough. ''We find that we are quite routinely able to take full production codes and recompile them so they run on the machine fast, essentially untouched,'' Rottsalk said.

They accomplished this by designing a chip that keeps up to 128 tasks, or ''threads,'' in flight at once: when one task stalls waiting for memory, the processor switches to another that is ready to run. The system's compiler divides a program's instructions among these threads. This approach, called multithreading, means processors spend less time idling, and memory latency is effectively hidden.
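A similarly rough sketch shows why keeping many threads in flight helps. This is an idealized model of hardware multithreading in general, with an assumed 100-cycle memory latency; it is not a description of Tera's actual chip.

    # Idealized model of hardware multithreading; assumed numbers,
    # not a description of Tera's actual hardware.
    LATENCY_CYCLES = 100  # assumed cost of one memory fetch

    def utilization(n_threads, latency=LATENCY_CYCLES):
        """Fraction of cycles spent on useful work when each thread does
        one compute cycle, then waits `latency` cycles for a fetch.
        While one thread waits, the processor runs the other threads."""
        # The wait is fully hidden once the other threads can supply
        # `latency` cycles of work, i.e. once n_threads >= latency + 1.
        return min(1.0, n_threads / (latency + 1))

    for t in (1, 32, 128):
        print(f"{t:3d} threads -> {utilization(t):4.0%} busy")
    # Output:
    #   1 threads ->   1% busy  (a conventional processor, mostly idle)
    #  32 threads ->  32% busy
    # 128 threads -> 100% busy  (enough threads to cover the 100-cycle wait)

Under these assumptions, 128 threads is just enough to cover a 100-cycle memory wait, which is the article's point: the processor almost never has to sit idle waiting for memory.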

''I guess what people are really hoping is that it's going to be a good, general-purpose machine,'' Borchers said, not only for scientific simulations but also for high-volume transaction processing or data-mining applications that need powerful servers. ''You can dream five years out. Put the architecture on a chip and put it in workstations.''

Competition Welcome

After watching so many high-performance computing companies merge or bow out of the marketplace recently, federal agencies are eager to see another competitor enter the fray.

''If there's only one methodology, one type of architecture, one type of system, it must be a compromise for some jobs,'' said Charles Nietubicz, director of the Army Research Laboratory (ARL) Major Shared Resource Center, in Aberdeen, Md., one of four primary sites for supercomputing at the Defense Department. ''We need some diversity because all of our problems aren't the same.''

Only five vendors — Silicon Graphics Inc./Cray Research, IBM Corp., Digital Equipment Corp., Hewlett-Packard Co. and Sun Microsystems Inc. — are actively selling parallel processing systems to the government. SGI/Cray, meanwhile, is the only U.S. manufacturer of vector supercomputers. NASA's Lake said that after a trade dispute last year involving Cray's main competitors in the vector market, Fujitsu Ltd. and NEC Technologies Inc., quashed a purchase of the Japanese-made machines by the National Center for Atmospheric Research, other agencies took it as a sign to buy domestically.

''Competition lends itself to more creativity,'' said Don Fitzpatrick, president of High Performance Technologies Inc., a Reston, Va.-based integrator. ''I remember early on when Cray [vector systems were] the only solution, and we started wiring together workstations to compete against it,'' with parallel processing as the result.

Whether that means agencies are ready to pay $5 million to $40 million for a Tera computer is not clear.

Brett Berlin, a consultant who advises DOD on high-performance computing technologies, said Tera has missed ''several windows of opportunity'' to sell to agencies in the years it has taken to design its system. ''If this is a better computer for a better cost, there will be a market,'' he said, but ''I don't know what procurement cycle Tera will be equipped to meet.''

The company faces a tough sell even to its most promising potential customers. ''Our stated policy is we will build the computers...out of the volume units that the industry produces, so it's very important that we have companies that are very profitable,'' said Gil Wiegand, deputy assistant secretary for strategic computing and simulation with the Energy Department. Wiegand oversees the Accelerated Strategic Computing Initiative, a multibillion-dollar program to build computers capable of simulating tests of nuclear weapons.

Tera's competition comes mainly from SGI/Cray, which supplies 75 percent of the government's most powerful supercomputers, according to a list of the world's top 500 machines maintained by Mannheim University in Germany.

Tera is ''trying to position this as an alternative or replacement for vector [systems], but there's no real proof the technology will be able to be deployed in that sense,'' said Bill Minto, director of platform product management with SGI/Cray.

Although at SDSC the Tera processor performed comparably to the processor used in SGI/Cray's current vector supercomputer model, the T90, ''until they have a fairly large-scale processor system there and can demonstrate sustained performance, I see no reason to credit them with their marketing claims,'' Minto said.

''I'm certain our competition will do everything they can to add to skepticism,'' Rottsalk said. ''If we can demonstrate we have the capability to ship production systems that do as we say, we'll have very good customer support. I think by the end of the year we will be able to say that.''