Los Alamos' self-assessment sets the bar high
Officials at the Energy Department lab spent a decade hammering out requirements for their storage environment.
The typical organization may spend weeks or months going through a storage assessment.
Los Alamos National Laboratory isn't the typical organization. Officials at the Energy Department lab spent a decade hammering out requirements for their ideal storage environment. Part of their work involves simulating nuclear weapons tests. As officials threw more and more teraflops of processing power at the task, they realized that the input/output channels that fetch data needed to keep pace.
Specifically, the lab needed a scalable, parallel file system and storage solution. "Ten years ago, we looked for solutions, but there weren't any at the time," said Gary Grider, scalable input/output team leader with the High Performance Computing Group at Los Alamos.
In the late 1980s and early 1990s, Los Alamos officials were using proprietary file systems specific to a given supercomputer. At one point, they were running 48 smaller supercomputers, each with its own file system. They got by with the proprietary file systems but discovered that the arrangement wasn't sustainable.
Los Alamos' computing experts meet annually to see where the organization stands against its five-year planning cycle. They forecast their performance needs over that period and identify barriers that could get in the way. The lab's input/output mavens met a few years ago and concluded that they would encounter acute scalability problems by the late 1990s.
Officials then began to define requirements for scalable file system storage. One key finding was that they would need 1 gigabit/sec throughput for every teraflop of computing power. The assessment resulted in a 37-page requirements document.
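That ratio makes the sizing arithmetic straightforward: required I/O bandwidth scales linearly with compute. Here is a minimal sketch of the rule as stated, written in Python; the machine sizes are hypothetical illustrations, not Los Alamos figures.

    # Sketch of the scaling rule quoted above: 1 gigabit/sec of I/O
    # throughput per teraflop of compute. Machine sizes are hypothetical.
    GBITS_PER_TERAFLOP = 1

    def required_io_gbits(teraflops: float) -> float:
        """I/O bandwidth (gigabits/sec) needed to keep the processors fed."""
        return teraflops * GBITS_PER_TERAFLOP

    for tf in (1, 10, 30):
        print(f"{tf:>2}-teraflop machine needs "
              f"{required_io_gbits(tf):.0f} gigabit/sec of I/O")

By this rule, a 30-teraflop machine would need 30 gigabit/sec of sustained throughput, a figure that grows in lockstep with every upgrade in compute.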
While working on the requirements document, Los Alamos officials began to weigh the value of putting out a request for proposals to help solve their problem. The lab's requirements were at times too far ahead of industry to generate interest in an RFP, so Los Alamos hosted a vendor conference to test the waters.
"We invited companies in [input/output] and storage in 1998," Grider said. About 40 companies were represented at the meeting. Of those, three expressed interest in Los Alamos' scalable file system storage vision. Lab officials wrote an RFP based on the requirements document.
Although the main objective was scalability, the new storage environment also offered an opportunity for data sharing. Because the scalable file system would be cross-platform rather than proprietary, the common file system could be shared across machines instead of being tied to an individual supercomputer.
The Los Alamos RFP set the stage for a project involving both scalability and consolidation — supercomputing style.
Next week: Los Alamos taps Panasas Inc. for storage deal.