National lab, IBM team up on supercomputer initiative
Aim is to crunch huge amounts of data to improve private sector competitiveness.
IBM and the Lawrence Livermore National Laboratory are working together on a project to leverage supercomputing to help industry better spot trends and more quickly develop new technologies, the pair announced Wednesday.
The new supercomputer, called Vulcan, will use some of the same ultrafast technology underlying Lawrence Livermore’s Sequoia, recently named the world’s fastest supercomputer. Sequoia monitors, evaluates and tests the nation’s nuclear stockpile.
Vulcan, which is due in 2012, will focus instead on crunching mostly unclassified data to speed the development of new technologies in applied energy, green energy, manufacturing, data management and other fields, IBM said.
Sen. Dianne Feinstein, D-Calif., announced the Livermore deal during an IBM event at the Capitol on Wednesday.
Feinstein, who leads the Senate Intelligence Committee, praised Sequoia for “maintaining the safety, security and reliability of nuclear weapons stockpiles without underground testing.”
“This machine will be an important tool to extend the life of aging weapons and anticipate future problems resulting from aging, leading to smaller nuclear weapons stockpiles,” she said.
The United States currently maintains a large stockpile of nuclear weapons so it can have spares in case some weapons don’t function, a Feinstein spokesman said. If the government had greater confidence that weapons in the stockpile were operating correctly, fewer spares would be needed, he said.
Vulcan could use data-crunching power similar to Sequoia’s to help U.S. businesses compete more effectively in emerging fields, Feinstein said.
The IBM event focused largely on how advances in supercomputing and big data analysis can benefit government and the private sector.
Big data typically refers to large amounts of unstructured information -- anything from scanned books to satellite feeds to social media posts to output from the Large Hadron Collider at CERN -- that was once too complex for existing computers to sift through and extract meaningful patterns from.
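To make that idea concrete, here is a minimal, hypothetical sketch in Python of the kind of pattern extraction the term implies: pulling a simple signal (hashtag frequencies) out of free-form text. The sample posts and the counting heuristic are invented for illustration and are not part of any IBM or Livermore system described in this article.

```python
# Illustrative only: a toy "pattern extraction" pass over unstructured text.
# The sample posts below are hypothetical and exist only to show the idea.
import re
from collections import Counter

posts = [
    "Livermore's new Vulcan machine targets applied energy research #HPC #energy",
    "Big data is the new oil, says IBM #bigdata #HPC",
    "Manufacturing simulations are getting a supercomputing boost #manufacturing",
]

def hashtag_counts(texts):
    """Count hashtag occurrences across a collection of free-form posts."""
    tags = (tag.lower() for text in texts for tag in re.findall(r"#(\w+)", text))
    return Counter(tags)

print(hashtag_counts(posts).most_common(2))
# [('hpc', 2), ('energy', 1)]
```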
The Large Hadron Collider, for instance, produces 40 terabytes of data per second. That’s nearly three times as much data as all the books in the Library of Congress contain.
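As a rough sanity check on that comparison, the arithmetic works out under the commonly cited estimate of roughly 15 terabytes for the digitized text of the Library of Congress’s print collection (an assumption; the article itself gives no figure):

```python
# Back-of-the-envelope check of the LHC vs. Library of Congress comparison.
# ASSUMPTION: ~15 TB is a commonly cited estimate for the digitized text of
# the Library of Congress's print collection; the article does not give a figure.
lhc_output_tb_per_second = 40
library_of_congress_tb = 15

ratio = lhc_output_tb_per_second / library_of_congress_tb
print(f"One second of LHC output is about {ratio:.1f}x the Library of Congress's books")
# One second of LHC output is about 2.7x the Library of Congress's books
```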
The White House invested $200 million in March in research and development initiatives related to the mining, processing, storage and use of big data.
The best things the government can do to support the development of big data technology are to set standards for how data should be organized and to invest in basic research in computer science and mathematics aimed at teasing patterns out of massive data troves, Steven Ashby, deputy director for science and technology at the Pacific Northwest National Laboratory, said during the IBM event.
The government also can invest in proofs of concept in particular fields such as financial analysis to show the value of big data analysis to the private sector, Ashby said.
David McQueeney, vice president for software in IBM’s research division, compared big data to a natural resource like coal or oil -- there’s a lot of it out there, but it’s difficult to extract and must be refined before it’s useful.
“It’s underexploited because we haven’t had computer power at the right price point to do the rather complicated extraction of unstructured data to provide real insights,” he said. “It’s been underappreciated because the size of the systems that can hold and manipulate the data haven’t been present.”