Major vendors are positioning Linux as an alternative for high-end systems
The open-source Linux operating system is just one of several choices for desktop computers, enterprise servers and other common implementations. Sometimes it is chosen, but often it is not.
But at the high end of the computational power range — in supercomputers built by national laboratories, NASA or the Defense Department from clusters of processors — Linux is rapidly gaining ground on Unix as the operating system of choice.
Linux is becoming increasingly adaptable to large computing systems, said Jonathan Eunice, president and principal analyst at Illuminata Inc. Because the high-performance computing market is somewhat specialized, the makers of proprietary operating systems are less aggressive in marketing to such users.
"There is minimal competition from Microsoft [Corp.] in this high-performance arena," he said. "There is tremendous competition if you're talking about the desktop or general server applications, but almost none in the high-performance space. It's mostly a wide-open field for Linux."
A new Linux kernel — the lines of code that make up the heart of an operating system — is likely to accelerate Linux's presence in the high-performance computing space, Eunice said. Version 2.6, already running in some test settings, improves memory management, input/output control and other elements compared to Version 2.4, he said.
"Linux is happening," said Brian Connors, vice president in charge of the use of Linux on IBM Corp.'s Power platform. "It's not something anyone can stop. No one owns it. Anything you do to it
you feed to the open-source [community] so anybody can use it."
Because Linux is open source and distributed cheaply, it's more cost-effective for organizations running clusters of processors, Eunice said. "If you're going to build [high-performance computing] clusters, licensing Windows gets very expensive quickly," he said, adding that Linux can run on hardware from different vendors, reducing the need to standardize on a single brand.
A few other operating systems, including Sun Microsystems Inc.'s Solaris brand of Unix and another open-source system called OpenBSD, are sometimes chosen for high-performance computing applications, but Linux has the edge, Eunice said.
"I'm sure you can find a cluster here or there that is run on OpenBSD," he said. "But it isn't a major force. Linux and Linux clusters are dominant, have many more installs and are much more a focus of user attention. This isn't because of any particular functional lack of OpenBSD. Linux just got there first, developed a much larger and richer ecosystem and grew into the leader."
There is a widely held perception that Linux, with the Version 2.6 kernel, can only scale to 16 processors, said Andy Fenselau, director of product marketing for high-performance computing servers at Silicon Graphics Inc. But the operating system is rapidly becoming more robust than conventional wisdom holds, he said. "Everyone 'knows' Linux can't scale," he said. "'It only scales out, it can't scale up.' That's the mythology."
Scaling out refers to the ability to harness multiple nodes on a single project. It is useful for processing chores that can be easily broken into smaller tasks that the nodes can work on independently. Scaling up refers to the number of processors that can work together in each node.
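The distinction can be sketched in code. The short Python fragment below is purely illustrative and assumes nothing about the systems described in this story: the independently processed chunks map naturally to scaling out across cluster nodes, while the pool of worker processes sharing one machine's memory stands in for scaling up within a node.

```python
# Illustrative sketch only: contrasts scale-out and scale-up styles of
# parallelism on a toy workload (summing squares over a range of numbers).
# None of the names here come from the systems discussed in the article.
from multiprocessing import Pool


def process_chunk(bounds):
    """Work on one independent slice of the problem."""
    start, end = bounds
    return sum(n * n for n in range(start, end))


def make_chunks(total, chunk_size):
    """Split the problem into independent pieces.

    Scale-out: each piece could be shipped to a separate cluster node
    (for example, by a batch scheduler) because no piece depends on
    another while it runs.
    """
    return [(i, min(i + chunk_size, total)) for i in range(0, total, chunk_size)]


if __name__ == "__main__":
    chunks = make_chunks(total=1_000_000, chunk_size=100_000)

    # Scale-up: several processors inside one node work through the
    # chunks in parallel, coordinating through that node's shared memory.
    with Pool(processes=4) as pool:
        partials = pool.map(process_chunk, chunks)

    print("total:", sum(partials))
```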
On the rise
At the Pacific Northwest National Laboratory in Richland, Wash., Linux has all but taken over, said Scott Studham, associate director for advanced computing there.
"When I got here three years ago, there were circa 1,000 processors here, of which four ran Linux," he said. "Now there are circa 2,000 processors, and maybe 64 of them don't run Linux."
Linux became the default, pushing out proprietary Unix systems as lab scientists built two supercomputers, he said.
Part of the reason for Linux's ascendancy was necessity, Studham said. The lab, in building the two supercomputers, was "pushing the bleeding edge, hard," and some of the necessary drivers were only available in Linux, or were easily ported to Linux from Unix.
As necessity pushed, improved software tools pulled, he said. A file system called Lustre, for example, which is open-source software developed by Cluster File Systems Inc., has recently become more scalable than it once was, he said. The file system governs the interactions between the operating system and storage media, such as hard drives, on which files are stored.
"The tools that exist around [Linux] are more mature," Studham said. "In my opinion, since we're pushing the bleeding edge of the operating system, they're all going to have problems."
Scientists are counting on the new kernel to help Linux realize its potential, said Bob Ciotti, leader of the terascale systems group at NASA's Ames Research Center at Moffett Field, Calif. Although scientists at the center are running Linux on a 512-processor SGI Altix cluster, it works only fairly well, he said.
"There are some significant limitations in the version of the kernel that we're running," he said. The lab is running the
2.6 kernel in a test system and expects to offer needed control over the buffer cache, memory allocation, thread management and job management.
For example, scientists at the Ames center want to create logical partitions on the computer so that they can confine a particular job to specific processors and allocate memory to them while being able to terminate the process if it threatens to take up memory needed by other simultaneous tasks. With the 2.4 kernel, they can't do that.
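As a rough, hypothetical sketch of that kind of confinement, the fragment below uses the generic processor-affinity and memory-limit interfaces that Linux exposes through Python's standard library; it illustrates the concept only and is not the kernel-level partitioning the Ames scientists are waiting for.

```python
# Illustrative sketch only (Linux-specific): pin the current job to a few
# processors and cap how much memory it may claim, roughly the kind of
# confinement described above. These are generic kernel interfaces reached
# through Python's standard library, not the 2.6-era partitioning at Ames.
import os
import resource

# Confine this process (and any children it spawns) to CPUs 0-3.
# Assumes the machine has at least four processors.
os.sched_setaffinity(0, {0, 1, 2, 3})

# Cap the process's address space at 2 GB; allocations beyond that fail
# instead of crowding out memory needed by other jobs on the machine.
two_gb = 2 * 1024 ** 3
resource.setrlimit(resource.RLIMIT_AS, (two_gb, two_gb))

print("running on CPUs:", sorted(os.sched_getaffinity(0)))
```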
In about six months, when Ames scientists are able to implement the 2.6 kernel, "I think we'll have a better chance of fixing things," he said.
The computer is used primarily for part of NASA's Return to Flight program, in which scientists model various aspects of shuttle flight in the wake of the 2003 Columbia disaster.
It also runs the modeling duties of the Estimating the Circulation and Climate of the Ocean (ECCO) research program, a joint project of NASA's Jet Propulsion Laboratory, the Massachusetts Institute of Technology and the Scripps Institution of Oceanography. The computer is called Kalpana, in honor of Columbia astronaut Kalpana Chawla, who also worked at Ames.
Ames officials previously used a 1,000-processor system running SGI's Irix brand of Unix.
"Linux hasn't really caught up with Irix yet," Studham said. "In 2.6, we'll have the features in Linux that we need to really bring it up to speed."
Vendors lend support
Vendors recognize the value of Linux for the high-performance computing market and are developing systems to run and improve it.
It "was a market that grew primarily because of the embrace of Unix as an operating system," said Dave Turek, vice president of deep computing at IBM. "But in the last four or five years, what we've seen is a radical growth curve for Linux."
Linux is easily portable from one hardware vendor to another, unlike proprietary operating systems, which adds to its appeal, he said. Because it is open source, users can modify it to fit their individual needs, another plus.
Linux is on the cusp of becoming the market's dominant platform, he said.
IBM officials recognized this potential and have committed resources to help develop Linux and cultivate the high-performance computing market, Turek said. Until recently, much of the development was concentrated on desktop PCs and servers, he said.
"Our view was that Linux had all the ability and opportunity to propagate, but if it remained low-end, would not resonate as it was capable of," he said.
IBM has not abandoned its work on AIX, its brand of Unix, he added. The choice of operating systems is "really a parochial decision that each customer has to make, depending on a variety of factors," Turek said. "Two customers could look at the same set of facts and come up with two different conclusions. Our customer set is not a monolithic body of like-minded individuals."
IBM officials recently enhanced the Power microprocessor platform to run Linux and AIX in response to the growing interest, Connors said. The platform can run both systems simultaneously, he added.
The growth of Linux is well established in the government's high-performance computing market, IBM's Connors said, though it is now spreading to commercial settings. Computer companies would be smart to embrace the operating system, he said.
SGI's line of Altix servers and supercomputers is made for Linux. Company officials have added extensions to enable the systems to scale to more processors.
"I think the industry in the commercial world is profoundly aware that Linux has affected the [Windows] NT market share in the low end," said Jeffrey Greenwald, SGI's senior director of product marketing and product management. "First they denied it, then they fought it and today they kind of acknowledge that they have to co-exist with it. That same process happened on the high end, except instead of it being Microsoft, it was the proprietary Unixes."
SGI officials chose to support Linux in 1999, he added.
"Linux is a mind-shift across the desktop, across embedded [systems], across midrange and across high-end" high-performance computing, Greenwald said. "It's something the world is embracing."
Reservations
Even strong Linux advocates admit the operating system isn't perfect. Studham said he is not certain the 2.6 kernel will fulfill its potential.
"Most [operating systems] are far more mature than Linux," he said. "For example, building large file systems — greater than 2 terabyte file systems — that's been common on every [version of] Unix for the last five years."
Creating such large file systems is a problem Linux has not yet fully solved, even in the new kernel, he said.
If Linux does become the operating system of choice for high-performance computers, it may not hold that position for long, Studham added.
Computers will have hundreds of thousands of processors by the end of the decade, he said. "That's something that Linux isn't prepared to do. Linux doesn't scale up as well as most standard Unixes. If I were a bank, I'd probably be using Unix. But I'm a national lab pushing the bleeding edge. There isn't anything else for supercomputing right now, at the extreme edge," Studham said.