Punching up performance

Price-friendly server clusters offer a big boost but also management challenges

Clusters that link inexpensive, commodity servers are finally coming into their own.

United through software to solve a particular problem, these systems are taking hold throughout the government. At one end of the spectrum, modest clusters of two or four servers, or nodes, are used to promote business continuity. At the other end, government research labs are harnessing clusters of 32 nodes or more for scientific and technical applications. Both ends of the market continue to push the performance envelope.

But clusters aren't just about power. This computing model holds a major cost advantage over supercomputers based on reduced instruction-set computing processors and the Unix operating system. Clusters of Intel Corp.-based systems running Microsoft Corp. Windows or Linux may be had for a fraction of the cost of a symmetric multiprocessing machine, industry executives contend.

The list of the world's fastest computers provides evidence of how pervasive cluster computing has become. Top500.org's June ranking of the fastest supercomputers shows that the number of systems using Intel processors has grown from 56 to 119 in the past six months.

The combination of performance, scalability and lower cost would appear to make clustering an easy decision. Yet clustering has its challenges. One issue is managing clusters as they grow. Organizations are cobbling together a mix of homegrown, open-source and commercial products to get the job done.

Another consideration is how to interconnect clusters. Options include Fast Ethernet, Gigabit Ethernet, proprietary interconnect products and InfiniBand.

Despite those questions, clustering technology appears ripe for exploration.

"The market for Intel Corp.-based clusters is definitely heating up and getting a lot of attention from [hardware] vendors and independent software vendors that are developing applications for these environments," said Charles King, a research director at the Sageza Group Inc., a market research firm. "It's a good time to be looking seriously at clustered solutions."

Small clusters of two to four nodes have typically been used to ensure availability. This is particularly true for Windows clusters. Since Microsoft introduced Wolfpack clustering for Windows NT 4.0 in 1997, business continuity has been the main concern for customers.

"In the Windows market, that's how most people think about clustering today," said Jason Buffington, director of business continuity at NSI Software.

Customers use clusters to "provide high availability" for Microsoft Exchange messaging servers and other application servers, added Quazi Zaman, technology manager for platforms at Microsoft Federal.

But small clusters are beginning to take on other roles. For example, the government's shift toward Web-based application development has sparked demand for network load balancing, Zaman said. Load balancing lets organizations boost the performance of Web-based applications. As IP traffic increases, client requests are distributed across multiple servers within a cluster, according to Microsoft officials.

Microsoft offers network load balancing in its Windows 2000 Advanced Server and Datacenter Server operating systems, as well as in Windows Server 2003.
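In rough terms, the round-robin idea behind network load balancing looks like the sketch below: incoming client requests are handed to the cluster's nodes in turn. This is a simplified illustration rather than Microsoft's implementation, and the node names and requests are hypothetical.

```python
from itertools import cycle

# Minimal round-robin load-balancing sketch: each incoming request is
# routed to the next node in rotation. Node names and requests are
# hypothetical placeholders.
class RoundRobinBalancer:
    def __init__(self, nodes):
        self._nodes = cycle(nodes)      # endless rotation over the node list

    def route(self, request):
        node = next(self._nodes)        # pick the next node in turn
        print(f"Routing {request!r} to {node}")
        return node

balancer = RoundRobinBalancer(["web01", "web02", "web03", "web04"])
for req in ["GET /cases", "GET /search", "POST /filings"]:
    balancer.route(req)
```

Production load balancers go further, weighing each server's current load and steering traffic away from failed nodes.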

Clusters also are deployed to boost database performance and availability. For Microsoft's SQL Server database software, Zaman said he's seen requirements in the government market for two to four nodes or more.

But Microsoft isn't the only player in the commodity cluster space. Some organizations are opting for Linux.

That's the case for the 19th Judicial District Court in Baton Rouge, La. The court has deployed two Linux clusters: a four-node application server cluster and a four-node database server cluster. The clusters replaced the court's aging Wang VS system.

"We wanted to have maximum throughput and scalability," said Freddie Manint, criminal justice information system director with the court. He said the court's load-balanced clusters address both availability and performance.

The court uses Dell Computer Corp. PowerEdge 2650 machines for the application server cluster and PowerEdge 6650 servers for the database cluster. Cluster-resident applications include case management and criminal history tracking. Both clusters run Red Hat Inc.'s Linux.

The court's system uses Oracle Corp. 9i as the database. "We see a lot of traction for two- or four-way servers configured with Oracle [Real Application Clusters]," said Reza Rooholamini, director of enterprise solutions at Dell.

The court's clusters have a 10 gigabit/sec backbone featuring 3Com Corp. Gigabit Ethernet switches and XRN Interconnect Kits, which link the routing switches.

Manint said one attraction of cluster computing is the "bang for the buck" Intel processors afford. The court continues to tweak its clusters and expects to add more nodes in the next couple of years.

When it comes to clustering for sheer performance, federal laboratories and research organizations provide the most notable examples in government. Here, clusters based on Intel or Advanced Micro Devices Inc. systems, typically running Linux, take on a number of computationally intensive challenges.

At the Energy Department's Lawrence Berkeley National Laboratory in California, the humble fruit fly is the target of clustering's computing power. In 2000, the lab's Berkeley Drosophila Genome Project deployed a Linux cluster from Linux Networx Inc. The initial deployment of 20 nodes has since grown to 72.

The lab had the option of adding to its collection of Sun Microsystems Inc. Enterprise servers, but decided the Linux cluster would be more cost-effective. Erwin Frise, systems manager and bioinformatics scientist with the genome project, said the clustering approach was the only way the group could have conducted its research within budgetary limits.

Frise said the lab spent roughly the same amount on its cluster that it would have spent on adding more Sun boxes — and got more computing power to boot.

Eric Pitcher, vice president of product marketing at Linux Networx, said clusters offer a "price/performance improvement over traditional systems of at least a factor of 10."

Cluster Management

The task of running a cluster is fairly straightforward for smaller installations; the operating system does a great deal of the work. In Windows, Microsoft Cluster Server offers a basic set of services. But third-party offerings can provide additional capabilities.

For example, Windows clusters typically involve an active/passive node arrangement, in which nodes share storage and one node takes over if another fails.

NSI's software adds another layer of protection. The company's GeoCluster uses replication to split the storage so that each node has its own copy of the data.
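Stripped down, the failover logic behind an active/passive pair works like the sketch below: the passive node watches for heartbeats from the active node and promotes itself when they stop. The timeout value and function names are illustrative, not drawn from Microsoft Cluster Server or GeoCluster.

```python
# Simplified active/passive failover check: the passive node promotes
# itself if the active node's heartbeat has been silent too long.
# The timeout and timestamps are illustrative only.
HEARTBEAT_TIMEOUT = 10.0  # seconds of silence before failover

def should_take_over(last_heartbeat, now, is_active):
    """Return True if this passive node should take over the workload."""
    if is_active:
        return False                     # the active node keeps serving
    return (now - last_heartbeat) > HEARTBEAT_TIMEOUT

# Example: the partner node was last heard from 12 seconds ago.
if should_take_over(last_heartbeat=100.0, now=112.0, is_active=False):
    print("Active node silent too long -- promoting passive node")
```

Production cluster services layer quorum checks and storage arbitration on top of this basic pattern to avoid both nodes claiming the workload at once.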

But the management challenge grows with the size of the cluster. Even routine management tasks become complicated as clusters get larger, said Dave Turek, vice president of deep computing at IBM Corp. And developing software to keep tabs on a constantly changing environment is no walk in the park either.

"Making software scale up and down is extraordinarily hard," Turek said.

Nevertheless, a number of vendors are having a go. IBM's Cluster Systems Management tool is one example. It is included in IBM's Linux and Unix cluster products.

In general, cluster management software aims to facilitate the creation of clusters, monitor the health of nodes and schedule jobs to be distributed among nodes.

Linux Networx's ClusterWorX software configures clusters and monitors nodes. The software keeps tabs on CPU and memory usage while tracking disk input/output and network bandwidth. As for job scheduling, the company plans to integrate open-source software into a unified product, according to Pitcher. For now, Linux Networx products work with Platform Computing Inc.'s Platform LSF and Altair Engineering Inc.'s PBS Pro job-scheduling products.
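The health-monitoring side of such tools boils down to logic like the sketch below: gather per-node CPU and memory readings and flag any node that crosses a threshold. The sample readings and limits are made up, and products such as ClusterWorX also track disk I/O and network bandwidth, as noted above.

```python
# Rough sketch of cluster health checking: flag nodes whose CPU or
# memory use exceeds a threshold. Readings and limits are hypothetical.
CPU_LIMIT = 90.0   # percent
MEM_LIMIT = 85.0   # percent

def unhealthy_nodes(readings):
    """Return the names of nodes whose CPU or memory use is too high."""
    return [name for name, (cpu, mem) in readings.items()
            if cpu > CPU_LIMIT or mem > MEM_LIMIT]

readings = {
    "node01": (42.0, 55.0),
    "node02": (97.5, 60.0),   # CPU pegged
    "node03": (30.0, 91.0),   # memory nearly full
}
print(unhealthy_nodes(readings))   # prints ['node02', 'node03']
```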

Open Source Cluster Application Resources (OSCAR) is an open-source take on cluster management. The OSCAR umbrella includes the Portable Batch System, from which PBS Pro is derived, among other tools for building and managing clusters. OSCAR contributors include IBM, Intel, Dell and Oak Ridge National Laboratory.

OSCAR's goal is to simplify cluster management. Still, information technology managers who seek the benefits of clustering should be mindful of its potential complexity.

Moore is a freelance writer based in Syracuse, N.Y.