Cloud computing: mainframe redux
Large IT vendors are using interest in cloud computing to angle for a much bigger share of your IT spending.
Thanks in part to a directive from Federal CIO Vivek Kundra, federal agencies are starting to buy various technology services from cloud providers.
Meanwhile, more than a few agencies also want to make their data centers more cloudlike, which is to say more efficient in hardware and energy use and more flexible in responding to users’ needs. Kundra’s governmentwide data center consolidation initiative — his other high-profile plan for IT reform — is also fueling those plans. As a result, the predominant vision for the consolidated government data center of the future is basically a recipe for a private cloud.
Government technology executives might find themselves working with some new terminology and concepts as they chart this cloud-heavy future. But the road map vendors will offer them might have a familiar ring, especially for those who have been around IT for a while.
The new product offerings go by names such as converged infrastructure, unified computing or the whimsical-sounding cloud in a box. The idea is that a vendor or small team of vendors combines server, storage, network, virtualization and management products into one pretested, seamless package or stack. Drop it in your data center, load up your software, and away you go. No muss, no fuss.
The Vblock infrastructure package from EMC, VMware — owned primarily by EMC — and Cisco Systems is one example that has been getting a lot of attention. Similar options are available or forthcoming from most large vendors, including IBM, Hewlett-Packard, Oracle, Dell, Hitachi and Fujitsu.
The integrated stacks are reminiscent of the mainframe model, in which one vendor supplied the chips, the disks and everything in between. For customers, the old model meant spending a huge portion of their hardware acquisition and support budget with one company.
Fast forward to today. Although it’s unlikely any single company will reign as IBM did with mainframes, you could well find yourself once again tying a big chunk of your IT infrastructure spending to one or two companies.
That’s not necessarily bad.
“The cost savings from what might be superior performance, ease of operation and the ability to move in-house staff from low-level break-fix to more strategic tasks will over time probably far outweigh what [organizations] give up or what they fear about vendor lock-in,” said Lauren Jones, a principal analyst at Input.
Jones has written that those converged packages could help agencies accelerate efforts to consolidate data centers and build private clouds. She and others also say the approach will lead to a handful of mega-vendors that own not only the cloud infrastructure market but also the IT market as a whole.
Assuming they are right, what should CIOs look for in such an important supplier? The components are integrated through the system orchestration software, so the quality of that software will be crucial, said Chris Evans, director of storage consulting firm Brookend.
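To see why that layer matters, consider a minimal, purely illustrative Python sketch (every name below is invented for the example, not drawn from any vendor's product). The orchestration software is the piece that sequences provisioning across the storage, network and compute layers and cleans up when a step fails.

```python
# Illustrative only: a toy orchestrator showing why the quality of this
# layer matters. Every function name here is hypothetical, not a vendor API.

def allocate_volume(req):
    return f"vol-for-{req}"      # stand-in for the storage step

def configure_vlan(req):
    return f"vlan-for-{req}"     # stand-in for the network step

def deploy_vm(req):
    return f"vm-for-{req}"       # stand-in for the compute step

def release(name, resource):
    print(f"rolled back {name}: {resource}")

def provision_workload(request):
    """Sequence the layers of the stack; roll back everything on failure."""
    completed = []
    try:
        for name, step in [("storage", allocate_volume),
                           ("network", configure_vlan),
                           ("compute", deploy_vm)]:
            completed.append((name, step(request)))
        return completed
    except Exception:
        # The difference between weak and strong orchestration software:
        # a weak one strands half-built workloads; a strong one cleans up.
        for name, resource in reversed(completed):
            release(name, resource)
        raise

print(provision_workload("payroll-app"))
```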
Portability, or the ability to move workloads from one infrastructure stack to another, will also be an important feature. But don’t count on it being a priority for suppliers. “Many vendors will fight against offering portability or [will] offer features that allow a customer to easily move into but not out of a particular vendor’s cloud offering,” Evans said.
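One defense, sketched below as a hypothetical illustration (the adapter interface and both provider classes are invented, not any real library's API), is to keep deployment code behind a thin abstraction layer so that moving to another stack means writing one new adapter rather than rewriting every workload.

```python
# Hypothetical sketch of insulating workloads from any one vendor's stack.
# The adapter interface and both provider classes are invented for this
# illustration; they do not depict a real product's API.
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """The only code that should know one provider's particulars."""
    @abstractmethod
    def launch(self, image: str, size: str) -> str: ...

class VendorACloud(CloudAdapter):
    def launch(self, image: str, size: str) -> str:
        return f"vendor-a instance: {image} ({size})"

class VendorBCloud(CloudAdapter):
    def launch(self, image: str, size: str) -> str:
        return f"vendor-b instance: {image} ({size})"

def deploy_app(cloud: CloudAdapter) -> str:
    # Deployment code never names a vendor; moving to another stack
    # means writing a new adapter, not rewriting the workload.
    return cloud.launch(image="agency-web-app", size="medium")

print(deploy_app(VendorACloud()))
print(deploy_app(VendorBCloud()))
```

This is just the classic adapter pattern applied to infrastructure: the lock-in Evans describes gets confined to one small class instead of spread through every deployment script.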
For those who want more flexibility and are willing to shoulder the integration work themselves, another option is taking shape. NASA built its private cloud-computing platform, called Nebula, using various open-source programs and commodity hardware. Now the agency is contributing some of its engineering work to an open-source project called OpenStack. Some organizations are already using it to assemble what one writer has called a “poor man’s cloud in a box.”
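As a rough idea of what working against such a platform looks like, here is a minimal sketch using openstacksdk, the project's official Python client. The cloud entry ("agency-private") and the image, flavor and network names are assumptions about a hypothetical in-house deployment, not project defaults.

```python
# Minimal sketch: launching a server on an OpenStack private cloud with the
# openstacksdk Python client. The cloud entry ("agency-private") and the
# image, flavor and network names are assumptions, not project defaults.
import openstack

# Reads connection details for a hypothetical in-house endpoint from
# the local clouds.yaml configuration file.
conn = openstack.connect(cloud="agency-private")

image = conn.compute.find_image("ubuntu-server")     # assumed image name
flavor = conn.compute.find_flavor("m1.small")        # a common flavor name
network = conn.network.find_network("internal-net")  # assumed network name

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)  # block until the VM is active
print(server.status)
```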