10 technologies to watch in 2010
Government policies on information sharing, mobility, more efficient networking and, of course, security help define the technologies that will be hot this year.
Predicting the future is a mug's game, but when it comes to government information technology, a lot of what happens in one year is an extension of ongoing policies and the work done in the previous year.
For instance, the White House's open-government initiative fostered a lot of Web 2.0 applications in 2009 because of agency efforts that are likely to expand and improve. Virtualization is, in many ways, just getting started in government. And the continued focus on information sharing and mobile applications will create a need for higher throughput and the means of making data available anytime, anywhere.
So what will be hot in government information technology in the new year?
Faster pipes are one thing, as networks migrate to 10 Gigabit Ethernet and high-speed science networks explore the 100G boundary. Web 2.0 applications figure to become more prevalent and, perhaps, more dazzling. With IPv6 backbones in place and IPv4 addresses running out, we're likely to see more networks employ the new protocol. Virtualization could be extended to the desktop. The new, faster generation of USB connections promises great increases in transfer speeds, and 4G wireless networks could expand the availability of data to mobile devices. And, of course, we'll see a sharpened focus on security, from efforts to continue locking down the Domain Name System to more thorough inspection of incoming data packets and ways to apply security to all those virtual machines.
Several of the technologies we've listed aren't new but figure to play a big part in government IT this year. Others are emerging and are likely to have a big impact this year and beyond.
So without further ado, here are 10 technologies to keep an eye on in 2010.
100G networking on the horizon
Research and other high-speed networks await standards
The Institute of Electrical and Electronics Engineers is expected to release standards for 100 Gigabit Ethernet networking in June, and they should spur deployment of high-speed networks outside pilot programs and testbeds.
"This 100G has a lot of folks drooling about what they can do with it," said Steve Cotter, head of the Energy Department's Energy Sciences Network at the Lawrence Berkeley National Laboratory.
The folks at DOE who operate and use ESnet are among those drooling over the possibilities. ESnet has received $62 million in stimulus funding to build 100G links between supercomputing centers at the Argonne and Oak Ridge national laboratories, and an additional $5 million has been allotted for near-term high-speed research projects, such as a program to create 100G network interface cards.
The highest rate for standardized Ethernet is 10 gigabits/sec — which is fast — but networks already are bundling 10G circuits to achieve needed bandwidth in backbones. Native 100 gigabits/sec circuits offer the potential for significantly higher data rates and will be more economical and easier to manage than bundled circuits. The technology can also contribute to green computing by boosting bandwidth without increasing power and cooling needs for equipment, and the engineering rules for deploying it should be straightforward, said Glenn Wellbrock, director of backbone network design at Verizon Communications.
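A rough back-of-the-envelope calculation shows why a native 100G circuit matters for single large transfers. Link aggregation typically distributes traffic per flow, so one flow tops out at the speed of a single member link. In the sketch below, the dataset size and efficiency figure are illustrative assumptions, not ESnet or Verizon numbers.

```python
# Back-of-the-envelope comparison: one large data transfer over a bundle of
# 10G links versus a single native 100G circuit. Link aggregation usually
# hashes traffic per flow, so a single flow is pinned to the speed of one
# member link; a native 100G circuit has no such ceiling.
# The dataset size and efficiency figure below are illustrative assumptions.

DATASET_TB = 50      # hypothetical science dataset, in terabytes
EFFICIENCY = 0.9     # assumed protocol and framing efficiency

def transfer_hours(link_gbps, dataset_tb=DATASET_TB, efficiency=EFFICIENCY):
    """Hours to move dataset_tb terabytes over a single flow at link_gbps."""
    bits = dataset_tb * 8e12                      # terabytes -> bits
    effective_bps = link_gbps * 1e9 * efficiency
    return bits / effective_bps / 3600

# A single flow on a 10x10G bundle still moves at one member link's rate.
print(f"Single flow, 10G member link: {transfer_hours(10):.1f} hours")
print(f"Single flow, native 100G:     {transfer_hours(100):.1f} hours")
```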
Government research and education networks such as ESnet are among the drivers for high-speed networking, and requests for proposals routinely ask about future support for 100G. DOE is working closely with vendors that have 100G on their product road maps. "Without the standards being ratified, there is still a lot that is up in the air," Cotter said. But "most of them feel that enough progress has been made that they can go forward with it."
Verizon recently deployed a 100G link in Europe between Paris and Frankfurt using Nortel Networks equipment, in what the companies call the first commercial optical system capable of delivering that speed across 1,000 kilometers. Wellbrock said the company plans deployments in this country when its North American vendors, Alcatel-Lucent, Nokia Siemens Networks and Ciena, develop products.
Another early adopter of the technology will be DOE.
"As soon as we can get our hands on it, we will be doing our best to break it," Cotter said. "If we're not pushing the vendors, we're not doing our jobs."
Because a single pipe is easier to manage than a bundle of pipes, 100G networks should be simpler to operate than aggregated 10G links, said Craig Hill, distinguished systems engineer for the federal space at Cisco Systems. That will hold true at least until organizations start bundling 100G pipes. Then it will be time to move to Terabit Ethernet.
"They are already talking about that in the 2015 time frame," he said. — William Jackson
10G Ethernet for the masses
Bandwidth demands to spur the move to faster connections
While high-powered research networks look to jump onto the 100 gigabits/sec track, the next leap for most networks is to 10 gigabits/sec via the 10G Ethernet standard.
The 10G Ethernet standard has been around for almost a decade, having been approved in 2002. "It's almost past ready," said Jeff Lapak, manager of the 10 Gigabit Ethernet Consortium. And it offers data transfer rates about 10 times those of the widely deployed Gigabit Ethernet.
"It's a pretty mature technology," Lapak said, "but it's really just coming into the adoption phase."
What changed to make 2010 the right time to implement 10G Ethernet?
Experts cite two primary factors: the growing need for bandwidth in enterprises and broader availability and falling costs for 10G technology.
"As the technology has matured, people have found ways to reduce the cost point," said Brad Booth, chairman of the board of the Ethernet Alliance. "At the same time, we continue to see bandwidth explode and grow in data centers. There's a desire on the part of companies to be able to do more with less, to be able to put multiple technologies on a single transport infrastructure, in this case Ethernet."
Although 10G Ethernet interface cards have been available for several years, Lapak said annual sales of 10G Ethernet ports have cracked 1 million only in the past two years.
"We're just getting to the point in the curve where you expect to see much greater adoption," he said.
Lapak and Booth said they expect a number of factors to drive greater adoption of 10G Ethernet.
For starters, 10G Ethernet's higher bandwidth makes server virtualization more efficient. "If I'm going to be doing virtualization on three or four servers, I'm really going to need a high bandwidth link," Booth said. "And instead of using multiple gigabit lines and doing aggregation, I'm better off spending the money on 10G."
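For administrators who want to confirm that their virtualization hosts have actually negotiated 10G links after an upgrade, a quick check is possible on Linux, which reports each interface's negotiated speed through sysfs. The sketch below is a minimal example under that assumption; interface names vary from host to host.

```python
# Minimal sketch: report which interfaces on a Linux host have negotiated
# 10 Gigabit Ethernet. Linux exposes the negotiated link speed (in Mb/s)
# in sysfs; down or virtual interfaces may not report a speed at all.

import os

SYSFS_NET = "/sys/class/net"

def link_speeds():
    """Return {interface: speed_in_mbps} for interfaces that report a speed."""
    speeds = {}
    for iface in os.listdir(SYSFS_NET):
        path = os.path.join(SYSFS_NET, iface, "speed")
        try:
            with open(path) as f:
                speeds[iface] = int(f.read().strip())
        except (OSError, ValueError):
            continue  # interface is down, virtual, or does not report a speed
    return speeds

if __name__ == "__main__":
    for iface, mbps in sorted(link_speeds().items()):
        label = "10GbE" if mbps >= 10000 else f"{mbps} Mb/s"
        print(f"{iface}: {label}")
```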
Counterintuitively, another factor that should drive 10G adoption is the emergence of a 40G Ethernet standard, Lapak said. "The 40G Ethernet should be coming out probably at the tail end" of 2010, he said. "Historically, what we've seen is that when the next speed standard comes out, that is when the speed below really starts selling because there is an ability to aggregate those high-speed links to an even higher-speed link, so it becomes really functional."
Lapak also cited the adoption of Fibre Channel over Ethernet (FCoE) as a factor in greater 10G Ethernet adoption. Many enterprises use Ethernet for TCP/IP networks and have turned to Fibre Channel for data storage on storage-area networks (SANs). FCoE allows Fibre Channel to run as another network protocol on Ethernet, something that becomes feasible with the greater bandwidth of 10G Ethernet.
If you're looking to deploy 10G Ethernet in your data center, be aware that your primary cost will be the 10G Ethernet switches and interface cards — in addition to the staff time required for the upgrade — because 10G can run on cabling already installed in most data centers. Until recently, 10G required the use of fiber-optic cables, but it is now possible to run 10G Ethernet over twisted-pair copper cabling. — Patrick Marshall
Web 2.0 aims for ‘awesome’
What was pioneering for agencies in 2009 could be average in 2010
The two-way conversations that Web 2.0 enables are the reason the technology has gained popularity so quickly, experts say. Instead of the Environmental Protection Agency publishing a press release about proposed rule changes — as in the static, Web 1.0 days — the information is shared via an interactive blog. The blog lets the public comment on the proposals and others' comments. It also lets EPA officials respond directly to the public.
In 2009, agency leaders began to see that the technology is easy to use, effective and inexpensive.
During the height of the H1N1 flu epidemic, for example, the Centers for Disease Control and Prevention created Web 2.0 widgets to share information about the virus with the public. Widgets are chunks of computer code that can be added to any Web site. The widget remains connected to the agency that provides it, so CDC is able to update the information in real time.
For CDC officials, the technology gets important information to Web sites where people want it, rather than forcing people to come to CDC's site.
Rank-and-file government workers will almost certainly need to get more familiar with Web 2.0 technology in 2010, said Mark Drapeau, an adjunct professor at the School of Media and Public Affairs at the George Washington University.
"I think the average employee probably is still getting used to hearing about blogs, wikis, etc., never mind using them as part of their workflow," Drapeau said. "In 2010, government employees will probably incorporate Web 2.0 into their work, which will take all sorts of different forms."
Federal agencies were encouraged to use the technology in 2009, but the pressure will be turned up in 2010, said Dan Mintz, chief technology officer of the Civil and Health Services Group at Computer Sciences Corp.
"2010 will be a year where every government agency will be expected to have a robust 2.0 presence just to get to average," he said. "The culture changes needed to allow the exposure of increasing amounts of information, even in intermediate form, will take energy to overcome. But the result is extremely powerful, allowing external interested parties to create mashups and produce much more interesting and often more user-friendly versions of the data, which the government might never have achieved."
To be successful in using Web 2.0 to share information and be transparent, agencies need to make sure the content is compelling, Drapeau said. Agency leaders should provide guidance while still allowing flexibility.
"Encourage people to be personable, authentic and even awesome," Drapeau said. "When was the last time you looked at a government Web 2.0 video or something and said, 'Oh my God, that was awesome?' People come up with awesome stuff every day as just...people. But a lot of government stuff ranges from bland or boring at the low end to good documentary material. It is a good start, but where's the totally awesome stuff?" – Doug Beizer
4G wireless technologies take root
Carriers bet on LTE but don’t count WiMax out
Wireless giants Verizon, AT&T and Sprint continue to battle over whose third-generation (3G) network is broader, faster and more reliable. But another rivalry over which technologies will dominate the next generation — 4G — of wireless networks reached a turning point in December, and it will likely have a domino effect on federal information technology planning in 2010 and beyond.
The rivalry is between two standards. One is Long-Term Evolution (LTE), backed by many of the world's telecommunications players; the other is WiMax, an open-standards approach based on IEEE standards 802.16 and 802.16m.
Both wireless approaches transport voice, video and data digitally via IP rather than the circuit-based switching used in 3G and older networks. They reduce delays, or latency, across hops among multiple networks and deliver a tenfold or greater increase in transmission speeds compared with 3G networks. That translates to download rates of 20 to 80 megabits/sec for LTE networks, depending on bandwidth and network capabilities, said Johan Wibergh, executive vice president of Ericsson.
WiMax appeared to have the early lead. It was the first all-IP technology suited to carrying data. It worked flexibly with a variety of spectrum, bandwidth and antenna options. And it got a big boost when Sprint and Clearwire Communications began deploying WiMax 4G mobile broadband in the United States in 2009. Sprint's 4G service reaches 30 million people in 27 markets, and the company plans to be in "80 markets, covering 100 million by the end of 2010, including Washington and New York," said Bill White, Sprint's vice president of federal programs.
But LTE, which took longer to develop, also works well across multiple bandwidths, including the 700 MHz public-safety radio band, which transmits signals through buildings more effectively than the higher bands WiMax uses.
LTE got a further boost when telecom industry heavyweights Alcatel-Lucent, AT&T, Ericsson, Nokia Siemens Networks, Nokia, Samsung Electronics, Telefonica, TeliaSonera, Verizon, Vodafone and others announced in November that they had committed to invest in LTE, making it the de facto standard among 80 percent of the world's carriers. TeliaSonera upped the ante Dec. 14 when it unveiled the world's first commercial 4G LTE network in Stockholm and Oslo.
"LTE has hundreds of companies that are behind it," said Doug Smith, chief executive officer of Ericsson Federal. "The size of that ecosystem is what determines the amount of investment."
LTE would move the wireless business beyond 4 billion cell phone users to a world of "50 billion devices — with a lot of machine-to-machine devices, designed around IPv6," Smith said.
Verizon Wireless' Bernie McMonagle, associate director of data solutions, agreed.
"It's not just about moving information faster but moving tiny bits of information with a lot lower latency for machine-to-machine apps, sensors in roadways — every person, place or thing having an IP address," he said. That will "allow near real-time video or applications [to reach] a remote device you couldn't do before," he said. — Wyatt Kash
USB 3.0 is nearly at hand
New standard will boost download speeds tenfold
If you think USB 2.0 (Hi-Speed) was a major improvement over USB 1.0, the emergence of USB 3.0 (SuperSpeed) will knock your socks off. The new USB standard, which was announced in November 2008, promises 10 times the data transfer speed of USB 2.0. One major result is greater convenience for consumers.
"Say you want to download a 25G HD movie from your home server to your notebook to take on the plane," said Jeff Ravencraft, a technology strategist at Intel and chairman of the USB Implementers Forum. "If you do that with Hi-Speed USB today, it would take 15 minutes. If you have SuperSpeed USB end-to-end, you can do that same transaction in about a minute."
Although that means less waiting time, it also means a more trusting relationship between users and their devices. "We have data that tells us that after a minute to 1 1/2 minutes of the consumer waiting to do a transaction that they get extremely frustrated," Ravencraft said. "And if it goes much longer than 1 1/2 minutes, they think the application might be broken or that something is wrong, and they terminate because it has just taken too long."
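The arithmetic behind Ravencraft's 25 GB example is straightforward. In the sketch below, the sustained transfer rates are illustrative assumptions (real-world throughput depends on the drives and protocol overhead), but they show how a copy that takes roughly a quarter of an hour over Hi-Speed USB drops to about a minute over SuperSpeed.

```python
# Rough arithmetic behind the "15 minutes vs. about a minute" comparison.
# USB 2.0 signals at 480 Mb/s and USB 3.0 at 5 Gb/s; the sustained rates
# below (in megabytes per second) are illustrative assumptions, since real
# throughput depends on the devices and protocol overhead.

MOVIE_GB = 25  # the 25 GB HD movie in Ravencraft's example

def transfer_minutes(effective_mb_per_sec, size_gb=MOVIE_GB):
    """Minutes to move size_gb gigabytes at an assumed sustained rate."""
    return (size_gb * 1000) / effective_mb_per_sec / 60

print(f"Hi-Speed USB 2.0   (~30 MB/s sustained):  {transfer_minutes(30):.0f} minutes")
print(f"SuperSpeed USB 3.0 (~400 MB/s sustained): {transfer_minutes(400):.1f} minutes")
```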
The first USB 3.0 devices are just now hitting the market. For example, Buffalo Technology has just launched its DriveStation USB 3.0, an external hard drive solution that includes a USB 3.0 PCI add-in card and a USB 3.0 cable.
Ravencraft said at least one flash drive manufacturer also is shipping a USB 3.0 device. And more USB 3.0 devices made their appearance at this month's Consumer Electronics Show in Las Vegas.
USB 3.0 achieves its higher performance with an additional physical bus that runs in parallel to the existing USB 2.0 bus. However, that means it requires a new cable containing eight wires, four more than USB 2.0 cables have.
USB 3.0 ports and cables are backward-compatible to the extent that they support devices that use earlier versions of USB, but USB 3.0 performance will only be possible when two USB 3.0 devices are connected to each other using a USB 3.0 cable.
USB 3.0 also promises better power efficiency. In part, that's because 3.0 transmits data so much faster. But USB 3.0 designers also changed to an interrupt architecture from USB 2.0's polling architecture. That means power isn't wasted polling devices that don't have data to transfer. And instead of a USB 3.0 controller sending data to all connected devices — as USB 2.0 controllers do — data is only sent to devices that request the data.
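A toy model illustrates the difference. In the sketch below, the device counts and activity rates are made-up numbers, not USB-IF figures, but they show how a host that polls every device each service interval generates far more bus traffic than one that hears only from devices with data to move.

```python
# Toy model of why an interrupt-style architecture saves power compared
# with USB 2.0-style polling: with polling, the host addresses every
# attached device each cycle whether or not it has data; with notification,
# only devices that actually have data generate traffic. The device count,
# cycle count and activity rate are made-up illustrative numbers.

import random

random.seed(1)
DEVICES = 6       # assumed number of attached devices
CYCLES = 1000     # assumed number of bus service intervals
ACTIVITY = 0.02   # assumed chance a device has data in a given cycle

# Pre-compute which devices have data in which cycles.
has_data = [[random.random() < ACTIVITY for _ in range(DEVICES)]
            for _ in range(CYCLES)]

polling_transactions = DEVICES * CYCLES            # host asks everyone, always
interrupt_transactions = sum(sum(cycle) for cycle in has_data)

print(f"Polling model:   {polling_transactions} host transactions")
print(f"Interrupt model: {interrupt_transactions} host transactions")
```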
With the growing use of laptop computers and other battery-powered devices, Ravencraft said, "we knew going into SuperSpeed USB that we had to optimize it for power efficiency.” — Patrick Marshall
IPv6 moves closer to center stage
As IPv4 addresses run out, it’s time to get experience in Version 6 networking
IPv6 is not new. Version 6 of the Internet Protocol has been standardized for about 10 years, and most networking equipment has supported IPv6 for five years. But with the pool of IPv4 addresses nearing depletion, future growth of the Internet will come increasingly from the new protocol.
The last IPv4 address will not be doled out for another year or more. But because of the shrinking pool, IPv6 already is the only option available for large address assignments, said John Curran, president of the American Registry for Internet Numbers, one of five regional registries responsible for handing out Internet addresses. With the Internet experiencing an annual growth rate of 30 to 40 percent, much of that growth soon will consist of IPv6 users.
"It will very quickly become a major source of traffic, and then the dominant one," Curran said.
Will you need to move to IPv6 to remain online? No, Curran said. IPv4 works fine and will be around for a long time. It might never be turned off. "You can stay on IPv4, and it will work fine. Kind of."
The problems likely will show up in the gateways that will be required on IPv4-only networks for protocol translation and tunneling. The gateways might not be a problem for static content, but they are liable to become bottlenecks as growing numbers of surfers use IPv6 to access dynamic real-time and streaming content.
"It might work; you just don't know what the performance is," Curran said.
That means we could essentially end up with two Internets, dividing the online world into the haves and the have-nots.
The equipment to handle IPv6 on government networks is in place, and major carriers are preparing for the new protocol. "They are getting the addresses, they are working in the labs and they are doing the internal testing necessary," Curran said.
Most of the activity so far is coming in closed systems with internal applications using large numbers of devices that can take advantage of the large IPv6 address space. IPv6 traffic might not show up on the public Internet in large volumes today or even tomorrow. But it will come soon, and networks should get ready for it now.
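A simple first step is to test whether a public host resolves and connects over IPv6 at all. The sketch below uses Python's standard socket library; the host name is a placeholder, and a failure can mean either that no AAAA record is published or that the local network has no IPv6 path.

```python
# Minimal sketch for building hands-on IPv6 experience: check whether a
# public host advertises an IPv6 (AAAA) address and whether this machine
# can open a TCP connection to it over IPv6. The host name is only an
# example; substitute one of your own public systems.

import socket

def ipv6_check(host, port=80, timeout=5):
    """Return the host's first reachable IPv6 address, or None."""
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
    except socket.gaierror:
        return None  # no AAAA record published, or name resolution failed
    for family, socktype, proto, _, sockaddr in infos:
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(sockaddr)
                return sockaddr[0]
        except OSError:
            continue  # no local IPv6 connectivity, or host unreachable on v6
    return None

if __name__ == "__main__":
    addr = ipv6_check("www.example.gov")  # placeholder host name
    print(f"IPv6 reachable at {addr}" if addr else "No usable IPv6 path found")
```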
"We need to be enabling IPv6 on some of our public systems to build our experience and see where the problems are, so if something doesn't work, it can be fixed," Curran said. — William Jackson
Virtualization moves to the desktop
Hypervisors could spur adoption of virtual PCs
With desktop virtualization, virtual desktops are stored on a remote central server, where all of the programs, applications, processes and data reside, so users can access their information from almost any secure client device.
“Client virtualization is an area that a lot of federal agencies are moving into," said Anil Karmel, a solutions architect at the Energy Department's Los Alamos National Laboratory. Two years ago, the laboratory created a virtual environment in which officials decommissioned 100 physical servers and deployed 250 virtual machines on 13 physical host servers.
Karmel said he anticipates a significant uptake in client virtualization in the public sector, as agencies launch pilot projects to test the waters. "Many agencies that I speak to are definitely walking the pilot road in 2010," he said.
One thing that could spur adoption of client virtualization is the arrival of bare-metal client hypervisors. These hypervisors, from companies such as Citrix and VMware, are expected to hit the market this year.
A hypervisor acts as a control layer between the hardware and operating system and allows multiple operating systems to run on the same physical hardware. Most client hypervisors reside on top of existing operating systems and, as a result, are affected by the management and security tasks associated with those operating systems.
A bare-metal client hypervisor runs directly on the hardware, so only the processor and RAM handle the virtualization suite, said Rue Moody, strategic products technical director at Citrix Systems.
"It is a lot easier to lock down and secure, and you don't have an OS to manage and patch and for someone to hack into," he said.
Citrix is expected to start offering a bare-metal client hypervisor with its XenClient product in January or February. XenClient will work with Intel virtualization and Intel vPro technology. VMware has also partnered with Intel on a bare-metal client hypervisor, known as the Client Virtualization Platform, slated for release sometime during the first half of 2010.
"We are very excited to see what vendors are working toward with bare-metal client virtualization," Karmel said. Client hypervisors should spur agencies to investigate desktop virtualization on a greater scale, he said.
Another movement that will spur deployment of client and desktop virtualization is the emergence of zero-client offerings, Karmel said. With zero-client computing, a client device has no operating system, CPU or memory. Instead, it connects only a monitor and peripherals, such as a keyboard, mouse or USB device, back to a virtual desktop infrastructure in the data center.
With the release of View 4, VMware uses the PC-over-IP protocol to deliver virtual desktops to endpoint devices. Many thin clients require a patching methodology for their embedded operating systems. Zero-client computing effectively reduces the number of patchable endpoints distributed throughout an enterprise, moving such work to virtual desktops centrally located in a data center.
Security-conscious government agencies will certainly appreciate the delivery of a desktop to a stateless device, Karmel said.
Government agencies should be able to deliver a manageable and secure platform by coupling client or desktop virtualization with security enhancements gained by implementing a virtual firewall along with intrusion detection systems and intrusion prevention systems that give users greater visibility into virtual environments, Karmel said. — Rutrell Yasin
DNSSEC gets automated
Agencies lead adoption of Domain Name System security extensions
Because the Domain Name System underlies so much of the Internet, attacks against DNS threaten the stability of the global online environment. The DNS Security Extensions (DNSSEC) will not solve all security problems, but they add an important level of assurance. And progress will be made this year in implementing this complex technology.
“The process of DNSSEC adoption has started," said Sandy Wilbourn, vice president of engineering at Nominum. "It will by no means culminate in 2010. However, Nominum believes a meaningful number of domain owners will take steps to migrate to DNSSEC through 2010."
DNS translates easy-to-remember domain names, such as those in Web addresses, into numerical IP addresses. It was not designed to provide security, so this basic service is vulnerable to spoofing and manipulation that could allow hackers to redirect traffic to fraudulent sites. DNSSEC counters this by digitally signing DNS records so responses can be authenticated using public signature keys. DNSSEC protocols have been around for about 15 years, but implementation has been slow, in part because DNS has worked so well that nobody wants to fix what does not appear to be broken and in part because implementing digital signatures can be complex and time-consuming.
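Administrators who want a quick read on whether a zone is serving signed data can ask for it directly. The sketch below uses the third-party dnspython library to set the DNSSEC-OK flag on a query and look for RRSIG records in the answer; it does not perform full chain-of-trust validation, and the domains and resolver address are only examples.

```python
# Minimal sketch, using the third-party dnspython library, that checks
# whether a zone returns DNSSEC signatures (RRSIG records) with its answers.
# This only confirms that signed data is being served; validating the full
# chain of trust is a separate step. The domains and resolver IP are examples.

import dns.message
import dns.query
import dns.rdatatype

def has_dnssec_signatures(domain, resolver="8.8.8.8"):
    """Query for A records with the DNSSEC-OK bit set and look for RRSIGs."""
    query = dns.message.make_query(domain, dns.rdatatype.A, want_dnssec=True)
    # A production check would retry over TCP if the UDP response is truncated.
    response = dns.query.udp(query, resolver, timeout=5)
    return any(rrset.rdtype == dns.rdatatype.RRSIG for rrset in response.answer)

if __name__ == "__main__":
    for zone in ("nist.gov", "example.com"):  # example zones only
        status = "signed" if has_dnssec_signatures(zone) else "no RRSIGs returned"
        print(f"{zone}: {status}")
```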
Leading by example, the U.S. government has helped to spur adoption. Following the 2008 disclosure of a serious vulnerability in the DNS protocols, the Office of Management and Budget mandated that the dot-gov top-level domain be signed in 2009 and that agencies sign their second-level domains by the end of that year.
"Within dot-gov, it certainly is happening, and dot-mil is starting to adopt," said Branko Miskov, director of product management at BlueCat Networks. "I think that is where the majority of the implementation is going to occur."
Other top-level domains also have been signed, including dot-org, the largest so far to be signed, and the dot-US country code. The root zone is expected to be signed this year, and the Big Casino of top-level domains, dot-com, is expected to be signed in 2011. Domain owners can then sign their zones, completing chains of trust that will allow DNS requests and responses to be easily authenticated.
Vendors including Nominum, BlueCat and F5 Networks, among many others, are introducing tools that automate much of the process of digitally signing and managing keys, simplifying adoption of DNSSEC. — William Jackson
Deep packet inspection adds a layer of defense
DPI’s examination of incoming packets can help counter complex threats
Networked information technology environments are becoming increasingly complex, extending beyond the boundaries of many organizations' networks. At the same time, cyber threats are becoming more sophisticated and harder to detect.
Those factors are forcing agencies and enterprises to turn toward advanced security technologies, such as Deep Packet Inspection (DPI), which gives security administrators greater visibility into the packets of formatted data that their networks are transmitting.
Shallow packet inspection, also called Stateful Packet Inspection, checks only the header portion of a packet. Deep Packet Inspection also examines the packet's payload, searching for protocol noncompliance, viruses, spam, intrusions or other predefined criteria.
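The distinction is easy to see in code. The toy sketch below stands in for a firewall decision: the shallow check looks only at header fields, while the deep check also scans the payload for a signature. The ports, signature and packet are invented for the example; real DPI engines rely on far more sophisticated pattern matching and protocol decoding.

```python
# Toy illustration of shallow versus deep inspection. A stateful firewall
# decides from header fields alone; a DPI engine also scans the payload for
# known signatures. The blocked port, signature and sample packet below are
# made up for this example.

BLOCKED_PORTS = {23}                           # e.g., drop telnet by policy
PAYLOAD_SIGNATURES = [b"malware-test-signature"]  # invented byte pattern

def shallow_inspect(header: dict) -> bool:
    """Header-only check: allow or drop based on the destination port."""
    return header["dst_port"] not in BLOCKED_PORTS

def deep_inspect(header: dict, payload: bytes) -> bool:
    """Header check plus a scan of the payload for known bad signatures."""
    if not shallow_inspect(header):
        return False
    return not any(sig in payload for sig in PAYLOAD_SIGNATURES)

packet_header = {"src_port": 49152, "dst_port": 80}
packet_payload = b"GET /report.pdf HTTP/1.1\r\nHost: intranet.example\r\n\r\n"

print("shallow:", "allow" if shallow_inspect(packet_header) else "drop")
print("deep:   ", "allow" if deep_inspect(packet_header, packet_payload) else "drop")
```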
Why are agencies and those who provide services to the government sector ratcheting up deployment of technologies such as DPI?
"The types of threats that our networks — and ultimately our systems and data and applications are under attack from — are changing," said Curt Aubley, chief technology officer of CTO Operations and Next Generation Solutions at Lockheed Martin Information Systems and Global Services.
DPI offers new capabilities to help organizations identify and stop those threats, he said.
Not every environment would require DPI's advanced capabilities, Aubley noted. Its use would depend on the agency and its mission.
To successfully deploy DPI technology, agency IT managers must first consider which assets they need to protect. Then they must determine what kinds of threats could affect those assets and whether they are capable of deploying DPI.
"If they have trouble covering basic compliance [issues], going into Deep Packet Inspection might be a challenge," Aubley said. "So it depends on what their mission is and the maturity of their enterprise."
Agency managers also would need to decide whether to do only in-line DPI, using algorithms to identify trends, or to look for long-term attacks that can unfold over years. In the latter case, DPI would spool data into a data warehouse, where network security administrators could use analytic tools to look for advanced and complex threats, Aubley said.
DPI can be used for many different tasks, from hunting for viruses and worms to helping administrators decide what traffic is mission-critical and needs top priority on the network, said Tim Le Master, director of systems engineering at Juniper Networks. Administrators can make policy decisions about offering lower or higher qualities of service. For instance, they might allow peer-to-peer traffic but set a threshold so that it is the first traffic dropped during network congestion.
"The core point is that you have to identify the traffic and determine what it is first," Le Master said. "Then you have the ability to make decisions on it."Administrators can handle the data in various ways. For instance, they might decide to store or record it using a data recording device or duplicate it to other ports on a device for further inspection, Le Master said.
There is interest among agencies involved with the Trusted Internet Connection (TIC) initiative in replicating data so a device, such as the Homeland Security Department's Einstein system, can inspect it further, Le Master said.
Mandated by the Office of Management and Budget in 2007, TIC is an initiative to reduce the number of Internet points of presence that federal agencies use. It includes a program for improving the government's incident response capability through a centralized gateway monitored at selected TIC Access Providers.
Einstein is an intrusion detection system that monitors the network gateways of government agencies. It was developed by the U.S. Computer Emergency Readiness Team (US-CERT), the operational arm of DHS' National Cyber Security Division.
However, attacks will likely continue to move up the network stack, affecting applications this year. Moreover, experts expect to see more customized malware and sleeper software deployed to compromise systems and a rise in attacks on mobile platforms. DPI can help counter these threats, experts say. — Rutrell Yasin
Next step for virtualization: Security
Inter-virtual machine security systems and kernel-based firewalls to come into play
Virtualization is real.
Server virtualization, in which multiple instances of operating systems run concurrently on a single hardware system, is in full swing at many federal and state agencies. Some agencies are moving beyond servers to apply virtualization to applications, desktop PCs and network infrastructure.
As a result, organizations need better visibility into what is going on in this virtual world to protect systems from attack and data from compromise. Firewalls and intrusion prevention systems inspect network traffic and the exchange of information between servers in a physical network. But those security tools are blind to traffic between virtual machines.
Virtual machines can be separated in virtual local-area networks (VLANs), but that requires routing inter-virtual machine communications through the firewall, and security could be hampered by VLAN complexity. Or agency managers could deploy virtual machines with software firewalls but might encounter performance degradation and management issues associated with maintaining hundreds, if not thousands, of virtual machines.
Inter-VM, or kernel-based, firewalls are one approach to correcting this problem, said Tim Le Master, director of systems engineering at Juniper Networks. "I think there will be a lot of interest in it because of all of the interest in securing the data center in general and in cloud computing," Le Master said.
Juniper has partnered with a company to offer an inter-virtual machine firewall capability, he said, and is virtualizing security solutions such as firewalls and intrusion prevention services so IT managers will require fewer of them to secure a data center.
Officials at the Energy Department's Los Alamos National Laboratory have used virtualization technology to address cooling, limited floor space and power consumption as they sought to ramp up capacity in data centers on the sprawling, 36-square-mile campus.
"In 2010, there will be a push toward implementing virtual firewalls and [intrusion detection systems] and [intrusion prevention systems] solutions into virtual environments," said Anil Karmel, a solutions architect at Los Alamos.
For instance, virtualization software provider VMware offers vShield Zones within the company's vSphere 4 product, which provides some measure of control from a virtual firewall perspective. That will most likely be enhanced this year with third-party products using VMware's VMsafe to offer more visibility into the virtual infrastructure, Karmel noted.
Additionally, VMsafe will allow for tighter integration with existing intrusion detection and prevention systems to give IT administrators a single pane of glass to monitor what is going on in their respective physical and virtual environments, he said.
Other vendors to pay attention to in 2010 include Altor Networks, Catbird and Citrix. — Rutrell Yasin