I/O Virtualization (IOV) Revisited

Is I/O Virtualization (IOV) a server topic, a network topic, or a storage topic (see previous post)?

Like server virtualization, IOV involves servers, storage, network, operating system, and other infrastructure resource management areas and disciplines. The business and technology value proposition or benefits of converged I/O networks and I/O virtualization are similar to those for server and storage virtualization.

Additional benefits of IOV include:

    • Doing more with existing resources (people and technology) while reducing costs
    • Single (or pair for high availability) interconnect for networking and storage I/O
    • Reduction of power, cooling, floor space, and other green efficiency benefits
    • Simplified cabling and reduced complexity for server network and storage interconnects
    • Boosting server performance by making the most of limited I/O or mezzanine slots
    • Reducing I/O and data center bottlenecks
    • Rapid re-deployment to meet changing workload and I/O profiles of virtual servers
    • Scaling I/O capacity to meet high-performance and clustered application needs
    • Leveraging common cabling infrastructure and physical networking facilities

Before going further, let's take a step back for a few moments.

To say that I/O and networking demands and requirements are increasing is an understatement. The amount of data being generated, copied, and retained for longer periods of time is elevating the importance of data storage and infrastructure resource management (IRM). Networking and input/output (I/O) connectivity technologies (Figure 1) tie together facilities, servers, storage, tools for measurement and management, and best practices on a local and wide area basis to enable an environmentally and economically friendly data center.

TIERED ACCESS FOR SERVERS AND STORAGE
There is an old saying that the best I/O, whether local or remote, is an I/O that does not have to occur. I/O is an essential activity for computers of all shapes, sizes, and focuses: reading and writing data in and out of memory (including external storage) and communicating with other computers and networking devices. This includes communicating on a local and wide area basis for access to or over the Internet, cloud, XaaS, or managed service providers, as shown in Figure 1.

Figure 1: The Big Picture: Data Center I/O and Networking ((C) 2009 The Green and Virtual Data Center, CRC)

The challenge of I/O is that some form of connectivity (logical and physical), along with associated software, is required, and that time is lost waiting for reads and writes to occur. I/O operations that are closest to the CPU or main processor should be the fastest and occur most frequently for access to main memory using internal local CPU-to-memory interconnects. In other words, fast servers or processors need fast I/O, both in terms of low latency for I/O operations and in terms of bandwidth capabilities.

Figure 2: Tiered I/O and Networking Access ((C) 2009 The Green and Virtual Data Center, CRC)

Moving out and away from the main processor, I/O remains fairly fast but slows with distance while becoming more flexible and cost-effective. An example is the PCIe bus and I/O interconnect shown in Figure 2, which is slower than processor-to-memory interconnects but is still able to support attachment of various device adapters with very good performance in a cost-effective manner.

Farther from the main CPU or processor, various networking and I/O adapters can attach to PCIe, PCIx, or PCI interconnects for backward compatibility to support various distances, speeds, types of devices, and cost factors.
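To make the tiering concrete, here is a minimal Python sketch using assumed, order-of-magnitude latencies (not measurements from any particular system) that illustrates why I/O closest to the processor is preferred and why each step outward costs more time.

```python
# Rough, assumed order-of-magnitude access latencies for the tiers discussed above;
# illustrative only, not measured values for any specific server or device.
ACCESS_LATENCY_NS = {
    "CPU register / L1 cache":            1,
    "Main memory (DRAM)":                 100,
    "PCIe-attached adapter (round trip)": 1_000,
    "Local solid state disk":             100_000,
    "Local spinning disk (SAS/FC)":       5_000_000,
    "Remote site over a WAN":             50_000_000,
}

def slowdown_vs_memory(tier: str) -> float:
    """How many times slower a given tier is than a main memory access."""
    return ACCESS_LATENCY_NS[tier] / ACCESS_LATENCY_NS["Main memory (DRAM)"]

for tier, ns in ACCESS_LATENCY_NS.items():
    print(f"{tier:38s} ~{ns:>12,} ns  ({slowdown_vs_memory(tier):>10,.0f}x memory)")
```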

In general, the faster a processor or server is, the more prone to a performance impact it will be when it has to wait for slower I/O operations.

Consequently, faster servers need better-performing I/O connectivity and networks. Better performing means lower latency, more IOPS, and improved bandwidth to meet application profiles and types of operations.
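As a back-of-the-envelope illustration of the point above, consider a transaction whose time is split between compute and I/O wait (the split below is assumed purely for illustration): doubling processor speed does little if most of the time is spent waiting on I/O.

```python
# Minimal sketch: why faster servers need better-performing I/O.
# The compute/I/O-wait split below is assumed purely for illustration.

def overall_speedup(compute_ms: float, io_wait_ms: float, cpu_speedup: float) -> float:
    """Amdahl-style speedup when only the compute portion gets faster."""
    before = compute_ms + io_wait_ms
    after = compute_ms / cpu_speedup + io_wait_ms
    return before / after

# A transaction spending 2 ms computing and 8 ms waiting on storage I/O:
print(overall_speedup(2.0, 8.0, cpu_speedup=2.0))  # ~1.11x from a 2x faster CPU
# Cutting I/O latency in half as well restores most of the expected gain:
print((2.0 + 8.0) / (2.0 / 2.0 + 8.0 / 2.0))       # 2.0x
```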

Peripheral Component Interconnect (PCI)
Having established that computers need to perform some form of I/O to various devices, at the heart of many I/O and networking connectivity solutions is the Peripheral Component Interconnect (PCI) interface. PCI is an industry standard that specifies how chipsets connect CPUs and memory with the outside world of I/O and networking peripherals.

Figure 3 shows an example of multiple servers or blades, each with dedicated Fibre Channel (FC) and Ethernet adapters (there could be two or more of each for redundancy). Simply put, the more servers and devices to attach, the more adapters, cabling, and complexity, particularly for blade servers and dense rack-mount systems.
Figure 3: Dedicated PCI adapters for I/O and networking devices ((C) 2009 The Green and Virtual Data Center, CRC)

Figure 4 shows an example of a PCI implementation, including various components such as bridges, adapter slots, and adapter types. PCIe leverages multiple serial, unidirectional, point-to-point links, known as lanes, in contrast to traditional PCI, which used a parallel bus design.

Figure 4: PCI IOV Single Root Configuration Example ((C) 2009 The Green and Virtual Data Center, CRC)

In traditional PCI, bus width varied from 32 to 64 bits; in PCIe, the number of lanes combined with the PCIe version and signaling rate determines performance. PCIe interfaces can have 1, 2, 4, 8, 16, or 32 lanes for data movement, depending on card or adapter format and form factor. For example, PCI and PCIx performance can be up to 528 MB per second with a 64-bit, 66-MHz signaling rate, while PCIe is capable of over 4 GB per second (e.g., 32 Gbit per second) in each direction using 16 lanes for high-end servers.
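The lane arithmetic can be sketched as follows; the per-generation signaling rates and encodings are the commonly published values, and the helper function is an illustrative assumption that ignores protocol and packet overhead.

```python
# Back-of-the-envelope PCIe bandwidth per direction, based on the lane math above.
# Values are the commonly published per-lane signaling rates and encodings.
PCIE_GEN = {
    # generation: (gigatransfers/second per lane, encoding efficiency)
    "1.x": (2.5, 8 / 10),     # 8b/10b encoding
    "2.0": (5.0, 8 / 10),     # 8b/10b encoding
    "3.0": (8.0, 128 / 130),  # 128b/130b encoding
}

def pcie_gbytes_per_sec(gen: str, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s, one direction, ignoring protocol overhead."""
    gtps, efficiency = PCIE_GEN[gen]
    return gtps * efficiency * lanes / 8  # bits to bytes

print(pcie_gbytes_per_sec("1.x", 16))  # ~4.0 GB/s, the x16 figure cited above
print(pcie_gbytes_per_sec("2.0", 8))   # ~4.0 GB/s with half the lanes
```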

The importance of PCIe and its predecessors is the shift away from multiple vendors' different proprietary interconnects for attaching peripherals to servers. For the most part, vendors have shifted to supporting PCIe or earlier generations of PCI in some form, ranging from native internal interconnects on laptops and workstations to I/O, networking, and peripheral slots on larger servers.

The most current version of PCI, as defined by the PCI Special Interest Group (PCI SIG), is PCI Express (PCIe). Backward compatibility exists by bridging previous generations, including PCIx and PCI, off a native PCIe bus or, in the past, by bridging a PCIe bus to a native PCIx implementation. Beyond speed and bus width differences across the various generations and implementations, PCI adapters are also available in several form factors and applications.

Traditional PCI was generally limited to a main processor or was internal to a single computer, but current generations of PCI Express (PCIe) include support for PCI SIG I/O virtualization (IOV), enabling the PCI bus to be extended to distances of a few feet. Compared to local area networking, storage interconnects, and other I/O connectivity technologies, a few feet is a very short distance, but compared to the previous limit of a few inches, extended PCIe provides the ability for improved sharing of I/O and networking interconnects.

I/O VIRTUALIZATION (IOV)
On a traditional physical server, the operating system sees one or more instances of Fibre Channel and Ethernet adapters even if only a single physical adapter, such as an InfiniBand HCA, is installed in a PCI or PCIe slot. In the case of a virtualized server, for example Microsoft Hyper-V or VMware ESX/vSphere, the hypervisor is able to see and share a single physical adapter, or multiple adapters for redundancy and performance, with guest operating systems. The guest systems see what appears to be a standard SAS, FC, or Ethernet adapter or NIC using standard plug-and-play drivers.

Virtual HBAs or virtual network interface cards (NICs) and switches are, as their names imply, virtual representations of a physical HBA or NIC, similar to how a virtual machine emulates a physical machine on a virtual server. With a virtual HBA or NIC, physical adapter resources are carved up and allocated much like virtual machines, but instead of hosting a guest operating system such as Windows, UNIX, or Linux, what is presented is a SAS or FC HBA, an FCoE converged network adapter (CNA), or an Ethernet NIC.

In addition to the virtual or software-based NICs, adapters, and switches found in server virtualization implementations, virtual LANs (VLANs), virtual SANs (VSANs), and virtual private networks (VPNs) are tools for providing abstraction and isolation or segmentation of physical resources. Using emulation and abstraction capabilities, various segments or subnetworks can be physically connected yet logically isolated for management, performance, and security purposes. Some form of routing or gateway functionality enables the various network segments or virtual networks to communicate with each other when appropriate security requirements are met.

PCI-SIG IOV
PCI SIG IOV consists of a PCIe bridge attached to a PCI root complex along with an attachment to a separate PCI enclosure (Figure 5). Other components and facilities include address translation services (ATS), single-root IOV (SR IOV), and multi-root IOV (MR IOV). ATS enables performance to be optimized between an I/O device and a server's I/O memory management. Single-root IOV enables multiple guest operating systems to access a single I/O device simultaneously, without having to rely on a hypervisor for a virtual HBA or NIC.
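For readers who want to see what SR IOV looks like in practice, here is a hedged sketch of how it is typically surfaced on a Linux host via sysfs; the PCI address is a placeholder, and the attribute names assume a reasonably recent kernel with an SR IOV capable adapter and driver.

```python
# Hedged sketch: enabling SR IOV virtual functions (VFs) on a Linux host via sysfs.
# Assumes a recent kernel, an SR IOV capable adapter/driver, and root privileges;
# the PCI address used in the example call is a placeholder.
from pathlib import Path

def enable_virtual_functions(pci_addr: str, num_vfs: int) -> None:
    """Ask the physical function (PF) driver to create num_vfs virtual functions."""
    dev = Path("/sys/bus/pci/devices") / pci_addr
    total = int((dev / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"{pci_addr} supports at most {total} VFs")
    (dev / "sriov_numvfs").write_text(str(num_vfs))

# Example (placeholder address): carve four VFs out of one physical adapter; each VF
# can then be handed directly to a guest OS, bypassing the hypervisor's virtual
# HBA/NIC path for that traffic.
# enable_virtual_functions("0000:03:00.0", 4)
```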

Figure 5: PCI SIG IOV ((C) 2009 The Green and Virtual Data Center, CRC)

The benefit is that physical adapter cards, located in a physically separate enclosure, can be shared within a single physical server without having to incur any potential I/O overhead via a virtualization software infrastructure. MR IOV is the next step, enabling a PCIe or SR IOV device to be accessed through a shared PCIe fabric across different physically separated servers and PCIe adapter enclosures. The benefit is increased sharing of physical adapters across multiple servers and operating systems, not to mention simplified cabling, reduced complexity, and improved resource utilization.

Figure 6: PCI SIG MR IOV ((C) 2009 The Green and Virtual Data Center, CRC)

Figure 6 shows an example of a PCIe switched environment, where two physically separate servers or blade servers attach to an external PCIe enclosure or card cage for attachment to PCIe, PCIx, or PCI devices. Instead of the adapter cards physically plugging into each server, a high-performance, short-distance cable connects each server's PCI root complex via a PCIe bridge port to a PCIe bridge port in the enclosure device.

In Figure 6, either SR IOV or MR IOV can take place, depending on the specific PCIe firmware, server hardware, operating system, devices, and associated drivers and management software. In an SR IOV example, each server has access to some number of dedicated adapters in the external card cage, for example InfiniBand, Fibre Channel, Ethernet, or Fibre Channel over Ethernet (FCoE) and converged network adapters (CNAs), also known as HBAs. SR IOV implementations do not allow different physical servers to share adapter cards. MR IOV builds on SR IOV by enabling multiple physical servers to access and share PCI devices such as HBAs and NICs safely and transparently.

The primary benefit of PCI IOV is to improve utilization of PCI devices, including adapters or mezzanine cards, as well as to enable performance and availability for slot-constrained and physical footprint or form factor-challenged servers. Caveats of PCI IOV are distance limitations and the need for hardware, firmware, operating system, and management software support to enable safe and transparent sharing of PCI devices. Examples of PCIe IOV vendors include Aprius, NextIO, and Virtensys, among others.

InfiniBand IOV
InfiniBand-based IOV solutions are an alternative to Ethernet-based solutions. Essentially, InfiniBand approaches are similar, if not identical, to converged Ethernet approaches including FCoE, with the difference being InfiniBand as the network transport. InfiniBand HCAs with special firmware are installed into servers, which then see a Fibre Channel HBA and an Ethernet NIC from a single physical adapter. The InfiniBand HCA also attaches to a switch or director that in turn attaches to Fibre Channel SAN or Ethernet LAN networks.

The value of InfiniBand converged networks is that they exist today, and they can be used for consolidation as well as to boost performance and availability. InfiniBand IOV also provides an alternative for those who choose not to deploy Ethernet.

From a power, cooling, floor-space or footprint standpoint, converged networks can be used for consolidation to reduce the total number of adapters and the associated power and cooling. In addition to removing unneeded adapters without loss of functionality, converged networks also free up or allow a reduction in the amount of cabling, which can improve airflow for cooling, resulting in additional energy efficiency. An example of a vendor using InfiniBand as a platform for I/O virtualization is Xsigo.

General takeaway points include the following:

  • Minimize the impact of I/O delays to applications, servers, storage, and networks
  • Do more with what you have, including improving utilization and performance
  • Consider latency, effective bandwidth, and availability in addition to cost
  • Apply the appropriate type and tiered I/O and networking to the task at hand
  • I/O operations and connectivity are being virtualized to simplify management
  • Convergence of networking transports and protocols continues to evolve
  • PCIe IOV is complementary to converged networking, including FCoE

Moving forward, a revolutionary new technology may emerge that finally eliminates the need for I/O operations. However, until that time, or at least for the foreseeable future, several things can be done to minimize the impact of I/O for local and remote networking as well as to simplify connectivity.

PCIe Fundamentals Server Storage I/O Network Essentials

Learn more about IOV, converged networks, LAN, SAN, MAN and WAN related topics in Chapter 9 (Networking with your servers and storage) of The Green and Virtual Data Center (CRC) as well as in Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier).

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


I/O, I/O, It's Off to Virtual Work and VMworld I Go (or went)

Ok, so I should have used that intro last week before heading off to VMworld in San Francisco instead of after the fact.

Think of it as a high-latency title or intro, kind of like attaching a fast SSD to a slow, high-latency storage controller, or a fast server attached to a slow network, or a fast network with slow storage and servers; it is what it is.

I/O virtualization (IOV) and virtual I/O (VIO), along with I/O and networking convergence, have been getting more and more attention lately, particularly on the convergence front. In fact, one might conclude that it is suddenly trendy to be on the IOV, VIO, and convergence bandwagon given how the cloud, SOA, and SaaS hype is being challenged, perhaps even turning to storm clouds?

Let's get back on track, or in the case of the past week, get back in the car, get back in the plane, get back into the virtual office, and cover what it all has to do with virtual I/O and VMworld.

The convergence game has at its center Brocade, emanating from the data center and storage-centric I/O corner, challenging Cisco, hailing from the MAN, WAN, and LAN general networking corner.

Granted, both vendors have dabbled with success in each other's corners or areas of focus in the past. For example, Brocade has, via acquisitions (McData, Nishan, CNT, and INRANGE, among others), a diverse and capable stable of local and long-distance SAN connectivity and channel extension for mainframe and open systems, supporting data replication, remote tape, and wide area clustering. Not to mention deep bench experience with the technologies, protocols, and partner solutions for LAN, MAN (xWDM), WAN (iFCP, FCIP, etc.), and even FAN (file area networking, aka NAS), along with iSCSI, in addition to Fibre Channel and FICON solutions.

Disclosure: Here's another plug ;) Learn more about SANs, LANs, MANs, WANs, POTs, PANs and related technologies and techniques in my book "Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures" (Elsevier).

Cisco, not to be outdone, has a background in the LAN, MAN, and WAN space directly or, similar to Brocade, via partnerships, with product depth and experience. In fact, while many of my former INRANGE and CNT associates ended up at Brocade via McData or indirectly, some ended up at Cisco. While Cisco is known for general networking, over the past several years it has gone from zero to being successful in the Fibre Channel and, yes, even the FICON mainframe space, while, like Brocade (HBAs), dabbling in other areas such as servers and storage, not to mention consumer products.

What does this have to do with IOV and VIO, let alone VMworld and my virtual office? Hang on, hold that thought for a moment; let's get the convergence aspect out of the way first.

On the I/O and networking convergence (e.g., Fibre Channel over Ethernet, FCoE) scene, both Brocade (Converged Enhanced Ethernet, CEE) and Cisco (Data Center Ethernet, DCE), along with their partners, are rallying around each other's camps. This is similar to how a pair of prize fighters maneuvers in advance of a match, including plenty of trash talk, hype, and all that goes with it. Brocade and Cisco throwing mud balls (or spam) at each other, or having someone else do it, is nothing new; however, in the past each has had its own core areas of focus, coming from different backgrounds and in some cases selling to different people in an IT environment or in VAR and partner organizations. Brocade and Cisco are not alone, nor is the I/O networking convergence game the only one in play, as it is being complemented by the IOV and VIO technologies addressing different value propositions in IT data centers.

Now on to the IOV and VIO aspect along with VMworld.

For those of you who attended VMworld and managed to get out of the session rooms, the media/analyst briefing or reeducation rooms, or the partner and advisory board meetings to walk the expo hall show floor, there was the usual sea of vendors and technology. There were servers (physical and virtual), storage (physical and virtual), terminals, displays and other hardware, I/O and networking, data protection, security, cloud and managed services, development and visualization tools, infrastructure resource management (IRM) software tools, manufacturers and VARs, consulting firms, and even some analysts with booths selling their wares, among others.

Likewise, in the onsite physical data center supporting the virtual environment, there were servers, storage, networking, cabling, and associated hardware along with applicable software, and tucked away in all of that there were also some converged I/O and networking and IOV technologies.

Yes, IOV, VIO and I/O networking convergence were at VMworld in force, just ask Jon Torr of Xsigo who was beaming like a proud papa wanting to tell anyone who would listen that his wares were part of the VMworld data center (Disclosure: Thanks for the T-Shirt).

Virtensys had their wares on display, with Bob Nappa more than happy to show the technology beyond a GUI demo, including how their solution includes disk drives and an LSI MegaRAID adapter to support VM boot while leveraging off-the-shelf or existing PCIe adapters (SAS, FC, FCoE, Ethernet, SATA, etc.) and allowing adapter sharing across servers; not to mention, they won the best new technology award at VMworld.

NextIO, who is involved in the IOV/VIO game, was there, along with convergence vendors Brocade, Cisco, QLogic, and Emulex, among others. Rest assured, there are many other vendors and VARs in the VIO and IOV game, either still in stealth, semi-stealth, or having recently launched.

IOV and VIO, with solutions like those from Aprius, Virtensys, Xsigo, and NextIO among others, are complementary to I/O and networking convergence. While they sound similar, there is in fact confusion as to whether Fibre Channel N_Port Virtual ID (FC_NPVID) and VMware virtual adapters are IOV and VIO, versus solutions that are focused on PCIe device/resource extension and sharing.

Another point of confusion around I/O virtualization and virtual I/O is blade system or blade center connectivity solutions such as HP Virtual Connect or IBM Fabric Manager, not to mention those from Egenera, which add confusion to the equation. Some of the buzzwords that you will be hearing and reading more about include PCIe single-root IOV (SR-IOV) and multi-root IOV (MR-IOV). Think of it this way: within VMware you have virtual adapters, and Fibre Channel virtualization N_Port IDs for LUN mapping/masking, zone management, and other tasks.

IOV enables localized sharing of physical adapters across different physical servers (blades or chassis), with distances measured in a few meters; after all, it is the PCIe bus that is being extended. Thus, it is not a replacement for longer-distance, data-center-wide solutions such as FCoE, or even SAS for that matter; they are complementary, or at least should be considered complementary.

The following are some links to previous articles and related material, including an excerpt (yes, another plug ;)) from chapter 9, "Networking with your servers and storage," of the new book "The Green and Virtual Data Center" (CRC). Speaking of virtual and physical, "The Green and Virtual Data Center" (CRC) was on sale at the physical VMworld bookstore this week, as well as at virtual bookstores including Amazon.com.

The Green and Virtual Data Center (CRC) on bookshelves at the VMworld bookstore

Links to some IOV, VIO, and I/O networking convergence pieces, among others, as well as news coverage, comments, and interviews can be found here and here, with StorageIOblog posts that may be of interest found here and here.

SearchSystemChannel: Comparing I/O virtualization and virtual I/O benefits – August 2009

Enterprise Storage Forum: I/O, I/O, It’s Off to Virtual Work We Go – December 2007

Byte and Switch: I/O, I/O, It’s Off to Virtual Work We Go (Book Chapter Excerpt) – April 2009

Thus I went to VMworld in San Francisco this past week, as much of the work I do involves convergence, similar to my background; that is, servers, storage, I/O networking, hardware, software, virtualization, data protection, and performance and capacity planning.

As to the virtual work, well, I spent some time on airplanes this week, which, as is often the case, served as my virtual office. Granted, it was real work that had to be done; however, I also had a chance to meet up with some fellow tweeters at a tweetup Tuesday evening before getting back on a plane, in my virtual office.

Now, I/O, I/O, it's back to real work I go at Server and StorageIO; kind of rhymes, doesn't it!
