EPA Energy Star for Data Center Storage Update

EPA Energy Star

Following up on a recent post about Green IT, energy efficiency and optimization for servers, storage and more, here are some additional thoughts and perspectives along with industry activity around the U.S. Environmental Protection Agency (EPA) Energy Star for Servers, Data Center Storage and Data Centers.

First, a quick update: Energy Star for Servers is in place, with work now underway on expanding and extending beyond the first specification. Second, the Energy Star for Data Center Storage definition is well underway, including a recent workshop to refine the initial specification along with discussion of follow-on drafts.

Energy Star for Data Centers is also currently being defined; it is focused more on macro or facility energy (notice I did not say electricity) efficiency, as opposed to the productivity or effectiveness measures that the Server and Storage specifications are working towards.

Among all of the different industry trade or special interest groups, at least on the storage front, the Storage Networking Industry Association (SNIA) Green Storage Initiative (GSI) and its Technical Work Groups (TWG) have been busily working for the past couple of years on taxonomies, metrics and other items in support of EPA Energy Star for Data Center Storage.

A challenge for SNIA, along with others working on related material pertaining to storage and efficiency, is the multi-role functionality of storage. That is, some storage simply stores data with little to no performance requirement, while other storage is actively used for reading and writing. In addition, there are various categories and architectures, not to mention hardware and software feature functionality, or vendors with different product focus and interests.

Unlike servers, which are either on and doing work or off (or in a low power mode), storage is either doing active work (e.g. moving data), storing in-active or idle data, or a combination of both. Hence for some, energy efficiency is about how much data can be stored in a given footprint with the least amount of power, known as an in-active or idle measurement.

On the other hand, storage efficiency is also about using the least amount of energy to produce the most amount of work or activity, for example IOPS or bandwidth per watt per footprint.

Thus the challenge, and the need for at least a two-dimensional model that looks at and reflects different types or categories of storage, aligned for active or in-active (e.g. storing) data, enabling apples-to-apples rather than apples-to-oranges comparisons.

This is not all that different from how EPA looks at motor vehicle categories of economy cars, sport utility, work or heavy utility among others when doing different types of work, or, in idle.

What does this have to do with servers and storage?

Simple: when a server powers down, where does its data go? That's right, to a storage system using disk, SSD (RAM or flash), tape or optical for persistence. Likewise, when there is work to be done, where does the data get read into computer memory from, or written to? That's right, a storage system. Hence the need to look at storage in a multi-role manner.

The storage industry is diverse, with some vendors or products focused on performance or activity, others on long term, low cost persistent storage for archive and backup, and some doing a bit of both. Hence the nomenclature of herding cats towards a common goal when different parties have various interests that may conflict yet support the needs of various customer storage usage requirements.

Figure 1 shows a simplified, streamlined storage taxonomy put together by SNIA representing various types, categories and functions of data center storage. The green shaded areas are a good step in the right direction to simplify yet move towards realistic and achievable benefits for storage consumers.


Figure 1 Source: EPA Energy Star for Data Center Storage web site document

The importance of the streamlined SNIA taxonomy is to help differentiate or characterize various types and tiers of storage (Figure 2) products, facilitating apples-to-apples comparison instead of apples-to-oranges. For example, for on-line primary storage, how much work or activity per energy footprint is what determines efficiency.


Figure 2: Tiered Storage Example

On the other hand, storage for retaining large amounts of data that is in-active or idle for long periods of time should be looked at on a capacity per energy footprint basis. While final metrics are still being fleshed out, some examples could be active storage gauged by IOPS, work or bandwidth per watt of energy per footprint, while other storage for idle or inactive data could be looked at on a capacity per energy footprint basis.
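These two measurement styles can be sketched with simple arithmetic. A minimal illustration in Python (hypothetical numbers and function names of my own; the actual Energy Star metrics were still being defined at the time):

```python
def active_efficiency(iops, watts):
    """Activity-per-energy metric for on-line/primary storage: IOPS per watt."""
    return iops / watts

def idle_efficiency(capacity_gb, watts):
    """Capacity-per-energy metric for idle/in-active storage: GB per watt."""
    return capacity_gb / watts

# Hypothetical systems: a performance-focused array vs. a capacity/archive array,
# each drawing 2,000 watts in the same footprint.
print(active_efficiency(50_000, 2_000))   # performance tier: 25.0 IOPS per watt
print(idle_efficiency(200_000, 2_000))    # capacity tier: 100.0 GB per watt
```

The point of the two functions is the apples-to-apples issue discussed above: comparing the performance array's capacity per watt against the archive array's would penalize it for doing a different job.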

What benchmarks or workloads will be used for simulating or measuring work or activity is still being discussed, with proposals coming from various sources. For example, the SNIA GSI TWG is developing measurements and discussing metrics, as have the Storage Performance Council (SPC) and SPEC among others, including use of simulation tools such as IOmeter, VMware VMmark, TPC, Bonnie, or perhaps even Microsoft ESRP.

Tenets of Energy Star for Data Center Storage over time hopefully will include:

  • Reflective of different types, categories, price-bands and storage usage scenarios
  • Measure storage efficiency for active work along with in-active or idle usage
  • Provide insight for both storage performance efficiency and effective capacity
  • Baseline or raw storage capacity along with effective enhanced optimized capacity
  • Easy to use metrics with more in-depth background or disclosure information

Ultimately the specification should help IT storage buyers and decision makers to compare and contrast different storage systems that are best suited and applicable to their usage scenarios.

This means measuring work or activity per energy footprint, at a given capacity and data protection level, to meet service requirements, along with measuring during in-active or idle periods. It also means showing capacity-focused storage in terms of how much data can be stored in a given energy footprint.

One thing that will be tricky, however, will be differentiating GBytes per watt in terms of capacity versus in terms of performance and bandwidth.

Here are some links to learn more:

Stay tuned for more on Energy Star for Data Centers, Servers and Data Center Storage.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

I/O Virtualization (IOV) Revisited

Is I/O Virtualization (IOV) a server topic, a network topic, or a storage topic (See previous post)?

Like server virtualization, IOV involves servers, storage, network, operating system, and other infrastructure resource management areas and disciplines. The business and technology value proposition or benefits of converged I/O networks and I/O virtualization are similar to those for server and storage virtualization.

Additional benefits of IOV include:

    • Doing more with existing resources (people and technology) while reducing costs
    • Single (or pair for high availability) interconnect for networking and storage I/O
    • Reduction of power, cooling, floor space, and other green efficiency benefits
    • Simplified cabling and reduced complexity for server network and storage interconnects
    • Boosting server performance and maximizing use of I/O or mezzanine slots
    • Reducing I/O and data center bottlenecks
    • Rapid re-deployment to meet changing workload and I/O profiles of virtual servers
    • Scaling I/O capacity to meet high-performance and clustered application needs
    • Leveraging common cabling infrastructure and physical networking facilities

Before going further, let's take a step backwards for a few moments.

To say that I/O and networking demands and requirements are increasing is an understatement. The amount of data being generated, copied, and retained for longer periods of time is elevating the importance of data storage and infrastructure resource management (IRM). Networking and input/output (I/O) connectivity technologies (figure 1) tie together facilities, servers, storage, tools for measurement and management, and best practices on a local and wide area basis to enable an environmentally and economically friendly data center.

TIERED ACCESS FOR SERVERS AND STORAGE
There is an old saying that the best I/O, whether local or remote, is an I/O that does not have to occur. I/O is an essential activity for computers of all shapes, sizes, and focus to read and write data in and out of memory (including external storage) and to communicate with other computers and networking devices. This includes communicating on a local and wide area basis for access to or over Internet, cloud, XaaS, or managed services providers such as shown in figure 1.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 1 The Big Picture: Data Center I/O and Networking

The challenge of I/O is that some form of connectivity (logical and physical), along with associated software, is required, and with it come time delays while waiting for reads and writes to occur. I/O operations that are closest to the CPU or main processor should be the fastest and occur most frequently, for access to main memory using internal local CPU-to-memory interconnects. In other words, fast servers or processors need fast I/O, in terms of low latency, I/O operations (IOPS), and bandwidth capabilities.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 2 Tiered I/O and Networking Access

Moving out and away from the main processor, I/O remains fairly fast with distance but is more flexible and cost effective. An example is the PCIe bus and I/O interconnect shown in Figure 2, which is slower than processor-to-memory interconnects but is still able to support attachment of various device adapters with very good performance in a cost effective manner.

Farther from the main CPU or processor, various networking and I/O adapters can attach to PCIe, PCIx, or PCI interconnects for backward compatibility to support various distances, speeds, types of devices, and cost factors.

In general, the faster a processor or server is, the more prone to a performance impact it will be when it has to wait for slower I/O operations.

Consequently, faster servers need better-performing I/O connectivity and networks. Better performing means lower latency, more IOPS, and improved bandwidth to meet application profiles and types of operations.
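These three measures are related by basic arithmetic: for a single outstanding I/O, achievable IOPS is bounded by 1/latency, and bandwidth is simply IOPS multiplied by the I/O transfer size. A quick sketch (hypothetical values and function names, not vendor numbers):

```python
def iops_from_latency(latency_seconds, outstanding_ios=1):
    # With N I/Os in flight, throughput scales with concurrency
    # until the device or interconnect saturates.
    return outstanding_ios / latency_seconds

def bandwidth_mb_per_sec(iops, io_size_kb):
    # Bandwidth is IOPS times the transfer size per operation.
    return iops * io_size_kb / 1024

iops = iops_from_latency(0.005)          # 5 ms per I/O -> 200 IOPS per stream
print(iops)                              # 200.0
print(bandwidth_mb_per_sec(iops, 64))    # 64 KB I/Os -> 12.5 MB/s
```

This is why lower latency and higher concurrency both matter: halving latency doubles the per-stream IOPS ceiling, and larger transfer sizes trade IOPS for bandwidth.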

Peripheral Component Interconnect (PCI)
Having established that computers need to perform some form of I/O to various devices, at the heart of many I/O and networking connectivity solutions is the Peripheral Component Interconnect (PCI) interface. PCI is an industry standard that specifies the chipsets used to communicate between CPUs and memory and the outside world of I/O and networking device peripherals.

Figure 3 shows an example of multiple servers or blades, each with dedicated Fibre Channel (FC) and Ethernet adapters (there could be two or more for redundancy). Simply put, the more servers and devices to attach to, the more adapters, cabling and complexity, particularly for blade servers and dense rack mount systems.
PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 3 Dedicated PCI adapters for I/O and networking devices

Figure 4 shows an example of a PCI implementation including various components such as bridges, adapter slots, and adapter types. PCIe leverages multiple serial unidirectional point to point links, known as lanes, in contrast to traditional PCI, which used a parallel bus design.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)

Figure 4 PCI IOV Single Root Configuration Example

In traditional PCI, bus width varied from 32 to 64 bits; in PCIe, the number of lanes combined with the PCIe version and signaling rate determines performance. PCIe interfaces can have 1, 2, 4, 8, 16, or 32 lanes for data movement, depending on card or adapter format and form factor. For example, PCI and PCIx performance can be up to 528 MB per second with a 64 bit, 66 MHz signaling rate, and PCIe is capable of over 4 GB per second (e.g., 32 Gbit per second) in each direction using 16 lanes for high-end servers.
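The lane arithmetic above can be checked in a few lines. A sketch assuming first-generation PCIe signaling of 2.5 GT/s per lane with 8b/10b encoding (the function name is mine):

```python
def pcie_gen1_bandwidth_mb_s(lanes):
    """Per-direction payload bandwidth of first-generation PCIe.

    Each lane signals at 2.5 GT/s; 8b/10b encoding carries 8 data bits
    per 10 bits on the wire, so each lane moves 250 MB/s of payload
    in each direction.
    """
    payload_bits_per_s = 2.5e9 * 8 / 10    # strip 8b/10b encoding overhead
    bytes_per_s = payload_bits_per_s / 8   # bits -> bytes: 250 MB/s per lane
    return lanes * bytes_per_s / 1e6

print(pcie_gen1_bandwidth_mb_s(1))   # 250.0 MB/s per lane
print(pcie_gen1_bandwidth_mb_s(16))  # 4000.0 MB/s, the ~4 GB/s per direction cited above
```

Later PCIe generations raise the signaling rate (and eventually switch to more efficient encodings), so the per-lane figure changes, but the lanes-times-rate structure of the calculation stays the same.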

The importance of PCIe and its predecessors is a shift from multiple vendors’ different proprietary interconnects for attaching peripherals to servers. For the most part, vendors have shifted to supporting PCIe or early generations of PCI in some form, ranging from native internal on laptops and workstations to I/O, networking, and peripheral slots on larger servers.

The most current version of PCI, as defined by the PCI Special Interest Group (PCISIG), is PCI Express (PCIe). Backwards compatibility exists by bridging previous generations, including PCIx and PCI, off a native PCIe bus or, in the past, bridging a PCIe bus to a PCIx native implementation. Beyond speed and bus width differences for the various generations and implementations, PCI adapters also are available in several form factors and applications.

Traditional PCI was generally limited to a main processor or was internal to a single computer, but current generations of PCI Express (PCIe) include support for PCI Special Interest Group (PCISIG) I/O virtualization (IOV), enabling the PCI bus to be extended to distances of a few feet. Compared to local area networking, storage interconnects, and other I/O connectivity technologies, a few feet is a very short distance; but compared to the previous limit of a few inches, extended PCIe provides the ability for improved sharing of I/O and networking interconnects.

I/O VIRTUALIZATION(IOV)
On a traditional physical server, the operating system sees one or more instances of Fibre Channel and Ethernet adapters even if only a single physical adapter, such as an InfiniBand HCA, is installed in a PCI or PCIe slot. In the case of a virtualized server, for example Microsoft Hyper-V or VMware ESX/vSphere, the hypervisor is able to see and share a single physical adapter, or multiple adapters for redundancy and performance, with guest operating systems. The guest systems see what appears to be a standard SAS, FC or Ethernet adapter or NIC using standard plug-and-play drivers.

Virtual HBAs or virtual network interface cards (NICs) and switches are, as their names imply, virtual representations of a physical HBA or NIC, similar to how a virtual machine emulates a physical machine. With a virtual HBA or NIC, physical adapter resources are carved up and allocated much like virtual machines, but instead of hosting a guest operating system like Windows, UNIX, or Linux, a SAS or FC HBA, FCoE converged network adapter (CNA) or Ethernet NIC is presented.

In addition to virtual or software-based NICs, adapters, and switches found in server virtualization implementations, virtual LAN (VLAN), virtual SAN (VSAN), and virtual private network (VPN) are tools for providing abstraction and isolation or segmentation of physical resources. Using emulation and abstraction capabilities, various segments or sub networks can be physically connected yet logically isolated for management, performance, and security purposes. Some form of routing or gateway functionality enables various network segments or virtual networks to communicate with each other when appropriate security is met.

PCI-SIG IOV
PCI SIG IOV consists of a PCIe bridge attached to a PCI root complex along with an attachment to a separate PCI enclosure (Figure 5). Other components and facilities include address translation services (ATS), single-root IOV (SR IOV), and multi-root IOV (MR IOV). ATS enables performance to be optimized between an I/O device and a server's I/O memory management. Single-root (SR IOV) enables multiple guest operating systems to access a single I/O device simultaneously, without having to rely on a hypervisor for a virtual HBA or NIC.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)

Figure 5 PCI SIG IOV

The benefit is that physical adapter cards, located in a physically separate enclosure, can be shared within a single physical server without having to incur any potential I/O overhead via a virtualization software infrastructure. MR IOV is the next step, enabling a PCIe or SR IOV device to be accessed through a shared PCIe fabric across different physically separated servers and PCIe adapter enclosures. The benefit is increased sharing of physical adapters across multiple servers and operating systems, not to mention simplified cabling, reduced complexity and improved resource utilization.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 6 PCI SIG MR IOV

Figure 6 shows an example of a PCIe switched environment, where two physically separate servers or blade servers attach to an external PCIe enclosure or card cage for attachment to PCIe, PCIx, or PCI devices. Instead of the adapter cards physically plugging into each server, a high performance short-distance cable connects each server's PCI root complex via a PCIe bridge port to a PCIe bridge port in the enclosure device.

In figure 6, either SR IOV or MR IOV can take place, depending on specific PCIe firmware, server hardware, operating system, devices, and associated drivers and management software. In an SR IOV example, each server has access to some number of dedicated adapters in the external card cage, for example InfiniBand, Fibre Channel, Ethernet, or Fibre Channel over Ethernet (FCoE) and converged network adapters (CNAs), also known as HBAs. SR IOV implementations do not allow different physical servers to share adapter cards. MR IOV builds on SR IOV by enabling multiple physical servers to access and share PCI devices such as HBAs and NICs safely and with transparency.

The primary benefit of PCI IOV is to improve utilization of PCI devices, including adapters or mezzanine cards, as well as to enable performance and availability for slot-constrained and physical footprint or form factor-challenged servers. Caveats of PCI IOV are distance limitations and the need for hardware, firmware, operating system, and management software support to enable safe and transparent sharing of PCI devices. Examples of PCIe IOV vendors include Aprius, NextIO and Virtensys among others.

InfiniBand IOV
InfiniBand based IOV solutions are an alternative to Ethernet-based solutions. Essentially, InfiniBand approaches are similar, if not identical, to converged Ethernet approaches including FCoE, with the difference being InfiniBand as the network transport. InfiniBand HCAs with special firmware are installed into servers that then see a Fibre Channel HBA and Ethernet NIC from a single physical adapter. The InfiniBand HCA also attaches to a switch or director that in turn attaches to Fibre Channel SAN or Ethernet LAN networks.

The value of InfiniBand converged networks is that they exist today, and they can be used for consolidation as well as to boost performance and availability. InfiniBand IOV also provides an alternative for those who choose not to deploy Ethernet.

From a power, cooling, floor-space or footprint standpoint, converged networks can be used for consolidation to reduce the total number of adapters and the associated power and cooling. In addition to removing unneeded adapters without loss of functionality, converged networks also free up or allow a reduction in the amount of cabling, which can improve airflow for cooling, resulting in additional energy efficiency. An example of a vendor using InfiniBand as a platform for I/O virtualization is Xsigo.

General takeaway points include the following:

  • Minimize the impact of I/O delays to applications, servers, storage, and networks
  • Do more with what you have, including improving utilization and performance
  • Consider latency, effective bandwidth, and availability in addition to cost
  • Apply the appropriate type and tiered I/O and networking to the task at hand
  • I/O operations and connectivity are being virtualized to simplify management
  • Convergence of networking transports and protocols continues to evolve
  • PCIe IOV is complementary to converged networking including FCoE

Moving forward, a revolutionary new technology may emerge that finally eliminates the need for I/O operations. However until that time, or at least for the foreseeable future, several things can be done to minimize the impacts of I/O for local and remote networking as well as to simplify connectivity.

PCIe Fundamentals Server Storage I/O Network Essentials

Learn more about IOV, converged networks, LAN, SAN, MAN and WAN related topics in Chapter 9 (Networking with your servers and storage) of The Green and Virtual Data Center (CRC) as well as in Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier).

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Could Huawei buy Brocade?

Disclosure: I have no connection to Huawei. I own no stock in, nor have I worked for, Brocade as an employee; however, I did work for three years at SAN vendor INRANGE, which was acquired by CNT. I left to become an industry analyst prior to CNT's acquisition by McData and well before Brocade bought McData. Brocade is not a current client; however, I have done speaking events pertaining to general industry trends and perspectives at various Brocade customer events in the past.

Is Brocade for sale?

Last week a Wall Street Journal article mentioned Brocade (BRCD) might be for sale.

BRCD has a diverse product portfolio for Fibre Channel and Ethernet along with the emerging Fibre Channel over Ethernet (FCoE) market, and a who's who of OEM and channel partners. Why not be for sale? The timing is good for investors, and CEO Mike Klayko and his team have arguably done a good job of shifting and evolving the company.

Generally speaking, let's keep it in perspective: everything is always for sale, and in an economy like the current one, bargains are everywhere. Many businesses are shopping; it's just a matter of how visible the shopping is for a seller or buyer, along with motivations and objectives including shareholder value.

Consequently, the coconut wires are abuzz with talk and speculation of who will buy Brocade, or perhaps who Brocade might buy, among other merger and acquisition (M and A) chatter about who will buy whom. For example, who might buy BRCD? Why not EMC (they sold McData off years ago via IPO), or IBM (they sold some of their networking business to Cisco years ago), or HP (currently an OEM partner of BRCD) as possible buyers?

Last week on Twitter I responded to a comment about who would want to buy Brocade with something to the effect of, why not a Huawei? There was some silence, except from industry luminary Steve Duplessie (have a look to see what Steve had to say).

Part of being an analyst, IMHO, should be to actually analyze things vs. simply reporting on what others want you to report or what you have read or heard elsewhere. This also means talking about scenarios that are out of the box, or in adjacent boxes from some perspectives, or that might not be in line with traditional thinking. Sometimes this means breaking away and thinking and saying what may not be obvious or practical. Having said that, let's take a step back for a moment as to why Brocade may or may not be for sale and who might or may not be interested in them.

IMHO, it has a lot to do with Cisco, and not just because Brocade sees no opportunity to continue competing with the 800 lb gorilla of LAN/MAN networking that has moved into Brocade's stronghold of storage network SANs. Cisco is upsetting the table or apple cart with its server partners IBM, Dell, HP, Oracle/Sun and others by testing the waters of the server world with its UCS. So far I see this as something akin to a threat testing the defenses of a target before launching a full-out attack.

In other words, checking to see how the opposition responds, what defenses are put up, collecting G2 or intelligence, as well as gauging how the rest of the world or industry might respond to an all-out assault or shift of power or control. Of course, HP, IBM, Dell and Sun/Oracle will not let this move into their revenue and account control go unnoticed, with initial counter-announcements having been made, some re-emphasizing their relationships with Brocade along with Brocade's recent acquisition of Ethernet/IP vendor Foundry.

Now what does this have to do with Brocade potentially being sold and why the title involving Huawei?

Many of the recent industry acquisitions have been focused on shoring up technology or intellectual property (IP), eliminating a competitor, or simply taking advantage of market conditions. For example, Data Domain was sold to EMC in a bidding war with NetApp, HP bought IBRIX, Oracle bought (or is trying to buy) Sun, Oracle also bought Virtual Iron, Dell bought Perot after HP bought EDS a year or so ago, while Xerox bought ACS, and so the M and A game continues among other deals.

Some of the deals are strategic, many are tactical. Brocade being bought I would put in the category of a strategic scenario: a bargaining chip, or even a pawn if you prefer, in a much bigger game that is about more than switches, directors, HBAs, LANs, SANs, MANs, WANs, POTS and PANs (check out my book "Resilient Storage Networks" (Elsevier))!

So with conversations focused around Cisco expanding into servers to control the data center discussion, mindset, thinking, budgets and decision making, why wouldn't an HP, IBM or Dell, let alone a NetApp, Oracle/Sun or even EMC, want to buy Brocade as a bargaining chip in a bigger game? Why not a Ciena (they just bought some of Nortel's assets), Juniper or 3Com (more of a merger of equals to fight Cisco), Microsoft (might upset their partner Cisco) or Fujitsu (their telco group, that is) among others?

Then why not Huawei, a company some may have heard of and others may not have.

Who is Huawei you might ask?

Simple: they are a very large IT solutions provider and a large player in China, with global operations including R&D in North America and many partnerships with U.S. vendors. By rough comparison, Cisco's most recently reported annual revenues are about $36.1B (all figures USD), BRCD about $1.5B, Juniper about $3.5B, 3Com about $1.3B, and Huawei about $23B with a year over year sales increase of 45%. Huawei has previous partnerships with storage vendors including Symantec and Falconstor among others. Huawei also has had a partnership with 3Com (H3C), a company that was the first of the LAN vendors to get into SANs (prematurely), beating Cisco easily by several years.

Sure, there would be many hurdles and issues, similar to the ones CNT and INRANGE had to overcome, or McData and CNT, or Brocade and McData among others. However, in the much bigger game of IT account and thus budget control played by HP, IBM and Sun/Oracle among others, wouldn't maintaining a dual source for customers' networking needs make sense, or at least serve as a check on Cisco's expansion efforts? If nothing else, it maintains the status quo in the industry for now; or, if the rules and game are changing, wouldn't some of the bigger vendors want to get closer to the markets where Huawei is seeing rapid growth?

Does this mean that Brocade could be bought? Sure.
Does this mean Brocade cannot compete or is a sign of defeat? I don’t think so.
Does this mean that Brocade could end up buying or merging with someone else? Sure, why not.
Or, is it possible that someone like Huawei could end up buying Brocade? Why not!

Now, just for fun: if Huawei were to buy Brocade, could the result be renamed, or spun off as a division called HuaweiCade or HuaCadeWei? Anything is possible when you look outside the box.

Nuff said for now, food for thought.

Cheers – gs

Greg Schulz – StorageIO, Author “The Green and Virtual Data Center” (CRC)

I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)

Ok, so I should have used that intro last week before heading off to VMworld in San Francisco instead of after the fact.

Think of it as a high latency title or intro, kind of like attaching a fast SSD to a slow, high latency storage controller, or a fast server attached to a slow network, or a fast network with slow storage and servers; it is what it is.

I/O virtualization (IOV) and virtual I/O (VIO), along with I/O and networking convergence, have been getting more and more attention lately, particularly on the convergence front. In fact one might conclude that it is all of a sudden trendy to be on the IOV, VIO and convergence bandwagon, given how cloud, SOA and SaaS hype are being challenged, perhaps even turning into storm clouds?

Let's get back on track, or in the case of the past week, get back in the car, get back on the plane, get back into the virtual office, and look at what it all has to do with virtual I/O and VMworld.

The convergence game has at its center Brocade, emanating from the data center and storage centric I/O corner, challenging Cisco, hailing from the MAN, WAN, LAN general networking corner.

Granted, both vendors have dabbled with success in each other's corners or areas of focus in the past. For example, Brocade has, via acquisitions (McData+Nishan+CNT+INRANGE among others), a diverse and capable stable of local and long distance SAN connectivity and channel extension for mainframe and open systems, supporting data replication, remote tape and wide area clustering. Not to mention deep bench experience with the technologies, protocols and partner solutions for LAN, MAN (xWDM), WAN (iFCP, FCIP, etc.) and even FAN (file area networking, aka NAS), along with iSCSI in addition to Fibre Channel and FICON solutions.

Disclosure: Here’s another plug ;) Learn more about SANs, LANs, MANs, WANs, POTS and PANs and related technologies and techniques in my book “Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures” (Elsevier).

Cisco, not to be outdone, has a background in the LAN, MAN, WAN space directly, or, similar to Brocade, via partnerships, with product, experience and depth. In fact, while many of my former INRANGE and CNT associates ended up at Brocade via McData or indirectly, some ended up at Cisco. While Cisco is known for general networking, over the past several years they have gone from zero to being successful in the Fibre Channel and yes, even the FICON mainframe space, while like Brocade (with HBAs) dabbling in other areas like servers and storage, not to mention consumer products.

What does this have to do with IOV and VIO, let alone VMworld and my virtual office? Hang on, hold that thought for a moment; let's get the convergence aspect out of the way first.

On the I/O and networking convergence (e.g. Fibre Channel over Ethernet, FCoE) scene, both Brocade (Converged Enhanced Ethernet, CEE) and Cisco (Data Center Ethernet, DCE) along with their partners are rallying their respective camps. This is similar to how a pair of prize fighters maneuver in advance of a match, including plenty of trash talk, hype and all that goes with it. Brocade and Cisco throwing mud balls (or spam) at each other, or having someone else do it, is nothing new; however, in the past each has had its own core areas of focus, coming from different tenets, in some cases selling to different people in an IT environment or in VAR and partner organizations. Brocade and Cisco are not alone, nor is the I/O networking convergence game the only one in play, as it is being complemented by the IOV and VIO technologies addressing different value propositions in IT data centers.

Now on to the IOV and VIO aspect along with VMworld.

For those of you who attended VMworld and managed to get outside of the session rooms, media/analyst briefing or reeducation rooms, or partner and advisory board meetings to walk the expo hall show floor, there was the usual sea of vendors and technology. There were servers (physical and virtual), storage (physical and virtual), terminals, displays and other hardware, I/O and networking, data protection, security, cloud and managed services, development and visualization tools, infrastructure resource management (IRM) software tools, manufacturers and VARs, consulting firms and even some analysts with booths selling their wares, among others.

Likewise, in the onsite physical data center to support the virtual environment, there were servers, storage, networking, cabling and associated hardware along with applicable software and tucked away in all of that, there were also some converged I/O and networking, and, IOV technologies.

Yes, IOV, VIO and I/O networking convergence were at VMworld in force, just ask Jon Torr of Xsigo who was beaming like a proud papa wanting to tell anyone who would listen that his wares were part of the VMworld data center (Disclosure: Thanks for the T-Shirt).

Virtensys had their wares on display, with Bob Nappa more than happy to show the technology beyond a GUI demo, including how their solution includes disk drives and an LSI MegaRAID adapter to support VM boot while leveraging off-the-shelf or existing PCIe adapters (SAS, FC, FCoE, Ethernet, SATA, etc.) and allowing adapter sharing across servers; not to mention, they won the best new technology award at VMworld.

NextIO, which is involved in the IOV/VIO game, was there along with convergence vendors Brocade, Cisco, QLogic and Emulex among others. Rest assured, there are many other vendors and VARs in the VIO and IOV game, either still in stealth, semi-stealth, or having recently launched.

IOV and VIO are complementary to I/O and networking convergence, as are solutions like those from Aprius, Virtensys, Xsigo, NextIO and others. While they sound similar, there is in fact confusion as to whether Fibre Channel N_Port ID Virtualization (NPIV) and VMware virtual adapters are IOV and VIO, versus solutions that are focused on PCIe device/resource extension and sharing.

Blade system or blade center connectivity solutions such as HP Virtual Connect and IBM Fabric Manager, not to mention those from Egenera, add another point of confusion around I/O virtualization and virtual I/O. Some of the buzzwords that you will be hearing and reading more about include PCIe Single Root IOV (SR-IOV) and Multi-Root IOV (MR-IOV). Think of it this way: within VMware you have virtual adapters, and Fibre Channel virtual N_Port IDs for LUN mapping/masking, zone management and other tasks.

IOV enables localized sharing of physical adapters across different physical servers (blades or chassis) with distances measured in a few meters; after all, it is the PCIe bus that is being extended. Thus, it is not a replacement for longer-distance data center solutions such as FCoE, or even SAS for that matter; they are complementary, or at least should be considered complementary.
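To make the SR-IOV idea above a bit more concrete, here is a minimal, hypothetical sketch of the Linux sysfs convention for carving a physical PCIe adapter into virtual functions that can be handed to different VMs or servers. The device path and VF counts are made up, and a mock directory tree stands in for real hardware; on a live system this would require an SR-IOV capable adapter, a supporting driver and root privileges.

```python
import os
import tempfile

def write_attr(path, value):
    """Write a value to a sysfs-style attribute file."""
    with open(path, "w") as f:
        f.write(value)

def enable_virtual_functions(dev_path, num_vfs):
    """Enable SR-IOV virtual functions on a PCIe physical function by
    writing to its sriov_numvfs attribute (Linux sysfs convention).
    Resets the count to 0 first, since the kernel rejects changing a
    non-zero VF count directly."""
    total = int(open(os.path.join(dev_path, "sriov_totalvfs")).read())
    if num_vfs > total:
        raise ValueError(f"device supports at most {total} VFs")
    attr = os.path.join(dev_path, "sriov_numvfs")
    write_attr(attr, "0")
    write_attr(attr, str(num_vfs))
    return num_vfs

# Demonstration against a mock sysfs tree; no real hardware is assumed.
# On an actual host the path would resemble /sys/bus/pci/devices/0000:03:00.0
mock_root = tempfile.mkdtemp()
dev = os.path.join(mock_root, "0000:03:00.0")
os.makedirs(dev)
write_attr(os.path.join(dev, "sriov_totalvfs"), "8")  # adapter exposes up to 8 VFs
write_attr(os.path.join(dev, "sriov_numvfs"), "0")    # no VFs enabled yet

enable_virtual_functions(dev, 4)
print(open(os.path.join(dev, "sriov_numvfs")).read())  # prints 4
```

Note that this sketch only covers the single-root (SR-IOV) case, where the adapter is shared among VMs on one host; MR-IOV, which shares an adapter across multiple physical hosts over an extended PCIe fabric, is what the Aprius, Virtensys, Xsigo and NextIO class of solutions build upon.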

The following are some links to previous articles and related material, including an excerpt (yes, another plug ;)) from chapter 9, “Networking with your servers and storage”, of my new book “The Green and Virtual Data Center” (CRC). Speaking of virtual and physical, “The Green and Virtual Data Center” (CRC) was on sale at the physical VMworld book store this week, as well as at virtual book stores including Amazon.com.

The Green and Virtual Data Center

The Green and Virtual Data Center (CRC) on book shelves at VMworld Book Store

Links to some IOV, VIO and I/O networking convergence pieces among others, as well as news coverage, comments and interviews can be found here and here with StorageIOblog posts that may be of interest found here and here.

SearchSystemChannel: Comparing I/O virtualization and virtual I/O benefits – August 2009

Enterprise Storage Forum: I/O, I/O, It’s Off to Virtual Work We Go – December 2007

Byte and Switch: I/O, I/O, It’s Off to Virtual Work We Go (Book Chapter Excerpt) – April 2009

Thus I went to VMworld in San Francisco this past week, as much of the work I do involves convergence, similar to my background; that is, servers, storage, I/O networking, hardware, software, virtualization, data protection, performance and capacity planning.

As to the virtual work, well, I spent some time on airplanes this week, which, as is often the case, served as my virtual office. Granted, it was real work that had to be done; however, I also had a chance to meet up with some fellow tweeters at a tweetup Tuesday evening before getting back on a plane in my virtual office.

Now, I/O, I/O, it's back to real work I go at Server and StorageIO; kind of rhymes, doesn't it!

I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)

Summer Book Update and Back to School Reading

August, and thus Summer 2009 in the northern hemisphere, is swiftly passing by, and the start of a new school year is just around the corner, which means it is also time for final vacations and time at the beach, pool, golf course, amusement park or favorite fishing hole among other pastimes. To help get those with IT interests ready for fall (or late summer) book shopping, here are some Amazon lists (here, here and here) for ideas; after all, the 2009 holiday season is not that far away!

Here’s a link to my Amazon.com Authors page that includes coverage of both my books, "The Green and Virtual Data Center" (CRC) and "Resilient Storage Networks – Designing Flexible Scalable Data Infrastructures" (Elsevier).

The Green and Virtual Data Center (CRC)Resilient Storage Networks - Designing Flexible Scalable Data Infrastructures (Elsevier)

Click here to look inside "The Green and Virtual Data Center" (CRC) and/or inside "Resilient Storage Networks" (Elsevier).

It's been six months since the launch announcement of my new book "The Green and Virtual Data Center" (CRC) and general availability at Amazon.com and other global venues here and here. In celebration of the six-month anniversary of the book launch (thank you very much to all who have bought a copy!), here is some coverage, including what is being said, related articles, interviews, book reviews and more.

Article: New Green Data Center: shifting from avoidance to becoming more efficient IT-World August 2009

wsradio.com interview discussing themes and topics covered in the book including closing the green gap and shifting towards an IT efficiency and productivity for business sustainability.

Closing the green gap: Discussion about expanding data centers with environmental benefits at SearchDataCenter.com

From Greg Brunton – EDS/An HP Company: “Greg Schulz has presented a concise and visionary perspective on the green issues. He has cut through the hype and highlighted where to start and what the options are. A great place to start your green journey and a useful handbook to have as the journey continues.”

From Rick Bauer – Storage Networking Industry Association (SNIA) – Education and Technology Director: “Greg is one of the smartest ‘good guys’ in the storage industry. He has been a voice of calm amid all the ‘green IT hype’ over the past few years. So when he speaks of the possible improvements that Green Tech can bring, it’s a much more realistic approach…”

From CMG (Computer Measurement Group) MeasureIT
I must admit that I have been slightly skeptical at times, when it comes to what the true value is behind all of the discussions on “green” technologies in the data center. As someone who has seen both the end user and vendor side of things, I think my skepticism gets heightened more than it normally would be. This book really helped dispel my skepticism.

The book is extremely well organized and easy to follow. Each chapter has a very good introduction and comprehensive summary. This book could easily serve as a blueprint for organizations to follow when they look for ideas on how to design new data centers. It’s a great addition to an IT Bookshelf. – Reviewed by Stephen R. Guendert, PhD (Brocade and CMG MeasureIT). Click here to read the full review in CMG MeasureIT.

From Tom Becchetti – IT Architect: “This book is packed full of information. From ecological and energy efficiencies, to virtualization strategies and what the future may hold for many of the key enabling technologies. Greg’s writing style benefits both technologists and management levels.”

From MSP Business Journal: Greg Schulz named an Eco-Tech Warrior – April 2009

From David Marshall at VMblog.com: If you follow me on LinkedIn, you might have seen that I had been reading a new book that came out at the beginning of the year titled “The Green and Virtual Data Center” by Greg Schulz. Rather than writing about a specific virtualization platform and how to get it up and running, Schulz takes an interesting approach, stepping back and looking at the big picture. After reading the book, I reached out to the author to ask him a few more questions and to share his thoughts with readers of VMBlog.com. I know I’m not Oprah’s Book Club, but I think everyone here will enjoy this book. Click here to read more of what David Marshall has to say.

From Zen Kishimoto of Altaterra Research: Book Review May 2009

From Kurt Marko of Processor.com Green and Virtual Book Review – April 2009

From Serial Storage Wire (STA): Green and SASy = Energy and Economic, Effective Storage – March 2009

From Computer Technology Review: Recent Comments on The Green and Virtual Data Center – March 2009

From Alan Radding in Big Fat Finance Blog: Green IT for Finance Operations – April 2009

From VMblog: Comments on The Green and Virtual Data Center – March 2009

From StorageIO Blog: Recent Comments and Tips – March 2009

From Data Center Links: John Rath comments on “The Green and Virtual Data Center”

From InfoStor: Dave Simpson comments on “The Green and Virtual Data Center”

From Sys-Con: Georgiana Comsa comments on “The Green and Virtual Data Center”

From Ziff Davis: Heather Clancy comments on “The Green and Virtual Data Center”

From Byte & Switch: Green IT and the Green Gap – February 2009

From GreenerComputing: Enabling a Green and Virtual Data Center – February 2009

From Sys-con: Comments on The Green and Virtual Data Center – March 2009

From ServerWatch: Green IT: Myths vs. Realities – February 2009

From Byte & Switch: Going Green and the Economic Downturn – February 2009

From Business Wire: Comments on The Green and Virtual Data Center Book – January 2009

Additional content and news can be found here and here with upcoming events listed here.

Interested in Kindle? Here’s a link to get a Kindle copy of "Resilient Storage Networks" (Elsevier) or to send a message via Amazon to publisher CRC that you would like to see a Kindle version of "The Green and Virtual Data Center". While you are at it, I also invite you to become a fan of my books at Facebook.

Thanks again to everyone who has obtained their copy of either of my books, also thanks to all of those who have done reviews, interviews and helped in many other ways!

Enjoy the rest of your summer!

Cheers – gs

Greg Schulz – twitter @storageio

Visit my new Amazon authors page

Amazon.com

In addition to my books The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier) being on Amazon, I now also have an Amazon Authors page.

If you are an Amazon shopper, you may already know about author pages; if not, they are a feature available to book authors to tie together information about their books, blogs and other material in one venue, similar to what other social networking and media sites provide.

Visit my new authors page at Amazon.com by clicking here and if you have read either of my books, feel free to leave a review or comment, thanks in advance.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Tiered Communication and Media Venues

Storage I/O trends

Someone recently asked me what I have been doing, as they had not seen or heard anything from me on the web in a long time, which had me a bit puzzled. Then it dawned on me: perhaps the person was focused on reading or following just one of the many different venues that I'm involved with around the world, ranging from print to web to live in-person to social networking, and perhaps a site that I have not been doing much with as of late. On the flip side, I hear from others about how much they see and hear, good, bad or indifferent, across all the different venues I'm involved with, wondering how it's all done or possible and how big of an army I have to support all the content and venues.

Well, that got me to thinking a bit about how people have various preferences for how they get or share information. Pondering all the different mediums available for disseminating, receiving and sharing information and discussion, do you have a preferred medium, perhaps vetted via a traditional publisher or publication or un-vetted via the rapid fire quick pace world of Twitter, IM, personal blogs and social networking?

Do Webs, Blogs, Twitter, IM, Email, Articles, Books, Conferences, Podcasts, Magazines and other communication mediums fall under the class of tiered media and communications? IMHO sure, to each their own or many preferences.

What do these have to do with servers, storage, I/O networks and associated data management technologies and techniques? Simple, they are all forms of communications and information exchange that different people have preferences for getting or sharing information, news and opinions.

Now what does any of this have to do with me and StorageIO? Simple: I realize that people have their own preferences on how they get or share information, and thus I give and take part in different venues using various mediums around the world. How is StorageIO using and participating in these various mediums and venues? Read on and see some examples of how I take part with different people using several diverse forms or mediums.

For some, it's via web sites such as the main StorageIO web site (www.storageio.com), where information is added with regard to news, events, books, tips and articles, white papers and reports, services and experience, among other content or information.

StorageIO website www.storageio.com

Some people prefer traditionally published, printed and vetted content such as "Resilient Storage Networks" (Elsevier) ISBN-10: 1555583113 or ISBN-13: 978-1555583118 or "The Green and Virtual Data Center" (Auerbach) ISBN-10: 1420086669 or ISBN-13: 978-1420086669 as well as digital versions of published books like those on Kindle.

Books by StorageIO at www.storageio.com/books.html

Yet another venue are events such as conferences, seminars, custom events or other live in person meetings such as those found on the StorageIO events page.

Events at www.storageio.com/events.html

Some people like reading blogs such as Greg's StorageIO Blog (www.storageioblog.com), which can also be accessed via the main StorageIO web site (www.storageio.com).

StorageIO Blog

For other people, the preference is reading about information, industry trends and perspectives, quotes and interviews via traditional news sources, both IT industry related as well as market verticals such as those among others at the StorageIO in the News page.

StorageIO in the news at www.storageio.com/news.html

Another preference is to get information via pod cast, web cast or videos including those found here, here, here, and here among others.

Podcasts with StorageIO

Then the new and emerging mediums including Twitter, Facebook, Friendfeed, Plaxo, Technorati, and Linkedin, among others.

StorageIO on twitter at www.twitter.com/storageio

While the number of print-based industry-specific publications is on the decline, there are still some that print monthly, bi-monthly or quarterly editions of their publications, along with digital versions to complement web-based content, such as SearchStorage and Enterprise Storage Forum among many others.

StorageIO in Print (physical and virtual)

Did I answer the question of how StorageIO is using and participating in different venues? If not, check out those mentioned above to learn and see more. In a nutshell, there is a mix of working with existing venues ranging from books to articles in journals, tips and commentary in news and other venues. There are also industry trends and perspectives white papers and solution briefs, web casts, pod casts and videos. There is live, in-person participation at conferences, seminars and custom events, as well as regular updates on the web site and blog site. For those who are into real time, even more so than a blog, there are the social sites including Twitter and networking sites including LinkedIn among others, not to mention RSS feeds.

Do you prefer to get news and information as it happens, or perhaps even before it happens while the story is still unfolding, or perhaps to wait and get the story with insight and perspectives along with the story behind the story?

Do you have a preferred venue and medium for getting and enhancing information, or perhaps some combination of the above or others, including instant messaging such as AOL (storageio), email (info at storageio dot com), RSS, POTS (Plain Old Telephone Service), Skype or snail mail among many other venues, tools or aggregators? If so, what is it?

Needless to say, there are plenty of changes and options for getting and giving information, and the one thing we can count on being constant is change itself.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Storage Magazine in a Virtual World

Despite some internet chatter the other day that the TechTarget Storage magazine (not to be confused with a different Dutch magazine that I happen to have recently appeared in) had ceased to exist, the reality is that the print version, like so many other publications, is giving way to an on-line, digital-only version, as has been the trend recently.

Printed magazines, whether weekly, monthly or quarterly, for general interest or industry specific, have all been undergoing a transformation over the past decade, with examples including the Sears catalog giving way to new mediums and venues on the internet.

For the most part, those printed magazines that still exist keep getting smaller and thinner, with less and less content corresponding to the decrease in advertising dollars that in many cases keeps the publications in existence. Personally, I like and have adjusted to having virtual magazines in the form of on-line HTML or PDF or some other downloadable form as part of a virtual desktop. However, I still enjoyed being able to take a pile of magazines onto an airplane to read, especially when you have to turn off your electronics and before nap time.

Magazines are not the only publications going on-line; in addition to catalogs that have given way to the likes of Amazon.com among others, more books are also being published on-line, either in PDF or secure download as well as in emerging Kindle versions. My book Resilient Storage Networks (Elsevier) is currently available at Amazon.com in both print and Kindle versions, and while my new book The Green and Virtual Data Center (Auerbach), which can be ordered now at Amazon (and other venues), will initially be in print, rest assured there will also be a digital version very soon.

Books have been an interesting scenario: talking with other authors who have seen an increase in digital versions being sold, there is still a preference among readers for a physical version that they can carry with them, make notes in, or use as a desktop paperweight, whatever suits their preference.

Back to TechTarget Storage magazine: what's interesting is that TechTarget had only a handful of printed publications, with the bulk of their content being on-line at sites like SearchStorage and other sibling sites, as well as their conferences, seminars and other custom events.

While I have not been involved with Storage magazine as long as early contributors like Steve Foskett, who has a nice posting on his blog, I have been involved with Storage magazine among many other TechTarget publications, as well as most of the other industry-related publications (print and on-line).

My involvement with Storage magazine over the years has included writing articles (Scaling SANs, Bridging the Gap, and Automate Data Recovery), doing tips and ask the experts (ATE) entries, as well as appearing in other authors' articles providing commentary and industry trends and perspectives quotes, not to mention always looking forward to getting my monthly hard copy to take with me and read on airplanes or trains when traveling. From what I understand, Storage magazine and many of the people involved with producing the publication will continue to produce it in conjunction with SearchStorage and sibling sites such as SearchSMBStorage, among others, where you can find various articles, tips, podcasts and other material from myself and others in the industry.

Storage Magazine
December 2008 Storage Magazine

The final printed version is the December 2008 edition, and while I do not have any articles in it, I am honored to appear via interviews and quotes in a couple of articles, including How your SAN will evolve by Alan Radding as well as Next Year's hot technologies by Ellen O'Brien.

So here's to one more printed publication going to the archives; I look forward to the future of the on-line version, as well as all of the other on-line venues that are doing what they need to do to remain viable in a changing world.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved