Green IT and Virtual Data Centers

Green IT and virtual data centers are not a fad, nor are they limited to large-scale environments.

Paying attention to how resources are used to deliver information services in a flexible, adaptable, energy-efficient, environmentally and economically friendly way to boost efficiency and productivity is here to stay.

Read more here in the article I did for the folks over at Enterprise Systems Journal.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

How to win approval for upgrades: Link them to business benefits

Drew Robb has another good article over at Processor.com with various tips and strategies on how to gain approval for hardware (or software) purchases, including some comments by yours truly.

My tips and advice quoted in the story include linking technology resources to business needs and impact, which may be common sense, however it is still a time-tested, effective technique.

Instead of speaking tech talk such as performance, capacity, availability, IOPS, bandwidth, GHz, frames or packets per second, VMs per PM, or dedupe ratios, map them to business speak, that is, things that finance, accountants, MBAs or other management personnel understand.

For example, how many transactions at a given response time can be supported by a given type of server, storage or networking device.

Or, put a different way, with a given device, how much work can be done and what is the associated monetary or business benefit.
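To make that mapping concrete, here is a minimal sketch translating device tech-speak (IOPS, watts, purchase price) into business-speak (cost per transaction). All numbers and names are hypothetical illustrations, not vendor figures:

```python
# Hypothetical illustration: translate tech metrics into business metrics.
# All numbers and names are made up for the example, not vendor figures.

def cost_per_transaction(device_cost, watts, kwh_price, iops, io_per_txn, years=3):
    """Rough cost per transaction over the device's service life."""
    energy_cost = (watts / 1000.0) * kwh_price * 24 * 365 * years
    total_cost = device_cost + energy_cost          # acquisition + energy
    txn_per_sec = iops / io_per_txn                 # business work per second
    total_txns = txn_per_sec * 3600 * 24 * 365 * years
    return total_cost / total_txns

# A $100K device doing 50,000 IOPS at 500 watts, 10 I/Os per transaction:
cost = cost_per_transaction(100_000, 500, 0.10, 50_000, 10)
```

Management may not care about 50,000 IOPS, but a fraction of a cent per transaction is a number finance can compare across alternatives.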

Likewise, if you do not have a capacity plan for servers, storage, I/O and networking along with software and facilities covering performance, availability, capacity and energy demands now is the time to put one in place.

More on capacity and performance planning later, however for now, if you want to learn more, check Chapter 10 (Performance and Capacity Planning) in my book Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier).

Ok, nuff said.

Cheers gs


Did HP respond to EMC and Cisco VCE with a Microsoft HyperV bundle?

Last week EMC and Cisco, along with Intel and VMware, created the VCE coalition along with a consumption-model-based service joint venture called Acadia.

In other activity last week, HP made several announcements including:

  • Improvements in sensing technologies
  • StorageWorks enhancements (SVSP, IBRIX, EVA and HyperV, X9000 and others)

EMC and Cisco were relatively quiet this week on announcement front, however HP unleashed another round of announcements that among others included:

  • Quarterly financial results
  • SMB server, storage, network and virtualization enhancements (here, here, here and here)
  • Acquisitions of 3COM (see related blog post here)

The reason I bring up all of this HP activity is not to simply re-cap all of the news and announcements, which you can find on many other blogs or news sites; rather, it is that I see a trend.

That trend appears to be one of a company on the move, not ready to sit back on its laurels, rather a company that continues to innovate in-house and via acquisitions.

Some of those acquisitions, including IBRIX, were relatively small; others, like EDS last year and 3COM this week, would be seen by some as large and by others as medium sized. Either way, HP has been busy expanding its portfolio of technology solutions and services offerings along with its comprehensive IT stack.

Cisco, EMC and HP are examples of companies looking to expand their IT stacks and footprint in terms of diversifying current product focus and reach, along with extending into new or further into existing customer and market sector areas. Last week's EMC and Cisco announcement signaled two large players combining their resources to make virtualization and private clouds easy to acquire and deploy for mid to large size environments, with a theme around VMware.

This week, buried in all of the HP announcements, was one that caught my eye: a virtualization solution bundle designed for small business (that is, something smaller than a vblock0). That was missing in the Cisco and EMC news of last week, however it is something I'm sure will be addressed sooner versus later.

In the case of HP, the other thing with their virtualization bundle was the focus on the mid to small businesses that fall into the broad and diverse SMB category, not to mention the inclusion of Microsoft.

Yes, that is right, while a VMware based solution from HP would be a no-brainer given all of the activity the two companies are involved in as joint partners, Microsoft HyperV was front and center.

Is this a reaction to last week's Cisco and EMC salvo?

Perhaps, and some will jump to that conclusion. However, I will also offer this alternative scenario: 85-90 percent of servers consolidated into virtual machines (VMs) on VMware or other hypervisors, including Microsoft HyperV, are Windows based.

Likewise, as one of the largest if not the largest server vendors (pick your favorite server category or price band), and one that also happens to be one of the largest Microsoft Windows partners, I would have been more surprised if HP had not done a HyperV bundle.

While Cisco and EMC may stay the course, or at least talk the talk with a VMware affinity in the Acadia and VCE coalition for the time being, I would expect HP to flex its wings a bit and show diversity of support for multiple hypervisors and operating systems across its various server, network, storage and services platforms.

I would not be surprised to see some VMware based bundles appear over time, building on previously announced HP BladeSystem Matrix solution bundles.

Welcome back my friends to the show that never ends, that is the on-going server, storage, networking, virtualization, hardware, software and services solutions game for enabling the adaptive, dynamic, flexible, scalable, resilient, service oriented, public or private cloud, infrastructure as a service green and virtual data center.

Stay tuned, there is much more to come!

Ok, nuff said.

Cheers gs


Acadia VCE: VMware + Cisco + EMC = Virtual Computing Environment

Was today the day the music died? (click here or here if you are not familiar with the expression)

Add another three letter acronym (TLA) to your IT vocabulary if you are involved with server, storage, networking, virtualization, security and related infrastructure resource management (IRM) topics.

That new TLA is Virtual Computing Environment (VCE), a coalition formed by EMC and Cisco along with partner Intel, announced today together with a joint venture called Acadia. Of course EMC, who also happens to own VMware for virtualization and RSA for security software tools, brings those to the coalition (read the press release here).

For some quick fun, twittervile and the blogosphere have come up with other meanings such as:

VCE = Virtualization Communications Endpoint
VCE = VMware Cisco EMC
VCE = Very Cash Efficient
VCE = VMware Controls Everything
VCE = Virtualization Causes Enthusiasm
VCE = VMware Cisco Exclusive

Ok, so much for some fun, at least for now.

With Cisco, EMC and VMware announcing their new VCE coalition, has this signaled the end of servers, storage, networking, hardware and software for physical, virtual and clouding computing as we know it?

Does this mean all other vendors not in this announcement should pack it up, game over and go home?

The answer in my perspective is NO!

No, the music did not end today!

NO, servers, storage and networking for virtual or cloud environments has not ended.

Also, NO, other vendors do not have to go home today, the game is not over!

However a new game is on, one that some have seen before, for others it is something new, exciting perhaps revolutionary or an industry first.

What was announced?
Figure 1 shows a general vision or positioning from the three major players involved along with four tenets or topic areas of focus. Here is a link to a press release where you can read more.

CiscoVirtualizationCoalition.png
Figure 1: Source: Cisco, EMC, VMware

General points include:

  • A new coalition (e.g. VCE) focused on virtual compute for cloud and non cloud environments
  • A new company Acadia owned by EMC and Cisco (1/3 each) along with Intel and VMware
  • A new go to market pre-sales, service and support cross technology domain skill set team
  • Solution bundles or vblocks with technology from Cisco, EMC, Intel and VMware

What are the vblocks and components?
Pre-configured (see this link for a 3D model), tested, and supported with a single throat to choke model for streamlined end to end management and acquisition. There are three vblocks or virtual building blocks that include server, storage, I/O networking, and virtualization hypervisor software along with associated IRM software tools.

Cisco is bringing to the game their Unified Compute Solution (UCS) servers along with Nexus 1000v and Multilayer Director (MDS) switches; EMC is bringing storage (Symmetrix VMax, CLARiiON and unified storage) along with their RSA security and Ionix IRM tools. VMware is providing their vSphere hypervisors running on Intel based servers (via Cisco).

The components include:

  • EMC Ionix management tools and framework – The IRM tools
  • EMC RSA security framework software – The security tools
  • EMC VMware vSphere hypervisor virtualization software – The virtualization layer
  • EMC VMax, CLARiiON and unified storage systems – The storage
  • Cisco Nexus 1000v and MDS switches – The Network and connectivity
  • Cisco Unified Compute Solution (UCS) – The physical servers
  • Services and support – Cross technology domain presales, delivery and professional services

CiscoEMCVMwarevblock.jpg
Figure 2: Source: Cisco vblock (Server, Storage, Networking and Virtualization Software) via Cisco

The three vblock models are:
Vblock0: entry level system due out in 2010 supporting 300 to 800 VMs for initial customer consolidation, private clouds or other diverse applications in small or medium sized business. You can think of this as a SAN in a CAN or Data Center in a box with Cisco UCS and Nexus 1000v, EMC unified storage secured by RSA and VMware vSphere.

Vblock1: mid sized building block supporting 800 to 3000 VMs for consolidation and other optimization initiatives using Cisco UCS, Nexus and MDS switches along with EMC CLARiiON storage secured with RSA software hosting VMware hypervisors.

Vblock2: high end system supporting 3000 to 6000 VMs for large scale data center transformation or new virtualization efforts, combining Cisco Unified Computing System (UCS), Nexus 1000v and MDS switches and EMC Symmetrix VMax storage with RSA security software hosting the VMware vSphere hypervisor.
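The three VM capacity ranges above can be summarized in a small lookup sketch (model names and VM counts as described in the announcement; the helper function is my own illustration):

```python
# vblock VM capacity ranges as described in the announcement.
VBLOCKS = {
    "vblock0": (300, 800),    # entry level, due out in 2010
    "vblock1": (800, 3000),   # mid sized building block
    "vblock2": (3000, 6000),  # high end
}

def pick_vblock(vm_count):
    """Return the smallest vblock model whose VM range covers vm_count.

    Note the published ranges share their boundary values (e.g. 800 VMs),
    so boundary counts resolve to the smaller model here.
    """
    for name in sorted(VBLOCKS):
        lo, hi = VBLOCKS[name]
        if lo <= vm_count <= hi:
            return name
    return None  # outside all published ranges

model = pick_vblock(2000)   # a 2000 VM consolidation lands on vblock1
```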

What does this all mean?
With this move, for some it will add fuel to the campfire that Cisco is moving closer to EMC and or VMware with a pre-nuptial via Acadia. For others, this will be seen as fragmentation for virtualization, particularly if other vendors such as Dell, Fujitsu, HP, IBM and Microsoft among others are kept out of the game, not to mention their channels of VARs or barriers for IT customers.

Acadia is a new company or more precisely, a joint venture being created by major backers EMC and Cisco with minority backers being VMware and Intel.

It is like other joint ventures, for example those commonly seen in the airline industry (e.g. a transportation utility) where carriers pool resources, such as SkyTeam, whose members include Delta, which had a JV with Air France, owner of KLM, which had an antitrust immunity JV with Northwest (now being digested by Delta).

These joint ventures can range from simple marketing alliances, like you see with EMC programs such as their Select program, to more formal OEM or ownership arrangements, as is the case with VMware and RSA, to this new model for Acadia.

An airline analogy may not be the most appropriate, yet there are some interesting similarities, not least of which is that air carriers rely on information systems and technologies provided by members of this coalition among others. There is also a correlation in that joint ventures are about streamlining and creating a seamless end to end customer experience. That is, give customers enough choice and options, keep them happy, take out the complexities and hopefully some cost, and with customer control come revenue and margin or profits.

Certainly there are opportunities to streamline and not just simply cut corners; perhaps that’s another area of analogy with the airlines, where there is a current focus on cutting and nickel and diming for services. Hopefully Acadia and VCE are not just another example of vendors getting together around the campfire to sing Kumbaya in the name of increasing customer adoption, cost cutting or putting a marketing spin on how to sell more to customers for account control.

Now, with all due respect to the individual companies and personnel, at least in this iteration it is not so much about the technology or packaging. Likewise, it is also not just about bundling, integration and testing (important as they are), as we have seen similar solutions before.

Rather, I think this has the potential for changing the way server, storage and networking hardware along with IRM and virtualization software are sold into organizations, for the better or worse.

What I'm watching is how Acadia and their principal backers can navigate the channel maze and ultimately the customer maze to sell a cross technology domain solution. For example, will a sales call require six to fourteen legs (e.g. one person is a two legged call, for those not up on sales or vendor lingo) with a storage, server, networking, VMware, RSA, Ionix and services representative?

Or, can a model to drive down the number of people or product specialist involved in a given sales call be achieved leveraging people with cross technology domain skills (e.g. someone who can speak server and storage hardware and software along with networking)?

Assuming Acadia and VCE vblocks address product integration issues, I see the bigger issue as being streamlining the sales process (including compensation plans) along with how partners are dealt with not to mention customers.

How will the sales pitch be made to the Cisco network people at VARs or customer sites, or to the storage, server or VMware teams, or all of the above?

What about the others?
Cisco has relationships with Dell, HP, IBM, Microsoft and Oracle/Sun among others, and it will be stepping on partner toes even more than when it launched the UCS earlier this year. EMC for its part is fairly diversified and not as subservient to IBM; however, it has a history of partnering with Dell, Oracle and Microsoft among others.

VMware has a smaller investment and is thus more in the wings, as is Intel, given that both have large partnerships with Dell, HP, IBM and Microsoft. Microsoft is of interest here because on one front the bulk of all servers virtualized into VMware VMs are Windows based.

On the other hand, Microsoft has their own virtualization hypervisor, HyperV, that depending upon how you look at it could be a competitor of VMware or simply a nuisance. I'm of the mindset that it is still too early to judge this game on the first round, which VMware has won. Keep in mind history, such as the desktop and browser wars that Microsoft lost in the first round only to come back strong later. This move could very well invigorate Microsoft, or perhaps Oracle and Citrix among others.

Now this is far from the first time that we have seen alliances, coalitions, marketing or sales promotion cross technology vendor clubs in the industry let alone from the specific vendors involved in this announcement.

One that comes to mind was 3COM's failed attempt in the late 90s to become the first traditional networking vendor to get into SANs, many years before Cisco could spell SAN, let alone incubated their Andiamo startup. The 3COM initiative, which was cancelled due to financial issues literally on the eve of rollout, was to include the likes of STK (pre-Sun), Qlogic, Anchor (people were still learning how to spell Brocade), Crossroads (FC to SCSI routers for tape), Legato (pre-EMC), DG CLARiiON (pre-EMC) and MTI (sold their patents to EMC, became a reseller, now defunct), along with some others slated to jump on the bandwagon.

Let's also not forget that among the traditional networking market vendors, Cisco is the $32B giant and all of the others, including 3Com, Brocade, Broadcom, Ciena, Emulex, Juniper and Qlogic, are the seven plus dwarfs. However, keep in mind the $23B USD networking vendor Huawei, which is growing at a 45% annual rate.

I would keep an eye on AMD, Brocade, Citrix, Dell, Fujitsu, HP, Huawei, Juniper, Microsoft, NetApp, Oracle/Sun, Rackable and Symantec among many others for similar joint venture or marketing alliances.

Some of these have already surfaced with Brocade and Oracle sharing hugs and chugs (another sales term referring to alliance meetings over beers or shots).

Also keep in mind that VMware has a large software (customer business) footprint deployed on HP with Intel (and AMD) servers.

Oh, and those VMware based VMs running on HP servers also just happen to be hosting in the neighborhood of 80% or more Windows based guest operating systems. I would say it's game on time.

When I say it's game on time, I don't think VMware is brash enough to cut HP (or others) off, forcing them to move to Microsoft for virtualization. However, the game is about control: control of technology stacks and partnerships, control of VARs, integrators and the channel, as well as control of customers.

If you cannot tell, I find this topic fun and interesting.

Those who only know me from servers often ask when I learned about networking, to which I say check out one of my books (Resilient Storage Networks, Elsevier). Meanwhile, others who know me from storage ask when I learned about or got into servers, to which I respond: about 28 years ago, when I worked in IT as the customer.

Bottom line on Acadia, vblocks and VCE for now: I like the idea of a unified and bundled solution as long as it is open and flexible.

On the other hand, I have many questions and am even skeptical in some areas, including how this plays out for Cisco and EMC in terms of whether it will be a unifier or a polarizer causing market fragmentation.

For some this is or will be déjà vu, back to the future; for others it is a new, exciting and revolutionary approach; while for still others it will be new fodder for smack talk!

More to follow soon.

Cheers gs


I/O Virtualization (IOV) Revisited

Is I/O Virtualization (IOV) a server topic, a network topic, or a storage topic (See previous post)?

Like server virtualization, IOV involves servers, storage, network, operating system, and other infrastructure resource management areas and disciplines. The business and technology value proposition or benefits of converged I/O networks and I/O virtualization are similar to those for server and storage virtualization.

Additional benefits of IOV include:

  • Doing more with the resources (people and technology) that already exist, or reducing costs
  • A single interconnect (or a pair for high availability) for networking and storage I/O
  • Reduced power, cooling, floor space, and other green efficiency benefits
  • Simplified cabling and reduced complexity for server network and storage interconnects
  • Boosted server performance by making the most of I/O or mezzanine slots
  • Reduced I/O and data center bottlenecks
  • Rapid re-deployment to meet changing workload and I/O profiles of virtual servers
  • Scaling of I/O capacity to meet high-performance and clustered application needs
  • Leveraging of common cabling infrastructure and physical networking facilities

Before going further, let's take a step backward for a few moments.

To say that I/O and networking demands and requirements are increasing is an understatement. The amount of data being generated, copied, and retained for longer periods of time is elevating the importance of the role of data storage and infrastructure resource management (IRM). Networking and input/output (I/O) connectivity technologies (figure 1) tie together facilities, servers, storage, tools for measurement and management, and best practices on a local and wide area basis to enable an environmentally and economically friendly data center.

TIERED ACCESS FOR SERVERS AND STORAGE
There is an old saying that the best I/O, whether local or remote, is an I/O that does not have to occur. I/O is an essential activity for computers of all shapes, sizes, and focus to read and write data in and out of memory (including external storage) and to communicate with other computers and networking devices. This includes communicating on a local and wide area basis for access to or over Internet, cloud, XaaS, or managed services providers such as shown in figure 1.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 1 The Big Picture: Data Center I/O and Networking

The challenge of I/O is that some form of connectivity (logical and physical), along with associated software, is required, and that time delays occur while waiting for reads and writes to complete. I/O operations that are closest to the CPU or main processor should be the fastest and occur most frequently for access to main memory using internal local CPU to memory interconnects. In other words, fast servers or processors need fast I/O, in terms of both low latency I/O operations and bandwidth capabilities.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 2 Tiered I/O and Networking Access

Moving out and away from the main processor, I/O remains fairly fast with distance but is more flexible and cost effective. An example is the PCIe bus and I/O interconnect shown in Figure 2, which is slower than processor-to-memory interconnects but is still able to support attachment of various device adapters with very good performance in a cost effective manner.

Farther from the main CPU or processor, various networking and I/O adapters can attach to PCIe, PCIx, or PCI interconnects for backward compatibility to support various distances, speeds, types of devices, and cost factors.

In general, the faster a processor or server is, the more prone to a performance impact it will be when it has to wait for slower I/O operations.

Consequently, faster servers need better-performing I/O connectivity and networks. Better performing means lower latency, more IOPS, and improved bandwidth to meet application profiles and types of operations.
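As a back-of-the-envelope illustration of why faster servers feel slow I/O more acutely (a sketch with made-up numbers, not a benchmark):

```python
# Illustrative only: a job whose compute must wait (serially) for its I/O.
def elapsed(cpu_s, io_s):
    """Total elapsed time for the job."""
    return cpu_s + io_s

def io_fraction(cpu_s, io_s):
    """Share of elapsed time spent waiting on I/O."""
    return io_s / elapsed(cpu_s, io_s)

# Doubling processor speed halves CPU time, but the I/O wait is unchanged,
# so I/O grows from a third to half of total elapsed time: the faster the
# server, the larger the share of time it sits waiting on the same I/O.
before = io_fraction(10.0, 5.0)   # slower CPU: I/O is ~33% of elapsed time
after = io_fraction(5.0, 5.0)     # faster CPU: I/O is now 50%
```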

Peripheral Component Interconnect (PCI)
Having established that computers need to perform some form of I/O to various devices, at the heart of many I/O and networking connectivity solutions is the Peripheral Component Interconnect (PCI) interface. PCI is an industry standard that specifies the chipsets used to communicate between CPUs and memory and the outside world of I/O and networking device peripherals.

Figure 3 shows an example of multiple servers or blades each with dedicated Fibre Channel (FC) and Ethernet adapters (there could be two or more for redundancy). Simply put the more servers and devices to attach to, the more adapters, cabling and complexity particularly for blade servers and dense rack mount systems.
PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 3 Dedicated PCI adapters for I/O and networking devices

Figure 4 shows an example of a PCI implementation including various components such as bridges, adapter slots, and adapter types. PCIe leverages multiple serial unidirectional point to point links, known as lanes, in contrast to traditional PCI, which used a parallel bus design.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)

Figure 4 PCI IOV Single Root Configuration Example

In traditional PCI, bus width varied from 32 to 64 bits; in PCIe, the number of lanes combined with PCIe version and signaling rate determine performance. PCIe interfaces can have 1, 2, 4, 8, 16, or 32 lanes for data movement, depending on card or adapter format and form factor. For example, PCI and PCIx performance can be up to 528 MB per second with a 64 bit, 66 MHz signaling rate, and PCIe is capable of over 4 GB per second (e.g., 32 Gbit per second) in each direction using 16 lanes for high-end servers.
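The lane arithmetic behind the 16-lane figure can be checked with a few lines (first-generation PCIe rates assumed, which is what matches the roughly 4 GB per second per direction number above):

```python
# PCIe 1.x per-direction bandwidth: 2.5 GT/s per lane with 8b/10b encoding,
# leaving 2.0 Gbit/s of payload per lane, i.e. 250 MB/s per lane per direction.
def pcie_gen1_mb_per_s(lanes):
    payload_bits = 2.5e9 * 8 / 10      # 8b/10b leaves 80% as payload
    return lanes * payload_bits / 8 / 1e6

x16 = pcie_gen1_mb_per_s(16)   # 4000 MB/s, i.e. about 4 GB/s (32 Gbit/s)
```

Later PCIe generations raise the per-lane signaling rate, so the same lane count delivers proportionally more bandwidth.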

The importance of PCIe and its predecessors is a shift from multiple vendors’ different proprietary interconnects for attaching peripherals to servers. For the most part, vendors have shifted to supporting PCIe or early generations of PCI in some form, ranging from native internal on laptops and workstations to I/O, networking, and peripheral slots on larger servers.

The most current version of PCI, as defined by the PCI Special Interest Group (PCISIG), is PCI Express (PCIe). Backwards compatibility exists by bridging previous generations, including PCIx and PCI, off a native PCIe bus or, in the past, bridging a PCIe bus to a PCIx native implementation. Beyond speed and bus width differences for the various generations and implementations, PCI adapters also are available in several form factors and applications.

Traditional PCI was generally limited to a main processor or was internal to a single computer, but current generations of PCI Express (PCIe) include support for PCI Special Interest Group (PCI) I/O virtualization (IOV), enabling the PCI bus to be extended to distances of a few feet. Compared to local area networking, storage interconnects, and other I/O connectivity technologies, a few feet is very short distance, but compared to the previous limit of a few inches, extended PCIe provides the ability for improved sharing of I/O and networking interconnects.

I/O VIRTUALIZATION (IOV)
On a traditional physical server, the operating system sees one or more instances of Fibre Channel and Ethernet adapters even if only a single physical adapter, such as an InfiniBand HCA, is installed in a PCI or PCIe slot. In the case of a virtualized server, for example Microsoft HyperV or VMware ESX/vSphere, the hypervisor will be able to see and share a single physical adapter, or multiple adapters for redundancy and performance, with guest operating systems. The guest systems see what appears to be a standard SAS, FC or Ethernet adapter or NIC using standard plug-and-play drivers.

Virtual HBAs or virtual network interface cards (NICs) and switches are, as their names imply, virtual representations of a physical HBA or NIC, similar to how a virtual machine emulates a physical machine with a virtual server. With a virtual HBA or NIC, physical NIC resources are carved up and allocated in the same way as virtual machines, but instead of hosting a guest operating system like Windows, UNIX, or Linux, what is presented is a SAS or FC HBA, FCoE converged network adapter (CNA) or Ethernet NIC.
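A minimal sketch of that carving, with invented class and method names (a conceptual illustration, not any hypervisor's actual API), might look like:

```python
# Sketch (assumed names) of a hypervisor carving one physical adapter
# into virtual NICs; each guest sees what looks like a standard adapter.
class PhysicalNIC:
    def __init__(self, bandwidth_mbps):
        self.bandwidth_mbps = bandwidth_mbps
        self.vnics = []   # (guest, allocated share) pairs

    def create_vnic(self, guest, share_mbps):
        """Allocate a slice of the physical adapter to a guest."""
        allocated = sum(share for _, share in self.vnics)
        if allocated + share_mbps > self.bandwidth_mbps:
            raise ValueError("physical adapter oversubscribed")
        self.vnics.append((guest, share_mbps))
        return (guest, share_mbps)   # presented to the guest as a plain NIC

nic = PhysicalNIC(10_000)            # one 10 GbE physical port
nic.create_vnic("guest-windows", 4_000)
nic.create_vnic("guest-linux", 4_000)
```

Real implementations handle sharing dynamically rather than with fixed slices, but the idea of many virtual adapters backed by one physical device is the same.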

In addition to virtual or software-based NICs, adapters, and switches found in server virtualization implementations, virtual LAN (VLAN), virtual SAN (VSAN), and virtual private network (VPN) are tools for providing abstraction and isolation or segmentation of physical resources. Using emulation and abstraction capabilities, various segments or sub networks can be physically connected yet logically isolated for management, performance, and security purposes. Some form of routing or gateway functionality enables various network segments or virtual networks to communicate with each other when appropriate security is met.

PCI-SIG IOV
PCI SIG IOV consists of a PCIe bridge attached to a PCI root complex along with an attachment to a separate PCI enclosure (Figure 5). Other components and facilities include address translation service (ATS), single-root IOV (SR IOV), and multiroot IOV (MR IOV). ATS enables performance to be optimized between an I/O device and a server's I/O memory management. Single root (SR IOV) enables multiple guest operating systems to access a single I/O device simultaneously, without having to rely on a hypervisor for a virtual HBA or NIC.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)

Figure 5 PCI SIG IOV

The benefit is that physical adapter cards, located in a physically separate enclosure, can be shared within a single physical server without having to incur any potential I/O overhead via virtualization software infrastructure. MR IOV is the next step, enabling a PCIe or SR IOV device to be accessed through a shared PCIe fabric across different physically separated servers and PCIe adapter enclosures. The benefit is increased sharing of physical adapters across multiple servers and operating systems, not to mention simplified cabling, reduced complexity and improved resource utilization.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 6 PCI SIG MR IOV

Figure 6 shows an example of a PCIe switched environment, where two physically separate servers or blade servers attach to an external PCIe enclosure or card cage for attachment to PCIe, PCIx, or PCI devices. Instead of the adapter cards physically plugging into each server, a high performance short-distance cable connects the server's PCI root complex via a PCIe bridge port to a PCIe bridge port in the enclosure device.

In figure 6, either SR IOV or MR IOV can take place, depending on specific PCIe firmware, server hardware, operating system, devices, and associated drivers and management software. For a SR IOV example, each server has access to some number of dedicated adapters in the external card cage, for example, InfiniBand, Fibre Channel, Ethernet, or Fibre Channel over Ethernet (FCoE) and converged networking adapters (CNA) also known as HBAs. SR IOV implementations do not allow different physical servers to share adapter cards. MR IOV builds on SR IOV by enabling multiple physical servers to access and share PCI devices such as HBAs and NICs safely with transparency.
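The sharing rule just described can be stated as a tiny predicate (a sketch with invented names, not real management software):

```python
# SR IOV: an adapter in the external card cage is dedicated to one server.
# MR IOV: an adapter may be shared safely across physical servers.
def can_attach(mode, owner, server):
    """mode: "SR" or "MR"; owner: the server the adapter is assigned to."""
    if mode == "MR":
        return True              # shared across physically separate servers
    return owner == server       # SR: only the owning server may attach
```

The difference looks trivial expressed this way, but enforcing it safely and transparently is exactly what the MR IOV firmware, drivers, and management software layers exist to do.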

The primary benefit of PCI IOV is to improve utilization of PCI devices, including adapters or mezzanine cards, as well as to enable performance and availability for slot-constrained and physical footprint or form factor-challenged servers. Caveats of PCI IOV are distance limitations and the need for hardware, firmware, operating system, and management software support to enable safe and transparent sharing of PCI devices. Examples of PCIe IOV vendors include Aprius, NextIO and Virtensys among others.

InfiniBand IOV
InfiniBand-based IOV solutions are an alternative to Ethernet-based solutions. Essentially, InfiniBand approaches are similar, if not identical, to converged Ethernet approaches including FCoE, the difference being InfiniBand as the network transport. InfiniBand HCAs with special firmware are installed into servers, which then see a Fibre Channel HBA and an Ethernet NIC presented by a single physical adapter. The InfiniBand HCA also attaches to a switch or director that in turn attaches to Fibre Channel SAN or Ethernet LAN networks.

The value of InfiniBand converged networks is that they exist today and can be used for consolidation as well as to boost performance and availability. InfiniBand IOV also provides an alternative for those who choose not to deploy Ethernet.

From a power, cooling, floor-space or footprint standpoint, converged networks can be used for consolidation to reduce the total number of adapters and the associated power and cooling. In addition to removing unneeded adapters without loss of functionality, converged networks also free up or allow a reduction in the amount of cabling, which can improve airflow for cooling, resulting in additional energy efficiency. An example of a vendor using InfiniBand as a platform for I/O virtualization is Xsigo.

General takeaway points include the following:

  • Minimize the impact of I/O delays to applications, servers, storage, and networks
  • Do more with what you have, including improving utilization and performance
  • Consider latency, effective bandwidth, and availability in addition to cost
  • Apply the appropriate type and tiered I/O and networking to the task at hand
  • I/O operations and connectivity are being virtualized to simplify management
  • Convergence of networking transports and protocols continues to evolve
  • PCIe IOV is complementary to converged networking, including FCoE
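
To make the latency-versus-bandwidth takeaway concrete, here is a back-of-the-envelope sketch; the I/O sizes, latencies, and link speed are illustrative assumptions, not measurements of any product:

```python
def effective_mbps(io_size_kb: float, latency_ms: float, link_mbps: float) -> float:
    """Effective throughput for serialized (queue depth 1) I/O.

    Each I/O pays the round-trip latency plus its wire time, so a
    low-latency path can deliver more useful bandwidth than a fatter
    but slower-responding one.
    """
    io_mb = io_size_kb / 1024.0
    wire_ms = io_mb / link_mbps * 1000.0   # time on the wire per I/O
    ios_per_sec = 1000.0 / (latency_ms + wire_ms)
    return ios_per_sec * io_mb

# 8 KB I/Os over a 1000 MB/s link: latency dominates the result.
# At 1 ms per I/O the link delivers only about 7.8 MB/s effective;
# cutting latency to 0.1 ms raises that to roughly 72 MB/s.
```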

Moving forward, a revolutionary new technology may emerge that finally eliminates the need for I/O operations. However, until that time, or at least for the foreseeable future, several things can be done to minimize the impact of I/O for local and remote networking as well as to simplify connectivity.

Learn more about IOV, converged networks, LAN, SAN, MAN and WAN related topics in Chapter 9 (Networking with your servers and storage) of The Green and Virtual Data Center (CRC) as well as in Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier).


Should Everything Be Virtualized?

Storage I/O trends

Should everything, that is, all servers, storage, and I/O along with facilities, be virtualized?

The answer, not surprisingly, is: it depends!

Denny Cherry (aka Mrdenny) over at ITKE did a great recent post about applications that should not be virtualized, particularly databases. On some of the points or themes we are on the same or similar page, while on others we differ, though not by very much.

Unfortunately, consolidation is commonly misunderstood to be the sole function or value proposition of server virtualization, given the first wave's focus on it. I agree that not all applications or servers should be consolidated (note that I did not say virtualized).

From a consolidation standpoint, the emphasis is often on boosting resource use to cut physical hardware and management costs by increasing the number of virtual machines (VMs) per physical machine (PM). Ironically, while VMs using VMware, Microsoft Hyper-V, or Citrix/Xen, among others, can leverage a common gold image for cloning or rapid provisioning, there are still separate operating system instances and applications that need to be managed for each VM.

Sure, VM tools from hypervisor and third-party vendors help with these tasks, and storage vendor tools including dedupe and thin provisioning help cut the data footprint of these multiple images. However, there are still multiple images to manage, which presents a future opportunity for further cost and management reduction (more on that in a different post).

Getting back on track:

Some reasons that not all servers or applications can be consolidated include, among others:

  • Performance, response time, latency and Quality of Service (QoS)
  • Security requirements including keeping customers or applications separate
  • Vendor support of software on virtual or consolidated servers
  • Financial where different departments own hardware or software
  • Internal political or organizational barriers and turf wars

On the other hand, for those who see virtualization as enabling agility and flexibility, that is, life beyond consolidation, there are many deployment opportunities for virtualization (note that I did not say consolidation). For some environments and applications, the emphasis can be on performance, quality of service (QoS), and other service characteristics where the ratio of VMs to PMs will be much lower, if not one to one. This is where Mrdenny and I are essentially on the same page, perhaps saying it differently, with plenty of caveats and clarifications needed, of course.

My view is that in life beyond consolidation, many more servers or applications can be virtualized than might otherwise be hosted by VMs (note that I did not say consolidated). For example, instead of a high number or ratio of VMs to PMs, a lower number, and for some workloads or applications even one VM per PM, can be leveraged with a focus beyond basic CPU use.

Yes, you read that correctly: I said, why not configure some VMs on a one-to-one basis with PMs!

Here's the premise: today's current wave or focus is on maximizing the number of VMs and/or reducing the number of physical machines to cut capital and operating costs for under-utilized applications and servers, thus the move to stuff as many VMs into/onto a PM as possible.

However, for those applications that cannot be consolidated as outlined above, there is still a benefit to having a VM dedicated to a PM. For example, dedicating a PM (blade, server, or perhaps core) allows performance and QoS aims to be met while still providing operational and infrastructure resource management (IRM), DCIM, or ITSM flexibility and agility.

Meanwhile, during busy periods an application such as a database server could have its own PM, yet during off-hours some other VM could be moved onto that PM for backup or other IRM/DCIM/ITSM activities. Likewise, by having the VM under the database with a dedicated PM, the application could be moved proactively for maintenance or, in a clustered HA scenario, to support BC/DR.

What can and should be done?
First and foremost, decide how many VMs per PM is the right number for your environment and for your different applications, to meet your particular requirements and business needs.

Identify various VM to PM ratios to align with different application service requirements. For example, some applications may run on virtual environments with a higher number of VMs to PMs, others with a lower number of VMs to PMs and some with a one VM to PM allocation.
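One simple way to express such tiers is a policy table mapping service classes to maximum VM-to-PM ratios. The classes, ratios, and VM counts below are purely illustrative assumptions for the sketch:

```python
import math

# Illustrative service tiers: value is the maximum VMs per PM.
TIERS = {
    "latency_critical": 1,    # e.g. a busy database: one VM per PM
    "business_standard": 8,
    "dev_test": 20,
}

def pms_needed(vm_counts: dict) -> int:
    """Physical machines needed, given VM counts per service tier."""
    return sum(math.ceil(vms / TIERS[tier]) for tier, vms in vm_counts.items())

# 4 critical + 40 standard + 60 dev/test VMs -> 4 + 5 + 3 = 12 PMs
```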

Certainly there will be, for different reasons, the need to keep some applications on a dedicated PM without introducing a hypervisor and VM; however, many applications and servers can benefit from virtualization (again, note that I did not say consolidation) for agility, flexibility, BC/DR, HA, and ease of IRM, assuming the costs work in your favor.

Additional general to do or action items include among others:

  • Look beyond CPU use also factoring in memory and I/O performance
  • Keep response time or latency in perspective as part of performance
  • More and fast memory are important for VMs as well as for applications including databases
  • High utilization may not show high hit rates or effectiveness of resource usage
  • Fast servers need fast memory, fast I/O and fast storage systems
  • Establish tiers of virtual and physical servers to meet different service requirements
  • See efficiency and optimization as more than simply driving up utilization to cut costs
  • Productivity and improved QoS are also tenets of an efficient and optimized environment
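
As a rough illustration of looking beyond CPU use, a weighted score across CPU, memory, and I/O can flag hosts that are busy in ways a CPU graph alone misses. The weights and utilization numbers here are purely illustrative assumptions:

```python
# Illustrative weights; a real assessment would be calibrated from
# measured baselines in your own environment.
WEIGHTS = {"cpu": 0.3, "memory": 0.35, "io": 0.35}

def resource_score(util: dict) -> float:
    """Weighted utilization score (0-100) across CPU, memory, and I/O."""
    return sum(WEIGHTS[k] * util[k] for k in WEIGHTS)

# A host at only 20% CPU but 85% memory and 70% I/O busy scores 60.25,
# hardly the idle consolidation candidate its CPU graph suggests.
```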

These are themes, among others, that are covered in chapters 3 (What Defines a Next-Generation and Virtual Data Center?), 4 (IT Infrastructure Resource Management), 5 (Measurement, Metrics, and Management of IT Resources), as well as 7 (Servers—Physical, Virtual, and Software) in my book “The Green and Virtual Data Center” (CRC), which you can learn more about here.

Welcome to life beyond consolidation, the next wave of desktop, server, storage and IO virtualization along with the many new and expanded opportunities!


Another StorageIO Appearance on Storage Monkeys InfoSmack

Following up on a previous appearance, I recently had the opportunity to participate in another Storage Monkeys InfoSmack podcast episode.

In the most recent podcast, discussions centered on the recent service disruption at the Microsoft/T-Mobile Sidekick cloud service, FTC blogger disclosure guidelines, whether Brocade is up for sale and who should buy them, and SNIA and SNW, among other topics.

Here are a couple of relevant links pertaining to topics discussed in this InfoSmack session.

If you are involved with servers, storage, I/O networking, virtualization and other related data infrastructure topics, check out Storage Monkeys and InfoSmack.

Cheers – gs

Greg Schulz – StorageIO, Author “The Green and Virtual Data Center” (CRC)

Clouds and Data Loss: Time for CDP (Commonsense Data Protection)?

Today SNIA released a press release pertaining to cloud storage, timed to coincide with SNW, where we can only presume vendors are talking about their cloud storage stories.

Yet the chatter on the coconut wire, along with various news (here and here and here) and social media sites, is asking how cloud storage and information service provider T-Mobile/Microsoft/Sidekick could lose customers' data.

Data loss is a dangerous phrase, after all, your data may still be intact somewhere, however if you cannot get to it when needed, that may seem like data loss to you.

There are many types of data loss, including loss of accessibility or availability along with outright loss. Let me clarify: loss of data availability or accessibility means that somewhere your data is still intact, perhaps off-line on a removable disk, optical media, or tape, or at another site on-line, near-line, or off-line; it is just that you cannot get to it yet. There is also real data loss, where your primary copy as well as your backup and archive data are lost, stolen, corrupted, or were never actually protected.

Clouds or managed service providers in general are getting beat up over this loss of access, availability, or actual data loss; however, before jumping on that bandwagon and pointing fingers at the service, how about taking a step back for a minute. Granted, given all of the cloud hype and the proliferation of managed service offerings on the web (excuse me, cloud), there is a bit of a lightning rod backlash or see-I-told-you-so response.

What's different about this story compared to prior disruptions at Amazon, Google, and Blackberry, among others, is that instead of access to information or services ranging from calendars to email, contacts, or other documents being disrupted for a period of time, it sounds as though data may actually have been lost.

Lost data, you say? How can you lose data? After all, there are copies of copies of data that have been snapshotted, replicated, and deduplicated across different tiers of storage, right?

Certainly anyone involved in data management or data protection is asking the question: why not go back to a snapshot copy, a replicated volume, or a backup copy on disk or tape?

Needless to say, finger-pointing aerobics are, or will be, in full swing. Instead, let's ask the question: is it time for CDP, as in Commonsense Data Protection?

Rather than pointing blame or spouting off about how bad clouds are, or arguing that they are getting an unfair shake and undue coverage, keep in mind that just because there might be a few bad ones, not all clouds are bad, recent outages notwithstanding.

I can think of many ways to actually lose data; however, totally losing data requires not a technology failure but something much simpler, and this is equally applicable to cloud, virtual, and physical data centers and storage environments, from the largest to the smallest to the consumer. It comes down to simple common sense and best practices: make copies of all data and keep extra copies around somewhere, with more frequent or recent data having copies readily available.

Some trends I'm seeing include, among others:

  • Low-cost craze leveraging free or near-free services and products
  • Cloud hype and cloud bashing, and the need to discuss the wide area in between those extremes
  • Renewed need for basic data protection including BC/DR, HA, backup and security
  • Opportunity to re-architect data protection in conjunction with other initiatives
  • Lack of adequate funding for continued and proactive data protection

Just to be safe, let's revisit some common data protection best practices:

  • Learn from mistakes, preferably during testing, with the aim of avoiding repeats
  • Most disasters in IT and elsewhere are the result of a chain of events not being contained
  • RAID is not a replacement for backup; it simply provides availability or accessibility
  • Likewise, mirroring or replication by itself is not a replacement for backup
  • Use point-in-time, RPO-based data protection such as snapshots or backup with replication
  • Maintain a master backup or gold copy that can be used to restore to a given point in time
  • Keep backups on another medium, and also protect the backup catalog or other configuration data
  • If using deduplication, make sure that indexes/dictionaries or metadata are also protected
  • Moving your data into the cloud is not a replacement for a data protection strategy
  • Test restoration of backed-up data both locally and from cloud services
  • Employ data protection management (DPM) tools for event correlation and analysis
  • Data stored in clouds needs to be part of a BC/DR and overall data protection strategy
  • Have an extra copy of data placed in clouds kept in an alternate location as part of BC/DR
  • Ask yourself: what will you do when your cloud data goes away (note, it's not if, it's when)
  • Combine multiple layers or rings of defense, and assume what can break will break
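
Several of these practices come down to actually verifying that a copy matches what it claims to protect. Here is a minimal sketch (file paths hypothetical) comparing checksums between a primary file and its backup copy:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(primary: Path, backup: Path) -> bool:
    """True only if the backup is byte-for-byte identical to the primary."""
    return file_digest(primary) == file_digest(backup)
```

Of course, a matching checksum proves only that the copy is intact today; test actual restores as well.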

Clouds should not be scary, and clouds do not magically solve all IT or consumer issues. However, they can be an effective tool when of high caliber and used as part of a total data protection strategy.

Perhaps this will be a wake-up call, a reminder that it is time to think beyond cost savings and shift back to basic data protection best practices. What good is the best or most advanced technology if you have less than adequate practices or policies? Bottom line: time for Commonsense Data Protection (CDP).

Ok, nuff said for now; I need to go and make sure I have a good removable backup in case my other local copies fail or I'm not able to get to my cloud copies!


Clouds are like Electricity: Dont be Scared

Clouds

IT clouds (compute, applications, storage, and services) are like electricity in that they can be scary or confusing to some while being enabling or a necessity to others, not to mention a polarizing force depending on where you sit or how you view them.

As a polarizing force, if you are a cloud crowd cheerleader or evangelist, you might view someone who does not subscribe or share your excitement, views or interpretations as a cynic.

On the other hand, if you are a skeptic, or perhaps scared, or even a cynic, you might view anyone who talks about clouds in general or non-specific terms as a cheerleader.

I have seen and experienced this electrifying polarization first hand, having been told by cloud crowd cheerleaders or evangelists that I don't like clouds and that I'm a cynic who does not know anything about clouds.

As a funny aside (at least I thought it was funny), I recently asked someone who gave me an earful while trying to convert me into a cloud believer whether they had read any of the chapters in my new book The Green and Virtual Data Center (CRC). The response was no, and I said, in effect, too bad, as in the book I talk about how clouds can be complementary to existing IT resources, serving as another tier of servers, storage, applications, facilities, and IT services.

On the other hand, and this might be funny for some of the cloud crowd, when I bring up tiered IT resources including servers, storage, applications, and facilities, as well as where or how clouds can fit to complement IT, I have been told by cynics or naysayers that I'm a cloud cheerleader.

Wow, talk about polarized sides!

Now, what about all those somewhere in the middle, the skeptics who might see value in IT clouds for different scenarios and may in fact already be using clouds (depending upon someone's definition)?

For those in the middle, whether they are vendors, VARs, media, press, analysts, consultants, IT professionals, investors, or others, it is easy to be misunderstood, misrepresented, or missed as an opportunity, perhaps even lamented by those at either of the two extremes (e.g., cloud crowd cheerleaders or true skeptic naysayers).

Time for some education: don't be scared; however, be careful!

When I worked for an electric power generating and transmission utility, an important lesson was not to be scared of electricity but to be educated: know what to do and what not to do in different situations, including in the actual power plant or substation. When in the plant, or at a substation I visited in support of the applications and systems I was developing or maintaining, the rules were: number one, don't touch certain things; number two, if you fall, don't grab anything. The fall may or may not hurt you, let alone the sudden stop wherever you land; however, grabbing something might kill you, and you may not be able to let go, further injuring yourself. This was a challenging thought, as we are taught to grab onto something when falling.

What does this have to do with clouds?

Don't grab on and hang on to something if you don't know what you are grabbing, especially if you don't have to.

The cloud crowd can be polarizing, in some ways acting as a lightning rod, drawing scorn, cynicism, skepticism, lambasting, or fun-poking, given some of the over-the-top hype around clouds today. Now granted, not all cloud evangelists, vendors, or cheerleaders deserve to be the brunt of this backlash within the industry; however, it comes with the territory.

I'm in the middle, as I pointed out above, when I talk with vendors, VARs, media, investors, and IT customers. Some I talk with are using clouds (perhaps not in compliance with some of the definitions). Some are looking at clouds to move problems or mask issues; others are curious yet skeptical, looking to see where or how they could use clouds to complement their environments. Yet others are scared, though maybe in the future they will be more open-minded as they become educated and see technologies evolve or shift beyond a fashionable trend.

So it's time for disclosure: I see IT clouds as complementary, able to co-exist with other IT resources (servers, storage, software). In essence, my view is that clouds are just another tier of IT resources to be used when and where applicable, as opposed to being a complete replacement or simply ignored.

My point is that cloud computing is another tier of traditional computing or servers, providing different performance, availability, capacity, economic, and management attributes compared to other traditional technology delivery vehicles. The same goes for storage, and the same for data centers or hosting sites in general. This also applies to application services, in that cloud web, email, expense, sales, CRM, ERP, office, or other applications are a tier of those same implementations that may exist in a traditional environment. After all, legacy, physical, virtual, grid, and cloud IT data centers all have something in common: they rely on physical servers, storage, networks, software, metrics, and management involving people, processes, and best practices.

Now back to disclosure: I like clouds; however, I'm not a cloud cheerleader. I'm a skeptic at times of some over-the-top hype, yet I also personally use some cloud services and technologies, and I advise others to leverage cloud services when or where applicable to complement, co-exist with, and help enable a green and virtual data center and information factory.

To the cloud crowd cheerleaders: too bad if I don't line up with all of your belief systems, or if you perceive me as raining on your parade by being a skeptic, or what you might think of as a cynic and non-believer, even though I use clouds myself.

Likewise, to the true cynics (not skeptics) or naysayers: ease up, I'm not drinking the Kool-Aid of the cheerleaders and evangelists, or at least not in large, excessive binge doses. I agree that clouds are not the solution to every IT issue, regardless of what your definition of a cloud happens to be.

To everyone else, whether you are in the minority or the majority that does not fall into one of the two above groups, I have this to say.

Don't be afraid, don't be scared of clouds. Learn to navigate your way around and through the various technologies, techniques, products, and services, and identify where they might complement and enable a flexible, scalable, and resilient IT infrastructure.

Take some time to listen and learn; become educated on the different types of clouds (public, private, services, products, architectures, or marketecture), their attributes (compute, storage, applications, services, cost, availability, performance, protocols, functionality), and their value propositions.

Look into how cloud technologies and techniques might complement your existing environment to meet specific business objectives. You might find there are fits, you might find there are not; however, have a look and do some research so that you can at least hold your ground if storm clouds roll in.

After all, clouds are just another tier of IT resources to add to your tool box enabling more efficient and effective IT services delivery. Clouds do not have to be the all or nothing value proposition that often end up in discussions due to polarized extreme views and definitions or past experiences.

Look at it this way: IT relies on electricity; however, electricity needs to be understood and respected, not to mention used in effective ways. You can be scared of electricity, you can be cavalier around it, or it can be part of your environment and an enabler, as long as you know when, where, and how to use it, not to mention when not to use it.

So the next time you see a cloud crowd cheerleader, give them a hug, a pat on the back, an atta boy or atta girl, as they are just doing their jobs, perhaps even following their beliefs, and in the line of duty taking a lot of heat from the industry in the pursuit of their work.

On the other hand, the cynics and naysayers may in fact be using clouds already, perhaps not under the strict definition of some of the chieftains of the cloud crowd.

To everyone else: don't worry, don't be scared of the clouds. Instead, focus on your business and your IT issues, and look at the various tiers of technologies that can serve as enablers in a cost-effective manner.


StorageIO in the news

StorageIO is regularly quoted and interviewed in various industry and vertical market venues and publications both on-line and in print on a global basis. The following is coverage, perspectives and commentary by StorageIO on IT industry trends including servers, storage, I/O networking, hardware, software, services, virtualization, cloud, cluster, grid, SSD, data protection, Green IT and more.

Realizing that some prefer blogs to webs to twitters to other venues, here are some recent links, among others, to media coverage and comments by me on different topics, which can be found at www.storageio.com/news.html:

  • Virtualization Review: Comments on Clouds, Virtualization and Cisco move into servers – July 2009
  • SearchStorage: Comments on Storage Resource Management (SRM) and related tools – July 2009
  • SearchStorage: Comments on flash SSD – July 2009
  • SearchDataBackup: Comments on Data backup reporting tools’ trends – July 2009
  • SearchServerVirtualization: Comments on Hyper-V R2 matches VMware with 64-processor support – July 2009
  • SearchStorage: Comments on HP buying IBRIX for clustered and Cloud NAS – July 2009
  • Enterprise Storage Forum: Comments on HP buying IBRIX for clustered and Cloud NAS – July 2009
  • eWeek: Comments on NetApp's next moves after DDUP and EMC – July 2009
  • Enterprise Storage Forum: Comments on NetApp's next moves after DDUP and EMC – July 2009
  • SearchStorage: Comments on EMC buying Data Domain, NetApp's next moves – July 2009
  • SearchVirtualization: Comments on Microsoft Hyper-V features and VMware – July 2009
  • SearchITchannel: Comments on social media for business – June 2009
  • SearchSMBstorage: Comments on Storage Resource Management (SRM) for SMBs – June 2009
  • Enterprise Storage Forum: Comments on IT Merger & Acquisition activity – June 2009
  • Evolving Solutions: Comments on Storage Consolidation, Networking & Green IT – June 2009
  • Enterprise Storage Forum: Comments on EMC letter to DDUP – June 2009
  • SearchStorage: Comments on best practices for effective thin provisioning – June 2009
  • Processor: Comments on Cloud computing, SaaS and SOAs – June 2009
  • Serverwatch: Comments in How EMC’s World Pulls the Data Center Together – June 2009
  • Processor: Comments on Virtual Security Is No Walk In The Park – May 2009
  • SearchStorage: Comments on EPA launching Green Storage specification – May 2009
  • SearchStorage: Comments on Storage Provisioning Tools – May 2009
  • Enterprise Systems Journal: Comments on Tape: The Zombie Technology – May 2009
  • Enterprise Storage Forum: Comments on Oracle Keeping Sun Storage Business – May 2009
  • IT Health Blogging: Discussion about iSCSI vs. Fibre Channel for Virtual Environments – May 2009
  • IT Business Edge: Discussion about IT Data Center Futures – May 2009
  • IT Business Edge: Comments on Tape being a Green Technology – April 2009
  • Big Fat Finance Blog: Quoted in story about Green IT for Finance Operations – April 2009
  • SearchStorage: Comments on FLASH and SSD Storage – April 2009
  • SearchStorage AU: Comments on Data Classification – April 2009
  • IT Knowledge Exchange: Comments on FCoE and Converged Networking Coming Together – April 2009
  • SearchSMBStorage: Comments on Data Deduplication for SMBs – April 2009
  • SearchSMBStorage: Comments on Blade Storage for SMBs – April 2009
  • SearchStorage: Comments on MAID technology remaining underutilized – April 2009
  • SearchDataCenter: Closing the green gap: Expanding data centers with environmental benefits – April 2009
  • ServerWatch: Comments on What’s Selling In the Data Storage Market? – April 2009
  • ServerWatch: Comments on Oracle Buys Sun: The Consequences – April 2009
  • SearchStorage: Comments on Tiered Storage – April 2009
  • SearchStorage: Comments on Data Classification for Storage Managers – April 2009
  • wsradio.com Interview: Closing the Green Gap
  • IT Knowledge Exchange: Comments on FCoE eco-system maturing – April 2009
  • Internet Revolution: Comments on the Pre-mature death of the disk drive – April 2009
  • Enterprise Storage Forum: Comments on EMC V-MAX announcement – April 2009
  • MSP Business Journal: Greg Schulz named an Eco-Tech Warrior – April 2009
  • Storage Magazine: Comments on Power-smart disk systems – April 2009
  • Storage Magazine: Comments on Replication Alternatives – April 2009
  • StorageIO Blog: Comments and Tape as a Green Storage Medium – April 2009
  • Inside HPC: Recent Comments on Tape and Green IT – April 2009
  • Processor.com: Recent Comments on Green and Virtual – April 2009
  • SearchDataCenter: Interview: Closing the green gap: Expanding data centers with environmental benefits – April 2009
  • Enterprise Systems Journal: Recent Comments and Tips – March 2009
  • Computer Technology Review: Recent Comments on The Green and Virtual Data Center – March 2009
  • VMblog: Comments on The Green and Virtual Data Center – March 2009
  • Sys-con: Comments on The Green and Virtual Data Center – March 2009
  • Server Watch: Comments on IBM possibly buying Sun – March 2009
  • Bnet: Comments on IBM possibly buying Sun – March 2009
  • SearchStorage: Comments on Tiered Storage 101 – March 2009
  • SearchStorage: Comments – Cisco pushes into Servers March 2009
  • Enterprise Storage Forum: Comments – Cisco Entering Server Market March 2009
  • Enterprise Storage Forum: Comments – State of Storage Job Market – March 2009
  • SearchSMBStorage: Comments on SMB Storage Options – March 2009
  • Enterprise Storage Forum: Comments on Sun Proposes New Solid State Storage Spec – March 2009
  • Enterprise Storage Forum: Comments on Despite Economy, Storage Bargains Hard to Find – March 2009
  • TechWorld: Comments on Where to Stash Your Data – February 2009
  • ServerWatch: Green IT: Myths vs. Realities – February 2009
  • Byte & Switch: Going Green and the Economic Downturn – February 2009
  • CTR: Comments on Tape Hardly Being On Way Out – February 2009
  • Processor: Comments on SSD (FLASH and RAM) – February 2009
  • Internet News: Comments on Steve Wozniak joining SSD startup – February 2009
  • SearchServerVirtualization: Comments on I/O and Virtualization – February 2009
  • Technology Inc.: Comments on Data De-dupe for DR – February 2009
  • SearchStorage: Comments on NetApp SMB NAS – February 2009
  • Check out the Tips, Tools and White Papers, and News pages for additional commentary, coverage and related content or events.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Recent tips, videos, articles and more

It's been a busy year so far and there is still plenty more to do. Taking advantage of a short summer break, I'm getting caught up on some items, including putting up links to some of the recent articles, tips, reports, webcasts, videos and more that I have alluded to in recent posts. Realizing that some prefer blogs to webs to tweets to other venues, here are some links to recent articles, tips, videos, podcasts, webcasts, white papers and more that can be found on the StorageIO Tips, tools and White Papers pages.

Recent articles, columns, tips, white papers and reports:

  • ITworld: The new green data center: From energy avoidance to energy efficiency August 2009
  • SearchSystemsChannel: Comparing I/O virtualization and virtual I/O benefits July 2009
  • SearchDisasterRecovery: Top server virtualization myths in DR and BC July 2009
  • Enterprise Storage Forum: Saving Money with Green Data Storage Technology July 2009
  • SearchSMB ATE Tips: SMB Tips and ATE by Greg Schulz
  • SearchSMB ATE Tip: Tape library storage July 2009
  • SearchSMB ATE Tip: Server-based operating systems vs. PC-based operating systems June 2009
  • SearchSMB ATE Tip: Pros/cons of block/variable block dedupe June 2009
  • FedTech: At the Ready: High-availability storage hinges on being ready for a system failure May 2009
  • Byte & Switch Part XI – Key Elements For A Green and Virtual Data Center May 2009
  • Byte & Switch Part X – Basic Steps For Building a Green and Virtual Data Center May 2009
  • InfoStor: Technology Options for Green Storage April 2009
  • Byte & Switch Part IX – I/O, I/O, Its off to Virtual Work We Go: Networks role in Virtual Data Centers April 2009
  • Byte & Switch Part VIII – Data Storage Can Become Green: There are many steps you can take April 2009
  • Byte & Switch Part VII – Server Virtualization Can Save Costs April 2009
  • Byte & Switch Part VI – Building a Habitat for Technology April 2009
  • Byte & Switch Part V – Data Center Measurement, Metrics & Capacity Planning April 2009
  • zJournal Storage & Data Management: Tips for Enabling Green and Virtual Efficient Data Management March 2009
  • Serial Storage Wire (STA): Green and SASy = Energy and Economic, Effective Storage March 2009
  • SearchSystemsChannel: FAQs: Green IT strategies for solutions providers March 2009
  • Computer Technology Review: Recent Comments on The Green and Virtual Data Center March 2009
  • Byte & Switch Part IV – Virtual Data Centers Can Promote Business Growth March 2009
  • Byte & Switch Part III – The Challenge of IT Infrastructure Resource Management March 2009
  • Byte & Switch Part II – Building an Efficient & Ecologically Friendly Data Center March 2009
  • Byte & Switch Part I – The Green Gap – Addressing Environmental & Economic Sustainability March 2009
  • Byte & Switch Green IT and the Green Gap February 2009
  • GreenerComputing: Enabling a Green and Virtual Data Center February 2009
    Some recent videos and podcasts include:

  • bmighty.com The dark side of SMB virtualization July 2009
  • bmighty.com SMBs Are Now Virtualization’s “Sweet Spot” July 2009
  • eWeek.com Green IT is not dead, its new focus is about efficiency July 2009
  • SearchSystemsChannel FAQ: Using cloud computing services opportunities to get more business July 2009
  • SearchStorage FAQ guide – How Fibre Channel over Ethernet can combine networks July 2009
  • SearchDataCenter Business Benefits of Boosting Web hosting Efficiency June 2009
  • SearchStorageChannel Disaster recovery services for solution providers June 2009
  • The Serverside The Changing Dynamic of the Data Center April 2009
  • TechTarget Virtualization and Consolidation for Agility: Intel's Xeon Processor 5500 series May 2009
  • Intel Reduce Energy Usage while Increasing Business Productivity in the Data Center May 2009
  • WSRadio Closing the green gap and shifting towards an IT efficiency and productivity April 2009
  • bmighty.com July 2009
  • Check out the Tips, Tools and White Papers, and News pages for more commentary, coverage and related content or events.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Clarifying Clustered Storage Confusion

    Clustered storage can be block based (iSCSI or Fibre Channel) or file based (NAS using NFS, CIFS or a proprietary file system). Clustered storage can also be found in virtual tape library (VTL) solutions, including dedupe, along with other storage solutions such as those for archiving, cloud, medical or other specialized grids.

    Recently in the IT and data storage specific industry, there has been a flurry of merger and acquisition (M&A) activity (here and here) along with new product enhancements or announcements around clustered storage. For example, HP bought clustered file system vendor IBRIX, complementing its earlier acquisitions of another clustered file system vendor (PolyServe) a few years ago and of iSCSI block clustered storage software vendor LeftHand earlier this year. Another recent acquisition is LSI buying clustered NAS vendor ONstor, not to mention Dell buying iSCSI block clustered storage vendor EqualLogic about a year and a half ago, among other vendor acquisitions or announcements involving storage and clustering.

    Where the confusion enters into play is that the term cluster means many things to different people, and even more so when clustered storage is combined with NAS or file-based storage. For example, clustered NAS may imply a clustered file system when in reality a solution may only be multiple NAS filers, NAS heads, controllers or storage processors configured for availability or failover.

    What this means is that an NFS or CIFS file system may only be active on one node at a time; however, in the event of a failover, the file system shifts from one NAS hardware device (e.g., NAS head or filer) to another. On the other hand, a clustered file system enables an NFS, CIFS or other file system to be active on multiple nodes (e.g., NAS heads, controllers, etc.) concurrently. The concurrent access may be small random reads and writes, for example supporting a popular website or file serving application, or parallel reads or writes to a large sequential file.
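    The distinction above can be sketched in a few lines of code. This is a deliberately simplified illustration of the two access models, not any vendor's implementation; the class and node names are made up for the example.

```python
# Illustrative sketch: failover-style clustered NAS vs. a clustered file
# system. In the failover model a file system is active on exactly one
# node at a time; in the clustered file system model the same file system
# is active on all surviving nodes concurrently.

class FailoverNAS:
    """File system active on one NAS head; failover moves ownership."""
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.active = self.nodes[0]          # only one head serves the FS

    def serving_nodes(self):
        return [self.active]

    def fail(self, node):
        # On failure, the file system shifts to a surviving NAS head.
        if node == self.active:
            survivors = [n for n in self.nodes if n != node]
            self.active = survivors[0]


class ClusteredFS:
    """Same file system active on multiple nodes concurrently."""
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def serving_nodes(self):
        return list(self.nodes)             # every node can serve I/O

    def fail(self, node):
        self.nodes.remove(node)             # remaining nodes keep serving


failover = FailoverNAS(["headA", "headB"])
clustered = ClusteredFS(["node1", "node2", "node3"])
print(failover.serving_nodes())    # ['headA'] - one active head
print(clustered.serving_nodes())   # all three nodes serve concurrently
failover.fail("headA")
print(failover.serving_nodes())    # ['headB'] - ownership moved on failover
```

    In other words, both configurations survive a node failure, but only the clustered file system delivers concurrent multi-node access before (and after) the failure.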

    Clustered storage is no longer exclusive to the confines of high-performance sequential and parallel scientific computing or ultra large environments. Small files and I/O (read or write), including meta-data information, are also being supported by a new generation of multipurpose, flexible, clustered storage solutions that can be tailored to support different applications workloads.

    There are many different types of clustered and bulk storage systems. Clustered storage solutions may be block (iSCSI or Fibre Channel), NAS or file serving, virtual tape library (VTL), or archiving and object-or content-addressable storage. Clustered storage in general is similar to using clustered servers, providing scale beyond the limits of a single traditional system—scale for performance, scale for availability, and scale for capacity and to enable growth in a modular fashion, adding performance and intelligence capabilities along with capacity.

    For smaller environments, clustered storage enables modular pay-as-you-grow capabilities to address specific performance or capacity needs. For larger environments, clustered storage enables growth beyond the limits of a single storage system to meet performance, capacity, or availability needs.

    Applications that lend themselves to clustered and bulk storage solutions include:

    • Unstructured data files, including spreadsheets, PDFs, slide decks, and other documents
    • Email systems, including Microsoft Exchange Personal (.PST) files stored on file servers
    • Users’ home directories and online file storage for documents and multimedia
    • Web-based managed service providers for online data storage, backup, and restore
    • Rich media data delivery, hosting, and social networking Internet sites
    • Media and entertainment creation, including animation rendering and post processing
    • High-performance databases such as Oracle with NFS direct I/O
    • Financial services and telecommunications, transportation, logistics, and manufacturing
    • Project-oriented development, simulation, and energy exploration
    • Low-cost, high-performance caching for transient and look-up or reference data
    • Real-time performance including fraud detection and electronic surveillance
    • Life sciences, chemical research, and computer-aided design

    Clustered storage solutions go beyond meeting the basic requirements of supporting large sequential parallel or concurrent file access. Clustered storage systems can also support random access of small files for highly concurrent online and other applications. Scalable and flexible clustered file servers that leverage commonly deployed servers, networking, and storage technologies are well suited for new and emerging applications, including bulk storage of online unstructured data, cloud services, and multimedia, where extreme scaling of performance (IOPS or bandwidth), low latency, storage capacity, and flexibility at a low cost are needed.

    The bandwidth-intensive and parallel-access performance characteristics associated with clustered storage are generally known; what is not so commonly known is the breakthrough to support small and random IOPS associated with database, email, general-purpose file serving, home directories, and meta-data look-up (Figure 1). Note that a clustered storage system, and in particular, a clustered NAS may or may not include a clustered file system.

    Clustered Storage Model: Source The Green and Virtual Data Center (CRC)
    Figure 1 – Generic clustered storage model (Courtesy of "The Green and Virtual Data Center" (CRC))

    More nodes, ports, memory, and disks do not guarantee more performance for applications. Performance depends on how those resources are deployed and how the storage management software enables those resources to avoid bottlenecks. For some clustered NAS and storage systems, more nodes are required to compensate for overhead or performance congestion when processing diverse application workloads. Other things to consider include support for industry-standard interfaces, protocols, and technologies.
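    A quick back-of-the-envelope model makes the point that node count alone does not determine performance. The numbers below (per-node throughput, overhead fraction) are purely hypothetical, chosen only to show how per-node coordination overhead can cause aggregate performance to flatten or even decline as nodes are added:

```python
# Hypothetical scaling model: each node delivers `per_node` MB/s, but loses
# an `overhead` fraction of its throughput for every additional peer it
# must coordinate with. Illustrates why more nodes do not guarantee more
# performance; real behavior depends on the storage management software.

def aggregate_throughput(nodes, per_node=1000.0, overhead=0.05):
    """Effective cluster throughput in MB/s for `nodes` nodes."""
    efficiency = max(0.0, 1.0 - overhead * (nodes - 1))
    return nodes * per_node * efficiency

for n in (1, 2, 4, 8, 16):
    # Throughput rises at first, then overhead dominates and it declines.
    print(n, "nodes:", round(aggregate_throughput(n)), "MB/s")
```

    With these made-up numbers, 8 nodes outperform 16: the overhead of coordinating a larger cluster eventually outweighs the added hardware, which is exactly why how the software deploys the resources matters more than the raw node count.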

    Scalable and flexible clustered file server and storage systems provide the potential to leverage the inherent processing capabilities of constantly improving underlying hardware platforms. For example, software-based clustered storage systems that do not rely on proprietary hardware can be deployed on industry-standard high-density servers and blade centers and utilize third-party internal or external storage.

    Clustered storage is no longer exclusive to niche applications or scientific and high-performance computing environments. Organizations of all sizes can benefit from ultra scalable, flexible, clustered NAS storage that supports application performance needs from small random I/O to meta-data lookup and large-stream sequential I/O that scales with stability to grow with business and application needs.

    Additional considerations for clustered NAS storage solutions include the following.

    • Can memory, processors, and I/O devices be varied to meet application needs?
    • Is there support for large file systems supporting many small files as well as large files?
    • What is the performance for small random IOPS and bandwidth for large sequential I/O?
    • How is performance enabled across different applications in the same cluster instance?
    • Are I/O requests, including meta-data look-up, funneled through a single node?
    • How does a solution scale as the number of nodes and storage devices is increased?
    • How disruptive and time-consuming is adding new or replacing existing storage?
    • Is proprietary hardware needed, or can industry-standard servers and storage be used?
    • What data management features, including load balancing and data protection, exist?
    • What storage interface can be used: SAS, SATA, iSCSI, or Fibre Channel?
    • What types of storage devices are supported: SSD, SAS, Fibre Channel, or SATA disks?

    As with most storage systems, it is not the total number of hard disk drives (HDDs), the quantity and speed of tiered-access I/O connectivity, the types and speeds of the processors, or even the amount of cache memory that determines performance. The performance differentiator is how a manufacturer combines the various components to create a solution that delivers a given level of performance with lower power consumption.

    To avoid performance surprises, be leery of performance claims based solely on speed and quantity of HDDs or the speed and number of ports, processors and memory. How the resources are deployed and how the storage management software enables those resources to avoid bottlenecks are more important. For some clustered NAS and storage systems, more nodes are required to compensate for overhead or performance congestion.

    Learn more about clustered storage (block, file, VTL/dedupe, archive), clustered NAS, clustered file system, grids and cloud storage among other topics in the following links:

    "The Many faces of NAS – Which is appropriate for you?"

    Article: Clarifying Storage Cluster Confusion
    Presentation: Clustered Storage: “From SMB, to Scientific, to File Serving, to Commercial, Social Networking and Web 2.0”
    Video Interview: How to Scale Data Storage Systems with Clustering
    Guidelines for controlling clustering
    The benefits of clustered storage

    Along with other material on the StorageIO Tips and Tools or portfolio archive or events pages.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Worried about IT M&A, here come the new startups!

    Storage I/O trends

    Late last year, I did a post (see here) countering the notion that there is a lack of innovation in IT, specifically around data storage. Recently I did a post about a Funeral for a Friend, not to mention yesterday's post about Summer marriages.

    For those who are concerned about a lack of innovation, or that consolidation will result in just a few big vendors, here's some food for thought. Those big vendors, in addition to internal organic growth, also grow by buying or merging with other vendors. Those other vendors emerge as startups: some grow, blossom and are bought; some make a decent business on their own; some are looking to be bought; some need to be bought; and some will see fire sales or liquidation, or simply close their doors, perhaps re-launching as a new company.

    With all the M&A activity that has taken place, and I'm sure (speculation only ;) ) there will be plenty more, here's a short and far from comprehensive list of some startups or companies you may not have heard of yet. There are additional ones still in deep stealth; some on the list are still in stealth yet are talking and letting information trickle out, thus only non-NDA information is shown here. In other words, you can find out about these via publicly available information and sources.

    Something that I have noticed, and talked with others in the industry about, is that this generation of startups, at least for now, is taking a far more low-key approach to launches than in the past. Gone, at least for now, are the dot-com era over-the-top announcements made, in some cases, before there was even a product shipping for actual customer production deployment. This crop or corps of startups is taking its time, leveraging the current economic situation to further incubate technologies and go-to-market strategies, not to mention minimizing the over-the-top VC funding we have seen in the past. Some of these may not appear to be storage related, and that would be correct: the common theme is data infrastructure technologies, from servers to storage to networking, spanning hardware, software and services among others.

    Disclosure Notice: None of these companies mentioned are nor have ever been clients of StorageIO. Why do I mention this, why not!

    Balesio – File compression solutions
    Box.net – Internet/web/cloud storage service with high availability and backup
    Cirrustore – Backup data protection tools
    Dataslide – Hard rectangular disk (HRD)
    Enclarity – Healthcare CRM and analysis tools
    Enstratus – Amazon cloud computing management tools
    Exludas – Multi-core optimization
    Firescope – CMDB data solutions
    Greenbytes – ZFS based storage management solutions
    Likewise – Open backup software for Macs/Linux/Windows
    Liquidcomputing – High density servers
    Maxiscale – Web infrastructure (Stealth)
    Metalogix – Archiving solutions
    Neptuny – Capacity Planning
    Netronome – Network and I/O optimization technology
    Newboundary – IT policy management and IRM tools
    Nexenta – ZFS-based storage management solutions
    Pergamumsystems – Archive solutions (Stealth)
    Pranah – SMB Storage vendor formerly known as Marner
    Procedo – Archiving and migration solutions
    Rebit – Backup and data protection solutions
    Rightscale – Amazon cloud computing management tools
    Rmsource – Cloud backup solutions
    RNAnetworks – Virtual memory management solutions
    Scale Computing – Clustered storage management software
    ScaleMP – Multi-core virtualization for scale out
    SiberSystems – Goodsync data protection solutions
    Sparebackup – Backup data protection solutions
    StorageFusion – Storage resource analysis
    Storspeed – NAS/NFS optimization solutions (Stealth)
    Sugarsync – Backup and data protection solutions
    Surgient – Cloud computing solutions
    Synology – SMB storage solutions
    TwinStrata – BC/DR analysis and assessment tools
    Vadium – Security and encryption tools
    Vembu – Backup data protection tools
    Versant – Object database management solutions
    Vipre – Security, data loss, data leak prevention
    VirtenSys – Virtual I/O and I/O virtualization (IOV)
    Vizrt – Video management software tools
    WhipTail – Flash SSD solutions
    Xenos – Archive and data footprint reduction solutions

    Links to the above along with many other companies including manufactures and vars can be found on the Interesting Links page at StorageIO.

    Food for thought for your summer technology picnic fun.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Green Storage is Alive and Well: ENERGY STAR Enterprise Storage Stakeholder Meeting Details

    While Green hype and green washing may be on the endangered species list if not already extinct, there are many things taking place to shift the focus from talking about being green to enabling and leveraging efficiency and optimization to boost productivity and enable business sustainability.

    The industry has seen and is seeing the shift from the initial green hype cycle of a few years ago to the more recent trough of disillusionment (or here) typically found with a post technology or trend hangover, to the current re-emergence, and growing awareness of the many different faces and facets of being green.

    Granted there has been some recent activity by the U.S. government to add new climate control legislation (e.g. HR2454 – Waxman/Markey) to build on previous clean air acts of the 1990s as well as those dating back to the 1970s and earlier.

    While the green gap (or here) still exists, with confusion among IT organizations that Green is only Green if and only if it is about reducing carbon footprints, there is a growing realization that there are many different faces or facets of being Green and efficient. For example, there is growing awareness that addressing power, cooling, floor space or footprint to enable sustained business growth, as well as enabling next-generation virtual, cloud and traditional forms of IT service enablement, has both economic and business benefits. That is: determining energy usage; shifting from energy avoidance to expanding and supporting energy efficiency initiatives; boosting productivity and doing more with what you have; and fitting into and growing within current or future constraints on available power, cooling, footprint/floor space, budget or manpower, all while improving service delivery to remain competitive. (Learn more in "The Green and Virtual Data Center" (CRC))
    The Green and Virtual Data Center Book

    Regardless of whether you are an eco-tech warrior or not, learning about and then closing the green gap, and understanding how shifting the focus toward efficiency has both economic and environmental benefits, helps break down some of the perceptions about what Green is or is not.

    One such activity is the U.S. EPA Energy Star program, which is as much about energy avoidance as it is about energy efficiency. You might be familiar with Energy Star logos on various consumer products around your home or office, as well as on laptops, notebooks, desktops and workstations. Recently the EPA released a new Energy Star specification for servers and is now working on one for enterprise storage. As part of the initiative, stakeholders, or those with an interest in data storage, are invited to participate in upcoming EPA working sessions to provide feedback and input on what is important to you.

    US EPA Energy Star wants and needs you!

    Here’s the message received from the EPA via their mailing list this past week (in italics below):

    Dear Enterprise Storage Stakeholder or Other Interested Party:

    Provided below are additional details regarding the ENERGY STAR® Enterprise Storage Stakeholder Meeting scheduled for Monday, July 20, 2009 in San Jose, CA.  The U.S. Environmental Protection Agency (EPA) plans to use this opportunity to review feedback on the ENERGY STAR Specification Framework document and discuss initial plans for a Draft 1 specification. A conference call line will be provided to stakeholders who are unable to participate in person.

    Date: Monday, July 20, 2009
    Time: 11:00 AM to 4:00 PM Pacific Time (lunch will be provided)
    Location: The Sainte Claire Hotel, 302 South Market St., San Jose, CA 95113, 408.295.2000, www.thesainteclaire.com
    Conference Call Phone: Provided with meeting registration

    EPA would like to thank the Storage Networking Industry Association (SNIA) for providing lunch, refreshments, and logistical support for the ENERGY STAR stakeholder meeting.

    For the convenience of meeting attendees, this event is being held in conjunction with the SNIA Technical Symposium being held July 20-23, 2009.

    For more information on this event visit:

    The Sainte Claire Hotel is offering a special room rate of $149/night for participants in the ENERGY STAR Stakeholder Meeting.  Rooms can be booked by following the link to the SNIA Technical Symposium Web site.

    Please note: Whether you plan to attend in person or via conference call, you must RSVP to storage@energystar.gov no later than Monday, July 13, 2009. Conference call information and a copy of presentation materials will be distributed to all registered attendees in advance of the meeting.

    As a reminder, stakeholders are encouraged to submit feedback on the ENERGY STAR Enterprise Storage Specification Framework to storage@energystar.gov no later than this Friday, July 3, 2009.

    The latest program documentation is available for download at www.energystar.gov/newspecs.

    If you have any questions please contact Steve Pantano, ICF International, at spantano@icfi.com or Andrew Fanara, US EPA, at fanara.andrew@epa.gov.

    Thank you for your continued support of ENERGY STAR!

    Learn more at www.energystar.gov

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved