Saving Money with Green IT: Time To Invest In Information Factories

There is a good and timely article titled Green IT Can Save Money, Too over at Business Week. Its topic and theme will be familiar to those who read this blog or the other content, articles, reports, books, white papers, videos, podcasts or in-person speaking and keynote sessions that I have done.

I posted a short version of this over there; here is the full version that would not fit in their comment section.

Short of calling it Green IT 2.0 or the perfect storm, there is a resurgence and, more importantly IMHO, a growing awareness of the many facets of Green IT, along with recognition that Green in general has an economic business sustainability aspect.

While the Green Gap and confusion still exist, that is, the difference between what people think or perceive and the actual opportunities or issues, growing awareness will close or at least narrow it. For example, when I regularly talk with IT professionals from organizations of various sizes and industry focus across the globe in diverse geographies and ask them about having to go green, the response is in the 7-15% range (these numbers are changing), with most believing that Green is only about carbon footprint.

On the other hand, when I ask them if they have power, cooling, floor space or other footprint constraints, including frozen or reduced budgets, recycling along with ewaste disposition or RoHS requirements, not to mention sustaining business growth without negatively impacting quality of service or customer experience, the response jumps to 65-75% (these numbers are changing) if not higher.

That is the essence of the green gap or disconnect!

Granted, carbon dioxide or CO2 reduction is important, along with NO2, water vapor and other related issues; however, there is also the need to do more with what is available, stretching resources and footprints to be more productive in a shrinking footprint. Keep in mind that there is no such thing as an information, data or processing recession, with all indicators pointing towards the need to move, manage and store larger amounts of data on a go-forward basis. Thus the need to do more within a given footprint or constraint, maximizing resources, energy, productivity and available budgets.

Innovation is the ability to do more with less at a lower cost without compromising quality of service or negatively impacting customer experience. Regardless of whether you are a manufacturer or a service provider, including in IT, by innovating with a diverse Green IT focus to become more efficient and optimized, the result is that your customers become more enabled and competitive.

By shifting from an avoidance model, where cost cutting or containment are the near-term tactical focus, to an efficiency and productivity model via optimization, net unit costs should be lowered while the overall service experience increases in a positive manner. This means treating IT as an information factory, one that needs investment in the people, processes and technologies (hardware, software, services) along with management metric indicator tools.

The net result is that environmental or perceived Green issues are addressed and self-funded via the investment in Green IT technology that boosts productivity (e.g. closing or narrowing the Green Gap). Thus, the environmental concerns that organizations have or need to address for different reasons, yet lack funding for, get addressed via funding to boost business productivity, which has tangible ROI characteristics similar to other lean manufacturing approaches.


Have a read over at Business Week about how Green IT Can Save Money, Too while thinking about how investing in IT infrastructure productivity (Information Factories) by becoming more efficient and optimized helps the business top and bottom line, not to mention the environment as well.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EPA Energy Star for Data Center Storage Update


Following up on a recent post about Green IT, energy efficiency and optimization for servers, storage and more, here are some additional thoughts and perspectives, along with industry activity, around the U.S. Environmental Protection Agency (EPA) Energy Star for Servers, Data Center Storage and Data Centers.

First a quick update: Energy Star for Servers is in place, with work now underway on expanding and extending beyond the first specification. Second, the Energy Star for Data Center Storage definition is well underway, including a recent workshop to refine the initial specification along with discussion of follow-on drafts.

Energy Star for Data Centers is also currently undergoing definition; it is focused more on macro or facility energy (notice I did not say electricity) efficiency, as opposed to the productivity or effectiveness that the Server and Storage specifications are working towards.

Among all of the different industry trade or special interest groups, at least on the storage front, the Storage Networking Industry Association (SNIA) Green Storage Initiative (GSI) and its Technical Work Groups (TWG) have been busily working for the past couple of years on taxonomies, metrics and other items in support of EPA Energy Star for Data Center Storage.

A challenge for SNIA along with others working on related material pertaining to storage and efficiency is the multi-role functionality of storage. That is, some storage simply stores data with little to no performance requirements while other storage is actively used for reading and writing. In addition, there are various categories, architectures not to mention hardware and software feature functionality or vendors with different product focus and interests.

Unlike servers, which are either on and doing work, or off or in a low-power mode, storage is either doing active work (e.g. moving data), storing in-active or idle data, or a combination of both. Hence for some, energy efficiency is about how much data can be stored in a given footprint with the least amount of power, known as an in-active or idle measurement.

On the other hand, storage efficiency is also about using the least amount of energy to produce the most amount of work or activity, for example IOPS or bandwidth per watt per footprint.

Thus the challenge, and the need for at least a two-dimensional model that looks at and reflects different types or categories of storage aligned to active or in-active (e.g. storing) data, enabling apples-to-apples rather than apples-to-oranges comparisons.

This is not all that different from how the EPA looks at motor vehicle categories such as economy cars, sport utility, work or heavy utility among others, when doing different types of work or when idle.
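The two-dimensional idea above can be sketched as a toy calculation. Note that the categories, names and numbers here are hypothetical illustrations, not anything from the EPA or SNIA work:

```python
# Hypothetical sketch of a two-dimensional storage efficiency model:
# active storage is rated on work per watt (IOPS per watt), while
# storage holding in-active or idle data is rated on capacity per
# watt (GB per watt).

def efficiency(category, capacity_gb, power_watts, iops=0):
    """Return the metric appropriate to the storage category."""
    if category == "active":
        return ("IOPS per watt", iops / power_watts)
    elif category == "idle":
        return ("GB per watt", capacity_gb / power_watts)
    raise ValueError("unknown category: " + category)

# Example systems (made-up numbers for illustration only)
primary = efficiency("active", capacity_gb=10_000, power_watts=500, iops=50_000)
archive = efficiency("idle", capacity_gb=100_000, power_watts=400)

print(primary)  # ('IOPS per watt', 100.0)
print(archive)  # ('GB per watt', 250.0)
```

The point of splitting the metric by category is that comparing the archive box to the primary box on either single axis would be an apples-to-oranges comparison.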

What does this have to do with servers and storage?

Simple: when a server powers down, where does its data go? That’s right, to a storage system using disk, ssd (RAM or flash), tape or optical for persistency. Likewise, when there is work to be done, where does the data get read into computer memory from, or written to? That’s right, a storage system. Hence the need to look at storage in a multi-role manner.

The storage industry is diverse with some vendors or products focused on performance or activity, while others on long term, low cost persistent storage for archive, backup, not to mention some doing a bit of both. Hence the nomenclature of herding cats towards a common goal when different parties have various interests that may conflict yet support needs of various customer storage usage requirements.

Figure 1 shows a simplified, streamlined storage taxonomy that has been put together by SNIA representing various types, categories and functions of data center storage. The green shaded areas are a good step in the right direction to simplify, yet move towards realistic and achievable benefits for storage consumers.


Figure 1 Source: EPA Energy Star for Data Center Storage web site document

The importance of the streamlined SNIA taxonomy is to help differentiate or characterize various types and tiers of storage (Figure 2) products, facilitating apples-to-apples comparisons instead of apples-to-oranges. For example, on-line primary storage needs to be looked at in terms of how much work or activity per energy footprint it delivers, which determines its efficiency.


Figure 2: Tiered Storage Example

On the other hand, storage for retaining large amounts of data that is in-active or idle for long periods of time should be looked at on a capacity per energy footprint basis. While final metrics are still being fleshed out, some examples could be active storage gauged by IOPS or work or bandwidth per watt of energy per footprint, while other storage for idle or inactive data could be looked at on a capacity per energy footprint basis.

What benchmarks or workloads should be used for simulating or measuring work or activity is still being discussed, with proposals coming from various sources. For example, the SNIA GSI TWG is developing measurements and discussing metrics, as have the Storage Performance Council (SPC) and SPEC among others, including the use of simulation tools such as IOmeter, VMware VMmark, TPC, Bonnie, or perhaps even Microsoft ESRP.

Tenets of Energy Star for Data Center Storage over time hopefully will include:

  • Reflective of different types, categories, price-bands and storage usage scenarios
  • Measure storage efficiency for active work along with in-active or idle usage
  • Provide insight for both storage performance efficiency and effective capacity
  • Baseline or raw storage capacity along with effective enhanced optimized capacity
  • Easy to use metrics with more in-depth background or disclosure information

Ultimately the specification should help IT storage buyers and decision makers to compare and contrast different storage systems that are best suited and applicable to their usage scenarios.

This means measuring work or activity per energy footprint at a given capacity and data protection level to meet service requirements along with during in-active or idle periods. This also means showing storage that is capacity focused in terms of how much data can be stored in a given energy footprint.

One thing that will be tricky, however, will be differentiating GBytes per watt in terms of capacity versus in terms of performance and bandwidth.


Stay tuned for more on Energy Star for Data Centers, Servers and Data Center Storage.

Ok, nuff said.

Cheers gs


I/O Virtualization (IOV) Revisited

Is I/O Virtualization (IOV) a server topic, a network topic, or a storage topic (See previous post)?

Like server virtualization, IOV involves servers, storage, network, operating system, and other infrastructure resource management areas and disciplines. The business and technology value proposition or benefits of converged I/O networks and I/O virtualization are similar to those for server and storage virtualization.

Additional benefits of IOV include:

    • Doing more with the resources (people and technology) that already exist, or reducing costs
    • Single (or pair for high availability) interconnect for networking and storage I/O
    • Reduction of power, cooling, floor space, and other green efficiency benefits
    • Simplified cabling and reduced complexity for server network and storage interconnects
    • Boosting server performance by maximizing I/O or mezzanine slots
    • Reducing I/O and data center bottlenecks
    • Rapid re-deployment to meet changing workload and I/O profiles of virtual servers
    • Scaling I/O capacity to meet high-performance and clustered application needs
    • Leveraging common cabling infrastructure and physical networking facilities

Before going further, let's take a step backward for a few moments.

To say that I/O and networking demands and requirements are increasing is an understatement. The amount of data being generated, copied, and retained for longer periods of time is elevating the importance of data storage and infrastructure resource management (IRM). Networking and input/output (I/O) connectivity technologies (figure 1) tie together facilities, servers, storage, tools for measurement and management, and best practices on a local and wide area basis to enable an environmentally and economically friendly data center.

TIERED ACCESS FOR SERVERS AND STORAGE
There is an old saying that the best I/O, whether local or remote, is an I/O that does not have to occur. I/O is an essential activity for computers of all shapes, sizes, and focus, allowing them to read and write data in and out of memory (including external storage) and to communicate with other computers and networking devices. This includes communicating on a local and wide area basis for access to or over the Internet, cloud, XaaS, or managed service providers such as shown in figure 1.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 1 The Big Picture: Data Center I/O and Networking

The challenge of I/O is that some form of connectivity (logical and physical), along with associated software, is required, together with time delays while waiting for reads and writes to occur. I/O operations that are closest to the CPU or main processor should be the fastest and occur most frequently for access to main memory using internal local CPU-to-memory interconnects. In other words, fast servers or processors need fast I/O, in terms of both low latency and I/O operations, along with bandwidth capabilities.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 2 Tiered I/O and Networking Access

Moving out and away from the main processor, I/O remains fairly fast with distance but is more flexible and cost effective. An example is the PCIe bus and I/O interconnect shown in Figure 2, which is slower than processor-to-memory interconnects but is still able to support attachment of various device adapters with very good performance in a cost effective manner.

Farther from the main CPU or processor, various networking and I/O adapters can attach to PCIe, PCIx, or PCI interconnects for backward compatibility to support various distances, speeds, types of devices, and cost factors.

In general, the faster a processor or server is, the more prone to a performance impact it will be when it has to wait for slower I/O operations.

Consequently, faster servers need better-performing I/O connectivity and networks. Better performing means lower latency, more IOPS, and improved bandwidth to meet application profiles and types of operations.
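A toy calculation illustrates the point (the numbers here are hypothetical): if only the compute portion of a workload gets faster while I/O wait stays fixed, the overall speedup is capped and I/O becomes the dominant share of runtime, an Amdahl's-law style effect.

```python
# Sketch of why faster servers magnify the cost of slow I/O: speed up
# the CPU portion 10x while I/O wait stays constant, and overall
# speedup falls well short of 10x (hypothetical numbers).

def runtime_ms(compute_ms, io_ms):
    """Total runtime is compute time plus I/O wait time."""
    return compute_ms + io_ms

base = runtime_ms(compute_ms=80, io_ms=20)        # 100 ms total
faster_cpu = runtime_ms(compute_ms=8, io_ms=20)   # 10x faster CPU, same I/O

speedup = base / faster_cpu
io_share = 20 / faster_cpu

print(speedup)   # roughly 3.6x overall, not 10x
print(io_share)  # I/O grows from 20% to roughly 71% of total runtime
```

This is why the faster the server, the more its effective throughput depends on lower-latency, higher-bandwidth I/O connectivity.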

Peripheral Component Interconnect (PCI)
Having established that computers need to perform some form of I/O to various devices, at the heart of many I/O and networking connectivity solutions is the Peripheral Component Interconnect (PCI) interface. PCI is an industry standard that specifies the chipsets used to communicate between CPUs and memory and the outside world of I/O and networking device peripherals.

Figure 3 shows an example of multiple servers or blades, each with dedicated Fibre Channel (FC) and Ethernet adapters (there could be two or more of each for redundancy). Simply put, the more servers and devices to attach to, the more adapters, cabling and complexity, particularly for blade servers and dense rack-mount systems.
PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 3 Dedicated PCI adapters for I/O and networking devices

Figure 4 shows an example of a PCI implementation including various components such as bridges, adapter slots, and adapter types. PCIe leverages multiple serial unidirectional point to point links, known as lanes, in contrast to traditional PCI, which used a parallel bus design.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)

Figure 4 PCI IOV Single Root Configuration Example

In traditional PCI, bus width varied from 32 to 64 bits; in PCIe, the number of lanes combined with the PCIe version and signaling rate determines performance. PCIe interfaces can have 1, 2, 4, 8, 16, or 32 lanes for data movement, depending on card or adapter format and form factor. For example, PCI and PCIx performance can be up to 528 MB per second with a 64 bit, 66 MHz signaling rate, while PCIe is capable of over 4 GB per second (e.g., 32 Gbit per second) in each direction using 16 lanes for high-end servers.
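As a back-of-the-envelope check of those PCIe figures, per-lane bandwidth follows from the signaling rate less the 8b/10b encoding overhead (10 bits on the wire for every 8 bits of data) used by PCIe 1.x and 2.0:

```python
# Approximate one-direction PCIe bandwidth. PCIe 1.x signals at
# 2.5 GT/s per lane and PCIe 2.0 at 5.0 GT/s, both with 8b/10b
# encoding, so usable data rate is signaling_rate * 8/10 per lane.

def pcie_bandwidth_mb_s(gt_per_s, lanes):
    """One-direction bandwidth in MB/s for a PCIe link."""
    data_bits_per_s = gt_per_s * 1e9 * lanes * 8 / 10  # strip 8b/10b overhead
    return data_bits_per_s / 8 / 1e6                   # bits -> bytes -> MB

print(pcie_bandwidth_mb_s(2.5, 1))   # 250.0  MB/s: one PCIe 1.x lane
print(pcie_bandwidth_mb_s(2.5, 16))  # 4000.0 MB/s: PCIe 1.x x16, the 4 GB/s above
print(pcie_bandwidth_mb_s(5.0, 16))  # 8000.0 MB/s: PCIe 2.0 x16
```

A x16 PCIe 1.x slot thus delivers about 4 GB per second in each direction, consistent with the figure quoted above.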

The importance of PCIe and its predecessors is a shift from multiple vendors’ different proprietary interconnects for attaching peripherals to servers. For the most part, vendors have shifted to supporting PCIe or early generations of PCI in some form, ranging from native internal on laptops and workstations to I/O, networking, and peripheral slots on larger servers.

The most current version of PCI, as defined by the PCI Special Interest Group (PCISIG), is PCI Express (PCIe). Backwards compatibility exists by bridging previous generations, including PCIx and PCI, off a native PCIe bus or, in the past, bridging a PCIe bus to a PCIx native implementation. Beyond speed and bus width differences for the various generations and implementations, PCI adapters also are available in several form factors and applications.

Traditional PCI was generally limited to a main processor or was internal to a single computer, but current generations of PCI Express (PCIe) include support for PCI Special Interest Group (PCISIG) I/O virtualization (IOV), enabling the PCI bus to be extended to distances of a few feet. Compared to local area networking, storage interconnects, and other I/O connectivity technologies, a few feet is a very short distance; but compared to the previous limit of a few inches, extended PCIe provides the ability for improved sharing of I/O and networking interconnects.

I/O VIRTUALIZATION (IOV)
On a traditional physical server, the operating system sees one or more instances of Fibre Channel and Ethernet adapters even if only a single physical adapter, such as an InfiniBand HCA, is installed in a PCI or PCIe slot. In the case of a virtualized server, for example Microsoft HyperV or VMware ESX/vSphere, the hypervisor will be able to see and share a single physical adapter, or multiple adapters for redundancy and performance, with guest operating systems. The guest systems see what appears to be a standard SAS, FC or Ethernet adapter or NIC using standard plug-and-play drivers.

Virtual HBAs or virtual network interface cards (NICs) and switches are, as their names imply, virtual representations of a physical HBA or NIC, similar to how a virtual machine emulates a physical machine. With a virtual HBA or NIC, physical adapter resources are carved up and allocated to virtual machines, but instead of hosting a guest operating system like Windows, UNIX, or Linux, a SAS or FC HBA, FCoE converged network adapter (CNA) or Ethernet NIC is presented.

In addition to virtual or software-based NICs, adapters, and switches found in server virtualization implementations, virtual LAN (VLAN), virtual SAN (VSAN), and virtual private network (VPN) are tools for providing abstraction and isolation or segmentation of physical resources. Using emulation and abstraction capabilities, various segments or sub networks can be physically connected yet logically isolated for management, performance, and security purposes. Some form of routing or gateway functionality enables various network segments or virtual networks to communicate with each other when appropriate security requirements are met.

PCI-SIG IOV
PCI SIG IOV consists of a PCIe bridge attached to a PCI root complex along with an attachment to a separate PCI enclosure (Figure 5). Other components and facilities include address translation services (ATS), single-root IOV (SR IOV), and multi-root IOV (MR IOV). ATS enables performance to be optimized between an I/O device and a server's I/O memory management. Single-root IOV (SR IOV) enables multiple guest operating systems to access a single I/O device simultaneously, without having to rely on a hypervisor for a virtual HBA or NIC.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)

Figure 5 PCI SIG IOV

The benefit is that physical adapter cards, located in a physically separate enclosure, can be shared within a single physical server without incurring any potential I/O overhead via virtualization software infrastructure. MR IOV is the next step, enabling a PCIe or SR IOV device to be accessed through a shared PCIe fabric across different physically separated servers and PCIe adapter enclosures. The benefit is increased sharing of physical adapters across multiple servers and operating systems, not to mention simplified cabling, reduced complexity and improved resource utilization.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 6 PCI SIG MR IOV

Figure 6 shows an example of a PCIe switched environment, where two physically separate servers or blade servers attach to an external PCIe enclosure or card cage for attachment to PCIe, PCIx, or PCI devices. Instead of the adapter cards physically plugging into each server, a high-performance short-distance cable connects each server's PCI root complex via a PCIe bridge port to a PCIe bridge port in the enclosure device.

In figure 6, either SR IOV or MR IOV can take place, depending on specific PCIe firmware, server hardware, operating system, devices, and associated drivers and management software. In an SR IOV example, each server has access to some number of dedicated adapters in the external card cage, for example, InfiniBand, Fibre Channel, Ethernet, or Fibre Channel over Ethernet (FCoE) and converged network adapters (CNAs), also known as HBAs. SR IOV implementations do not allow different physical servers to share adapter cards. MR IOV builds on SR IOV by enabling multiple physical servers to access and share PCI devices such as HBAs and NICs safely and with transparency.

The primary benefit of PCI IOV is to improve utilization of PCI devices, including adapters or mezzanine cards, as well as to enable performance and availability for slot-constrained and physical footprint or form factor-challenged servers. Caveats of PCI IOV are distance limitations and the need for hardware, firmware, operating system, and management software support to enable safe and transparent sharing of PCI devices. Examples of PCIe IOV vendors include Aprius, NextIO and Virtensys among others.
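The SR IOV versus MR IOV distinction above can be captured in a small, purely illustrative sketch. This is a toy model, not any real driver or management API: with SR IOV, the virtual functions of one physical adapter go only to guests on a single server; MR IOV additionally lets separate physical servers share the same device.

```python
# Toy model of SR IOV vs MR IOV sharing semantics: SR IOV carves one
# physical adapter into virtual functions (VFs) for guests on a single
# server; MR IOV also allows VFs to be handed out across different
# physical servers over a shared PCIe fabric.

class PCIAdapter:
    def __init__(self, name, num_vfs, multi_root=False):
        self.name = name
        self.free_vfs = num_vfs
        self.multi_root = multi_root   # True if MR IOV capable
        self.owner_server = None       # SR IOV devices pin to one server

    def assign_vf(self, server, guest):
        """Hand one virtual function of this adapter to a guest."""
        if self.owner_server is None:
            self.owner_server = server
        elif self.owner_server != server and not self.multi_root:
            raise ValueError("SR IOV device cannot span physical servers")
        if self.free_vfs == 0:
            raise ValueError("no virtual functions left")
        self.free_vfs -= 1
        return f"{self.name}: VF -> {guest} on {server}"

sr = PCIAdapter("FC-HBA", num_vfs=4)
print(sr.assign_vf("server1", "vm1"))   # ok: same server
# sr.assign_vf("server2", "vm9") would raise: SR IOV is single-server

mr = PCIAdapter("CNA", num_vfs=8, multi_root=True)
print(mr.assign_vf("server1", "vm1"))   # ok
print(mr.assign_vf("server2", "vm2"))   # ok: MR IOV spans servers
```

The `multi_root` flag is the whole difference in this sketch, mirroring how MR IOV builds on SR IOV by relaxing the single-server constraint.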

InfiniBand IOV
InfiniBand based IOV solutions are an alternative to Ethernet-based solutions. Essentially, InfiniBand approaches are similar, if not identical, to converged Ethernet approaches including FCoE, with the difference being InfiniBand as the network transport. InfiniBand HCAs with special firmware are installed into servers that then see a Fibre Channel HBA and Ethernet NIC from a single physical adapter. The InfiniBand HCA also attaches to a switch or director that in turn attaches to Fibre Channel SAN or Ethernet LAN networks.

The value of InfiniBand converged networks is that they exist today, and they can be used for consolidation as well as to boost performance and availability. InfiniBand IOV also provides an alternative for those who choose not to deploy Ethernet.

From a power, cooling, floor-space or footprint standpoint, converged networks can be used for consolidation to reduce the total number of adapters and the associated power and cooling. In addition to removing unneeded adapters without loss of functionality, converged networks also free up or allow a reduction in the amount of cabling, which can improve airflow for cooling, resulting in additional energy efficiency. An example of a vendor using InfiniBand as a platform for I/O virtualization is Xsigo.
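To put rough, hypothetical numbers on that consolidation benefit: with redundant dedicated FC and Ethernet adapters per server versus a single redundant converged pair, the adapter and cable count drops in half.

```python
# Illustrative adapter/cable count (made-up numbers) for a rack of
# servers with dedicated redundant FC + Ethernet adapters versus a
# single redundant converged pair per server.

def total_ports(servers, adapters_per_server):
    """Each adapter implies at least one cable, so counts track together."""
    return servers * adapters_per_server

dedicated = total_ports(16, 4)   # 2 FC + 2 Ethernet per server = 64
converged = total_ports(16, 2)   # one redundant converged pair = 32

print(dedicated, converged, dedicated - converged)  # 64 32 32
```

Fewer adapters means less power and cooling, and the halved cable count is where the airflow improvement mentioned above comes from.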

General takeaway points include the following:

  • Minimize the impact of I/O delays to applications, servers, storage, and networks
  • Do more with what you have, including improving utilization and performance
  • Consider latency, effective bandwidth, and availability in addition to cost
  • Apply the appropriate type and tiered I/O and networking to the task at hand
  • I/O operations and connectivity are being virtualized to simplify management
  • Convergence of networking transports and protocols continues to evolve
  • PCIe IOV is complementary to converged networking including FCoE

Moving forward, a revolutionary new technology may emerge that finally eliminates the need for I/O operations. However until that time, or at least for the foreseeable future, several things can be done to minimize the impacts of I/O for local and remote networking as well as to simplify connectivity.


Learn more about IOV, converged networks, LAN, SAN, MAN and WAN related topics in Chapter 9 (Networking with your servers and storage) of The Green and Virtual Data Center (CRC) as well as in Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier).

Ok, nuff said.

Cheers gs


Should Everything Be Virtualized?


Should everything, that is all servers, storage and I/O along with facilities, be virtualized?

The answer, not surprisingly, is: it depends!

Denny Cherry (aka Mrdenny) over at ITKE did a great recent post about applications not being virtualized, particularly databases. On some of the points or themes we are on the same or similar page, while on others we differ slightly, though not by very much.

Unfortunately, consolidation is commonly misunderstood to be the sole function or value proposition of server virtualization, given its first-wave focus. I agree that not all applications or servers should be consolidated (note that I did not say virtualized).

From a consolidation standpoint, the emphasis is often on boosting resource use to cut physical hardware and management costs by increasing the number of virtual machines (VMs) per physical machine (PM). Ironically, while VMs using VMware, Microsoft HyperV, or Citrix/Xen among others can leverage a common gold image for cloning or rapid provisioning, there are still separate operating system instances and applications that need to be managed for each VM.

Sure, VM tools from the hypervisor along with 3rd party vendors help with these tasks, and storage vendor tools including dedupe and thin provisioning help to cut the data footprint impact of these multiple images. However, there are still multiple images to manage, providing a future opportunity for further cost and management reduction (more on that in a different post).

Getting back on track:

Some reasons that all servers or applications cannot be consolidated include among others:

  • Performance, response time, latency and Quality of Service (QoS)
  • Security requirements including keeping customers or applications separate
  • Vendor support of software on virtual or consolidated servers
  • Financial where different departments own hardware or software
  • Internal political or organizational barriers and turf wars

On the other hand, for those who see virtualization as enabling agility and flexibility, that is, life beyond consolidation, there are many deployment opportunities for virtualization (note that I did not say consolidation). For some environments and applications, the emphasis can be on performance, quality of service (QoS) and other service characteristics, where the ratio of VMs to PMs will be much lower, if not one to one. This is where Mrdenny and I are essentially on the same page, perhaps saying it differently, with plenty of caveats and clarification needed of course.

My view is that in life beyond consolidation, many more servers or applications can be virtualized than might be otherwise hosted by VMs (note that I did not say consolidated). For example, instead of a high number or ratio of VMs to PMs, a lower number and for some workloads or applications, even one VM to PM can be leveraged with a focus beyond basic CPU use.

Yes you read that correctly, I said why not configure some VMs on a one to one PM basis!

Here's the premise: today's current wave or focus is around maximizing the number of VMs and/or reducing the number of physical machines to cut capital and operating costs for under-utilized applications and servers, thus the move to stuff as many VMs into/onto a PM as possible.

However, for those applications that cannot be consolidated as outlined above, there is still a benefit to having a VM dedicated to a PM. For example, dedicating a PM (blade, server or perhaps core) allows performance and QoS aims to be met while still providing operational and infrastructure resource management (IRM), DCIM or ITSM flexibility and agility.

Meanwhile, during busy periods the application, such as a database server, could have its own PM, yet during off-hours some other VM could be moved onto that PM for backup or other IRM/DCIM/ITSM activities. Likewise, by having the VM under the database with a dedicated PM, the application could be moved proactively for maintenance or, in a clustered HA scenario, to support BC/DR.

What can and should be done?
First and foremost, decide how many VMs per PM is the right number for your environment and different applications to meet your particular requirements and business needs.

Identify various VM to PM ratios to align with different application service requirements. For example, some applications may run on virtual environments with a higher number of VMs to PMs, others with a lower number of VMs to PMs and some with a one VM to PM allocation.
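One way to think about such tiers is as target VM-to-PM ratios per service class. The sketch below uses made-up tier names and ratios purely for illustration:

```python
import math

# Hypothetical tiering of applications by VM-to-PM ratio based on
# service requirements (tier names and ratios are illustrative only).
TIERS = {
    "consolidation": 20,  # many under-utilized VMs per PM
    "general":        8,
    "performance":    2,
    "dedicated":      1,  # one VM per PM for QoS-sensitive apps
}

def pms_needed(vms_by_tier):
    """Given {tier: number_of_vms}, return PMs needed per tier (ceiling)."""
    return {t: math.ceil(n / TIERS[t]) for t, n in vms_by_tier.items()}

demand = {"consolidation": 90, "general": 20, "performance": 5, "dedicated": 3}
print(pms_needed(demand))
# {'consolidation': 5, 'general': 3, 'performance': 3, 'dedicated': 3}
```

The dedicated tier deliberately gets a one-to-one allocation: it still gains the VM's mobility and management benefits without sharing the PM.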

Certainly there will be, for different reasons, the need to keep some applications on a dedicated PM without introducing a hypervisor and VM; however, many applications and servers can benefit from virtualization (again note, I did not say consolidation) for agility, flexibility, BC/DR, HA and ease of IRM, assuming the costs work in your favor.

Additional general to do or action items include among others:

  • Look beyond CPU use also factoring in memory and I/O performance
  • Keep response time or latency in perspective as part of performance
  • More and fast memory are important for VMs as well as for applications including databases
  • High utilization may not show high hit rates or effectiveness of resource usage
  • Fast servers need fast memory, fast I/O and fast storage systems
  • Establish tiers of virtual and physical servers to meet different service requirements
  • See efficiency and optimization as more than simply driving up utilization to cut costs
  • Productivity and improved QoS are also tenets of an efficient and optimized environment

These are themes, among others, that are covered in chapters 3 (What Defines a Next-Generation and Virtual Data Center?), 4 (IT Infrastructure Resource Management), 5 (Measurement, Metrics, and Management of IT Resources), and 7 (Servers—Physical, Virtual, and Software) in my book “The Green and Virtual Data Center” (CRC), which you can learn more about here.

Welcome to life beyond consolidation, the next wave of desktop, server, storage and IO virtualization along with the many new and expanded opportunities!

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Another StorageIO Appearance on Storage Monkeys InfoSmack

Following up on a previous appearance, I recently had the opportunity to participate in another Storage Monkeys InfoSmack podcast episode.

In the most recent podcast, discussions centered on the recent service disruption at the Microsoft/T-Mobile Sidekick cloud service, FTC blogger disclosure guidelines, whether Brocade is up for sale and who should buy them, and SNIA and SNW among other topics.

Here are a couple of relevant links pertaining to topics discussed in this InfoSmack session.

If you are involved with servers, storage, I/O networking, virtualization and other related data infrastructure topics, check out Storage Monkeys and InfoSmack.

Cheers – gs

Greg Schulz – StorageIO, Author “The Green and Virtual Data Center” (CRC)

Poll: What Do You Think of IT Clouds?

Clouds

IT clouds (compute, applications, storage, and services) are a popular topic for discussion, with some people entirely sold on them as the way of the future while others totally dismiss them; meanwhile, there are plenty of opinions in between.

I recently shared some of my thoughts in this blog post about IT clouds; now what’s your take (your identity will remain confidential)?

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Storage Efficiency and Optimization – The Other Green

For those of you in the New York City area, I will be presenting live and in person at the Storage Decisions conference on September 23, 2009: The Other Green, Storage Efficiency and Optimization.

Throw out the “green” buzzword, and you’re still left with the task of saving or maximizing use of space, power, and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service, response time, performance and availability, necessitating faster, energy-efficient technologies to achieve optimization objectives.

To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize both on-line active or primary as well as near-line or secondary storage environments during tough economic times, and to position for future growth; after all, there is no such thing as a data recession!

Topics, technologies and techniques that will be discussed include among others:

  • Energy efficiency (strategic) vs. energy avoidance (tactical), what’s different between them
  • Optimization and the need for speed vs. the need for capacity, finding the right balance
  • Metrics & measurements for management insight, what the industry is doing (or not doing)
  • Tiered storage and tiered access including SSD, FC, SAS, tape, clouds and more
  • Data footprint reduction (archive, compress, dedupe) and thin provision among others
  • Best practices, financial incentives and what you can do today
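On the metrics and measurements point, here is a hedged sketch of two simple efficiency indicators, activity per watt for performance tiers and capacity per watt for space-optimized tiers (the device figures below are made-up examples for illustration, not vendor specifications):

```python
# Hypothetical metric sketch: energy efficiency means different things
# for different tiers. Figures are illustrative assumptions only.
def iops_per_watt(iops: float, watts: float) -> float:
    """Productivity metric for active (performance) storage tiers."""
    return iops / watts

def gb_per_watt(capacity_gb: float, watts: float) -> float:
    """Density metric for near-line (capacity) storage tiers."""
    return capacity_gb / watts

fast_tier = {"iops": 50000, "gb": 400, "watts": 25}    # e.g. SSD-class device
dense_tier = {"iops": 300, "gb": 2000, "watts": 10}    # e.g. capacity disk

print(iops_per_watt(fast_tier["iops"], fast_tier["watts"]))  # activity per watt
print(gb_per_watt(dense_tier["gb"], dense_tier["watts"]))    # capacity per watt
```

The takeaway: comparing the two tiers on a single metric would make one of them look wasteful, which is why tiered storage and tiered access go hand in hand with efficiency discussions.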

This is a free event for IT professionals; however, space, I hear, is limited. Learn more and register here.

For those interested in broader IT data center and infrastructure optimization, check out the ongoing seminar series The Infrastructure Optimization and Planning Best Practices (V2.009) – Doing more with less without sacrificing storage, system or network capabilities, which continues September 22, 2009 with a stop in Chicago. This is also a free seminar; register and learn more here or here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)

Ok, so I should have used that intro last week before heading off to VMworld in San Francisco instead of after the fact.

Think of it as a high-latency title or intro, kind of like attaching a fast SSD to a slow, high-latency storage controller, a fast server to a slow network, or a fast network to slow storage and servers; it is what it is.

I/O virtualization (IOV) and Virtual I/O (VIO), along with I/O and networking convergence, have been getting more and more attention lately, particularly on the convergence front. In fact, one might conclude that it is suddenly trendy to be on the IOV, VIO and convergence bandwagon given how cloud, SOA and SaaS hype are being challenged, perhaps even turning into storm clouds?

Let’s get back on track, or in the case of the past week, get back in the car, get back on the plane, get back into the virtual office, and see what it all has to do with Virtual I/O and VMworld.

The convergence game has at its center Brocade, emanating from the data center and storage-centric I/O corner, challenging Cisco, hailing from the MAN, WAN and LAN general networking corner.

Granted, both vendors have dabbled with success in each other’s corners or areas of focus in the past. For example, Brocade has, via acquisitions (McDATA+Nishan+CNT+INRANGE among others), a diverse and capable stable of local and long-distance SAN connectivity and channel extension for mainframe and open systems supporting data replication, remote tape and wide-area clustering. Not to mention deep bench experience with the technologies, protocols and partner solutions for LAN, MAN (xWDM), WAN (iFCP, FCIP, etc.) and even FAN (file area networking, aka NAS), along with iSCSI in addition to Fibre Channel and FICON solutions.

Disclosure: Here’s another plug ;) Learn more about SANs, LANs, MANs, WANs, POTs, PANs and related technologies and techniques in my book “Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures” (Elsevier).

Cisco, not to be outdone, has a background in the LAN, MAN, WAN space directly, or, similar to Brocade, via partnerships, with product experience and depth. In fact, while many of my former INRANGE and CNT associates ended up at Brocade via McDATA or indirectly, some ended up at Cisco. While Cisco is known for general networking, over the past several years they have gone from zero to being successful in the Fibre Channel and, yes, even the FICON mainframe space, while, like Brocade (HBAs), dabbling in other areas like servers and storage, not to mention consumer products.

What does this have to do with IOV and VIO, let alone VMworld and my virtual office? Hang on, hold that thought for a moment; let’s get the convergence aspect out of the way first.

On the I/O and networking convergence (e.g. Fibre Channel over Ethernet – FCoE) scene, both Brocade (Converged Enhanced Ethernet – CEE) and Cisco (Data Center Ethernet – DCE), along with their partners, are rallying around each other’s camps. This is similar to how a pair of prize fighters maneuver in advance of a match, including plenty of trash talk, hype and all that goes with it. Brocade and Cisco throwing mud balls (or spam) at each other, or having someone else do it, is nothing new; however, in the past each has had their core areas of focus coming from different tenets, in some cases selling to different people in an IT environment or those in VAR and partner organizations. Brocade and Cisco are not alone, nor is the I/O networking convergence game the only one in play, as it is being complemented by the IOV and VIO technologies addressing different value propositions in IT data centers.

Now on to the IOV and VIO aspect along with VMworld.

For those of you who attended VMworld and managed to get outside of the session rooms, media/analyst briefing (or re-education) rooms, or partner and advisory board meetings to walk the expo hall show floor, there was the usual sea of vendors and technology. There were servers (physical and virtual), storage (physical and virtual), terminals, displays and other hardware, I/O and networking, data protection, security, cloud and managed services, development and visualization tools, infrastructure resource management (IRM) software tools, manufacturers and VARs, consulting firms and even some analysts with booths selling their wares among others.

Likewise, in the onsite physical data center supporting the virtual environment, there were servers, storage, networking, cabling and associated hardware along with applicable software; tucked away in all of that were also some converged I/O and networking and IOV technologies.

Yes, IOV, VIO and I/O networking convergence were at VMworld in force; just ask Jon Torr of Xsigo, who was beaming like a proud papa wanting to tell anyone who would listen that his wares were part of the VMworld data center (Disclosure: Thanks for the T-Shirt).

Virtensys had their wares on display with Bob Nappa more than happy to show the technology beyond a GUI demo, including how their solution incorporates disk drives and an LSI MegaRAID adapter to support VM boot while leveraging off-the-shelf or existing PCIe adapters (SAS, FC, FCoE, Ethernet, SATA, etc.) and allowing adapter sharing across servers; not to mention, they won the best new technology award at VMworld.

NextIO, who is involved in the IOV/VIO game, was there along with convergence vendors Brocade, Cisco, QLogic and Emulex among others. Rest assured, there are many other vendors and VARs in the VIO and IOV game, either still in stealth, semi-stealth or having recently launched.

IOV and VIO, as delivered in solutions like those from Aprius, Virtensys, Xsigo and NextIO among others, are complementary to I/O and networking convergence. While they sound similar, there is in fact confusion as to whether Fibre Channel N_Port ID Virtualization (NPIV) and VMware virtual adapters are IOV and VIO vs. solutions that are focused on PCIe device/resource extension and sharing.

Another point of confusion around I/O virtualization and virtual I/O is blade system or blade center connectivity solutions such as HP Virtual Connect or IBM Fabric Manager, not to mention those from Egenera, which add confusion to the equation. Some of the buzzwords you will be hearing and reading more about include PCIe Single Root IOV (SR-IOV) and Multi-Root IOV (MR-IOV). Think of it this way: within VMware you have virtual adapters, and Fibre Channel virtual N_Port IDs for LUN mapping/masking, zone management and other tasks.

IOV enables localized sharing of physical adapters across different physical servers (blades or chassis) with distances measured in a few meters; after all, it’s the PCIe bus that is being extended. Thus, it is not a replacement for longer-distance in-the-data-center solutions such as FCoE, or even SAS for that matter; they are complementary, or at least should be considered complementary.

The following are some links to previous articles and related material, including an excerpt (yes, another plug ;)) from chapter 9, “Networking with your servers and storage,” of my new book “The Green and Virtual Data Center” (CRC). Speaking of virtual and physical, “The Green and Virtual Data Center” (CRC) was on sale at the physical VMworld book store this week, as well as at the virtual book stores including Amazon.com

The Green and Virtual Data Center

The Green and Virtual Data Center (CRC) on book shelves at VMworld Book Store

Links to some IOV, VIO and I/O networking convergence pieces among others, as well as news coverage, comments and interviews can be found here and here with StorageIOblog posts that may be of interest found here and here.

SearchSystemChannel: Comparing I/O virtualization and virtual I/O benefits – August 2009

Enterprise Storage Forum: I/O, I/O, It’s Off to Virtual Work We Go – December 2007

Byte and Switch: I/O, I/O, It’s Off to Virtual Work We Go (Book Chapter Excerpt) – April 2009

Thus I went to VMworld in San Francisco this past week, as much of the work I do involves convergence, similar to my background: servers, storage, I/O networking, hardware, software, virtualization, data protection, performance and capacity planning.

As to the virtual work, well, I spent some time on airplanes this week, which, as is often the case, served as my virtual office. Granted, it was real work that had to be done; however, I also had a chance to meet up with some fellow tweeters at a tweetup Tuesday evening before getting back on a plane in my virtual office.

Now, I/O, I/O, it’s back to real work I go at Server and StorageIO; kind of rhymes, doesn’t it!

Worried about IT M&A, here come the new startups!

Storage I/O trends

Late last year, I did a post (see here) countering the notion that there is a lack of innovation in IT, specifically around data storage. Recently I did a post about a Funeral for a Friend, not to mention yesterday’s post about summer marriages.

For those who are concerned about a lack of innovation, or that consolidation will result in just a few big vendors, here’s some food for thought. Those big vendors, in addition to internal organic growth, also grow by buying or merging with other vendors. Those other vendors emerge as startups; some grow, blossom and are bought, some make a decent business on their own, some are looking to be bought, some need to be bought, and some will see fire sales, liquidation or simply close their doors, perhaps re-launching as a new company.

With all the M&A activity that has taken place, and I’m sure (speculation only ;) ) there will be plenty more, here’s a short and far from comprehensive list of some startups or companies you may not have heard of yet. There are additional ones still in deep stealth; some on the list are still in stealth yet talking and letting information trickle out, thus only non-NDA information is shown here. In other words, you can find out about these via publicly available information and sources.

Something that I have noticed and talked with others in the industry about is that this generation of startups, at least for now, is taking a far more low-key approach to launches than in the past. Gone, at least for now, are the dot-com era over-the-top announcements, in some cases made before there was even a product shipping for actual customer production deployment. This crop or corps of startups is taking its time, leveraging the current economic situation to further incubate technologies and go-to-market strategies, not to mention minimizing the over-the-top VC funding we have seen in the past. Some of these may not appear to be storage related, and that would be correct: this list includes those associated with data infrastructure technologies from servers to storage to networking, hardware, software and services among others as a common theme.

Disclosure Notice: None of these companies mentioned are, nor have ever been, clients of StorageIO. Why do I mention this? Why not!

Balesio – File compression solutions
Box.net – Internet/web/cloud storage service with high availability and backup
Cirrustore – Backup data protection tools
Dataslide – Hard rectangular disk (HRD)
Enclarity – Healthcare CRM and analysis tools
Enstratus – Amazon cloud computing management tools
Exludas – Multi-core optimization
Firescope – CMDB data solutions
Greenbytes – ZFS based storage management solutions
Likewise – Open backup software for Macs/Linux/Windows
Liquidcomputing – High density servers
Maxiscale – Web infrastructure (Stealth)
Metalogix – Archiving solutions
Neptuny – Capacity Planning
Netronome – Network and I/O optimization technology
Newboundary – IT policy management and IRM tools
Nexenta – ZFS-based storage management solutions
Pergamumsystems – Archive solutions (Stealth)
Pranah – SMB Storage vendor formerly known as Marner
Procedo – Archiving and migration solutions
Rebit – Backup and data protection solutions
Rightscale – Amazon cloud computing management tools
Rmsource – Cloud backup solutions
RNAnetworks – Virtual memory management solutions
Scale Computing – Clustered storage management software
ScaleMP – Multi-core virtualization for scale out
SiberSystems – Goodsync data protection solutions
Sparebackup – Backup data protection solutions
StorageFusion – Storage resource analysis
Storspeed – NAS/NFS optimization solutions (Stealth)
Sugarsync – Backup and data protection solutions
Surgient – Cloud computing solutions
Synology – SMB storage solutions
TwinStrata – BC/DR analysis and assessment tools
Vadium – Security and encryption tools
Vembu – Backup data protection tools
Versant – Object database management solutions
Vipre – Security, data loss, data leak prevention
VirtenSys – Virtual I/O and I/O virtualization (IOV)
Vizrt – Video management software tools
WhipTail – Flash SSD solutions
Xenos – Archive and data footprint reduction solutions

Links to the above along with many other companies including manufactures and vars can be found on the Interesting Links page at StorageIO.

Food for thought for your summer technology picnic fun.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Catch of the day or post of the day!

Ok, I know, it’s been a couple of weeks since my last post. Sure, I have been tweeting now and then, attending several briefings with new emerging as well as existing vendors ahead of upcoming announcements, not to mention getting some other content out, from webcasts to podcasts, videos, interviews, articles, tips and presentations at various events, pertaining to Green IT, virtualization, cloud storage and computing, backup, data protection, performance and capacity planning among other topics.

Anyway, for now, a quick post, as I have many others that I have been wanting to do and will be doing soon; however, I wanted to get a few things out sooner vs. later, and after all, all work and no play makes for a dull day, right?

Well, last week, after spending a couple of days in Chicago at Storage Decisions where I presented a couple of sessions and recorded several videos, I had a chance to get out and do some fishing and catching. Fishing is always great; however, catching (and releasing) is even more fun, especially when you can catch some, toss some, and keep some for dinner, which is what occurred last week when my friend Rob and I ventured out for a couple of hours and found where the fish were (see picture) on the St. Croix river.

Catch of the Day

Rob on left (Bruins warm up jacket for Bass fishing), Greg on the right (Mustang PFD Jacket)

Catch of the day line-up
From right to left: bottle bass (caught at the dock ;) ), striped bass, northern pike (swamp shark), more striped bass, and another bottle bass (also caught at the dock).

Ok, nuff fish talk for now; back to work, get a few things done, and then maybe this weekend, get another blog post done, maybe some fishing, and enjoy the summer weather before heading off to Toronto on Monday for Storage Decisions on Tuesday, then a couple of webcasts and web radio events on Wednesday among other activities.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

IBM Out, Oracle In as Buyer of Sun

Following on the heels of IBM’s talks with Sun that broke down a week or so ago, today’s news is that Oracle has agreed to buy Sun, extending Larry Ellison’s software empire as well as boosting his hardware holdings from fast sport platforms to server, storage and other IT data center hardware.

What’s the real play and story here is certainly open to discussion and debate. Is it good, is it bad, who are the winners and losers? That will be determined as the dust settles, not to mention as responses come from across the industry, along with new product announcements and enhancements slated by some for as early as this week. What if any role does Cisco’s desire to get into servers and maybe storage play? Does Oracle want to make sure they remain at the big table?

Regarding discussions of this deal and what it means, the Twitter world has been abuzz already this morning; click here to see and follow some of the conversations, perspectives and insights being exchanged.

It’s time to get ready to head off to the airport, as I’m doing several speaking events and keynote sessions this week on the right coast while the left coast is abuzz with the Sun and Oracle activity.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

March and Mileage Mania Wrap-up

Today’s flight to Santa Ana (SNA) in Orange County, California for an 18-hour visit marks my 3rd trip to the left coast in the past four weeks, which started with a trip to Los Angeles. The purpose of today’s trip is to deliver a talk on Business Continuance (BC) and Disaster Recovery (DR) topics for virtual server and storage environments, along with related data transformation themes, part of a series of ongoing events.

Planned flight path from MSP to SNA, note upper midwest snow storms. Thanks to Northwest Airlines, now part of Delta!

This is a short trip to southern California in that I have to be back in Minneapolis for a Wednesday afternoon meeting, followed by keynoting at an IT Infrastructure Optimization Seminar in downtown Minneapolis Thursday morning. Right after the Thursday morning session, it’s off to the other coast for some Friday morning and early afternoon sessions in the Boston area, the results of which I hope to share with you in a not-so-distant future posting.

Where has March gone? It’s been a busy and fun month out on the road with in-person seminars, vendor and user group events in Minneapolis, Los Angeles, Las Vegas, Milwaukee, Atlanta, St. Louis, Birmingham, Minneapolis (for the CMG user group), Cincinnati and Orange County, not to mention some other meetings and consulting engagements elsewhere, including participating in a couple of webcasts and virtual conferences/seminars while on the road. Coverage and discussion around my new book “The Green and Virtual Data Center” (CRC) continues to expand; read here to see what’s being said.

What has made the month fun in addition to traveling around the country is the interaction with the hundreds of IT professionals from organizations of all size hearing what they are encountering, what their challenges are, what they are thinking, and in general what’s on their mind.

Some of the common themes include:

  • There’s no such thing as a data recession, however the result is doing more with less, or with what you have
  • Confusion abounds around green hype, including carbon footprints vs. core IT and business issues
  • There is life beyond consolidation for server and storage virtualization to enable business agility
  • Security and encryption remain popular topics, as does heterogeneous and affordable key management
  • End-to-end IT resource management for virtual environments is needed that is scalable and affordable
  • Performance and quality of service cannot be sacrificed in the quest to drive up storage utilization
  • Clouds, SSD (flash), dedupe, FCoE and thin provisioning among others are on the watch list
  • Tape continues to be used, complementing disks in tiered storage environments along with VTLs
  • Dedupe continues to be deployed and we are just seeing the very tip of the iceberg of opportunity
  • Software licensing cost savings or reallocation should be a next-step focus for virtual environments
  • Now, for a bit of irony and humor: overheard was a server sales person talking to a storage sales person, comparing notes on how they are missing their forecasts as their customers buy fewer servers and storage now that they are consolidating with virtualization, or using disk dedupe to eliminate disk drives. Doh!!!
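A couple of the themes above (dedupe in particular) come down to simple arithmetic. As a hypothetical sketch (the 10:1 ratio below is an illustrative assumption, not a measured result), the space savings behind a deduplication claim can be computed as:

```python
# Simple arithmetic behind data footprint reduction claims; the numbers
# are illustrative assumptions, not measured results from any product.
def dedupe_savings(logical_gb: float, ratio: float):
    """Physical capacity and percent saved for a dedupe ratio (e.g. 10 = 10:1)."""
    physical_gb = logical_gb / ratio
    saved_pct = (1 - physical_gb / logical_gb) * 100
    return physical_gb, saved_pct

# 10 TB of logical data at an assumed 10:1 ratio
physical, saved = dedupe_savings(10000, 10)
print(physical, saved)  # 1000 GB physical, 90% saved
```

Which is also the sales irony in the last bullet: every point of reduction ratio is a disk drive somebody does not buy.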

Now if those sales people can get their marketing folks to give them the playbook for virtualization for business agility, improving performance and enabling business growth in an optimized, transformed environment, they might be able to tell a different story with their customers for new opportunities…

What’s on deck for April? More of the same; however, also watch and listen for some additional web-based content including interviews, quotes and perspectives on industry happenings, articles, tips and columns, reports, blogs, videos, podcasts, webcasts and Twitter activity, as well as appearances at events in Boston, Chicago, New Jersey and Providence among other venues.

To all of those who came out to the various events in March, thank you very much; I look forward to future follow-up conversations as well as seeing you at some of the upcoming events.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Work and Entertainment From Coast to Coast

A week ago I was in St. Petersburg, Tampa and Miami, Florida for a mix of work and relaxation along with Karen (Mrs. Schulz), visiting with my cousin and her husband, who live in the St. Pete Beach area, for a few days before getting back to work. While in the St. Pete and Tampa area, for fun, we did an afternoon at Busch Gardens including a ride on Montu. For those who have not ridden Montu, here’s a video I found that someone recorded to help give you a perspective of the ride. Other fun activities included stops or time at Billy’s Stone Crab and Seafood joint, kayaking, lounging poolside, shelling at Ft. De Soto and St. Pete Beach, as well as a visit to the Hurricane among others.

In Miami, the pool area at the Four Seasons, including a nice cabana poolside spot to escape the cool breeze, made for a great relaxing and catch-up-on-some-work spot while Karen relaxed in the sun. Some of the restaurants in Miami we visited when taking a break from work included Gordon Biersch and Rosa for some outstanding, made-tableside fresh guacamole en molcajete!

Speaking of work, the Florida trip involved doing keynotes at events in both Tampa and Miami with a theme of IT Infrastructure Optimization, with both events well attended. Themes included doing more with less, or doing more with what you have; addressing data footprint and data management to boost productivity; and how to address the continued growth in data and the need to process, move and store more data and information. A discussion point prompted the thought of whether there is a data recession or not (see previous blog post and here). Other topics of discussion and interest included converged networking for voice, data and general networking, security, server and storage virtualization, performance and capacity planning, data protection and BC/DR among others.

This past week involved a lunch-and-learn keynote in the Minneapolis area with a local VAR before a quick trip to the other (left) coast for another IT Infrastructure Optimization session and keynote, this time in Los Angeles. Some common themes heard from IT professionals at this past week’s events echoed those heard in Florida, as well as concern about managing encryption keys, not to mention securing virtual environments and software licensing models in virtualized server environments. The trip to LA also enabled a quick visit with friend Bruce Rave of Go Deep fame, who provided a great tour and sightseeing of the Hollywood music scene.

Hollywood stops included dinner at Genghis Cohen (the duck and cashew chicken were outstanding) followed by visits to the Cat and Fiddle and the infamous Rainbow Bar & Grill next door to the legendary Roxy. People watching was great, as was the music and ambiance, including a Nikki Sixx of Mötley Crüe sighting at the Rainbow as well as Dr. Sanjay Gupta of CNN seen in the hotel lobby minutes after appearing on Larry King Live.

Thanks to everyone who came out and participated in the seminar events in Tampa, Miami, Minneapolis and LA; I look forward to seeing and hearing from you again soon. Now it’s time to get ready to head off to the airport for this week’s events and activities, including stops in Las Vegas and Milwaukee among others.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Is There a Data and I/O Activity Recession?

Storage I/O trends

With all the focus on both domestic and international economic woes, and discussion of recessions, depressions and possible future rapid inflation, recent conversations with IT professionals from organizations of all sizes across different industry sectors and geographies prompted the question: is there also a data and I/O activity recession?

Here’s the premise: if you listen to current economic and financial reports as well as employment information, the immediate conclusion is that yes, there should also be a recession in the form of a contraction in the amount of data being processed, moved and stored, which would also impact I/O (e.g. DAS, LAN, SAN, FAN or NAS, MAN, WAN) networking activity as well. After all, the server, storage, I/O and networking vendors’ earnings are all being impacted, right?

As is often the case, there is more to the story. Certainly vendor earnings are down and some vendors are shipping less product than during corresponding periods a year or more ago. Likewise, I continue to hear from IT organizations, VARs and vendors of lengthened sales cycles due to increased due diligence and more scrutiny of IT acquisitions, meaning that sales and revenue forecasts continue to be very volatile, with some vendors pulling back on their future financial guidance.

However, does that mean fewer servers, storage, I/O and networking components, not to mention less software, are being shipped? In some cases there is or has been a slowdown. In other cases there is less demand for new hardware due to pricing pressures; increased performance and capacity density, where more work can be done by fewer devices; consolidation; data footprint reduction; optimization; virtualization including VMware and other techniques; not to mention a decrease in some activity. On the other hand, while some retail vendors are seeing their business volume decrease, others such as Amazon are seeing continued heavy demand and activity.

    Been on a trip lately through an airport? Granted, the airlines have instituted capacity management (e.g. capacity planning) and fleet optimization to align the number of flights or frequency, as well as aircraft type (tiering), to the demand. In some cases smaller planes, in other cases larger planes; for some, more stops at a lower price (trade time for money), or in other cases shorter direct routes for a higher fee. The point is that while there is an economic recession underway, and granted there are fewer flights, many if not most of those flights are full, which means more transactions and information for the airlines' reservations, operational, customer relations and loyalty systems to process.

    Mergers and acquisitions usually mean a reduction or consolidation of activity, resulting in excess and surplus technologies. Yet talking with some financial services organizations, while over time some of their systems will be consolidated to achieve operating efficiencies and synergies, near term there is, in some cases, a need for more IT resources to support the increased activity of running multiple applications along with increased customer inquiry and conversion activity.

    On a go forward basis, there is the need to support more applications and services that will generate more I/O activity as data is moved, processed and stored. Not to mention data being retained in multiple locations for longer periods of time to meet both regulatory and non-regulatory compliance requirements, as well as for BC/DR and business intelligence (BI) or data mining for marketing and other purposes.
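    To put that retention point in perspective, here is a hypothetical back-of-the-envelope sketch (all numbers and copy ratios are illustrative assumptions, not figures from any particular organization) of how keeping data in multiple locations multiplies the effective footprint of a single primary dataset:

    ```python
    # Hypothetical illustration: one primary dataset multiplies once copies for
    # BC/DR, compliance retention and BI extracts are counted. All values are
    # assumptions chosen for illustration only.
    primary_tb = 10  # size of the primary dataset in TB

    # Each entry is a multiple of the primary dataset's size.
    copies = {
        "dr_replica": 1.0,                   # full remote BC/DR copy
        "weekly_backups_retained": 4 * 0.5,  # four retained backups, deduped to ~50%
        "bi_extract": 0.3,                   # partial extract for BI/data mining
    }

    total_tb = primary_tb * (1 + sum(copies.values()))
    print(f"Effective footprint: {total_tb:.1f} TB for {primary_tb} TB of primary data")
    ```

    Even with deduplicated backups, 10 TB of primary data becomes more than four times that once downstream copies are counted, and each of those copies also generates I/O when it is created, replicated or verified.
    
    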

    Speaking of the financial sector, while the economic value of most securities is depressed, and with the wild valuation swings in the stock markets, the result is more data to process, move and store on a daily basis, all of which continues to place more demand on IT infrastructure resources including servers, storage, I/O networking, software, facilities and the people to support them.

    Dow Jones Trading Activity Volume (Courtesy of data360.org)

    For example, the amount of Dow Jones trading activity is on a logarithmic upward trend curve in the example chart from data360.org, which means more selling and buying transactions. The result of more transactions is also an increase in the number of back-office functions for settlement, tracking, surveillance, customer inquiry and reporting, among other activities. This means that more I/Os are generated, with data to be moved, processed, replicated and backed up, along with additional downstream activity and processing.
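    Steady percentage growth is exactly what plots as a straight upward line on a logarithmic axis, the kind of trend that chart shows. A quick hypothetical projection (the growth rate and base volume are assumptions for illustration, not values read from the data360.org chart) shows how quickly compounding transaction volume outruns a flat IT resource budget:

    ```python
    # Hypothetical compound-growth projection. The 25% annual growth rate and
    # base volume are illustrative assumptions, not data from the chart.
    def project(volume: float, annual_growth: float, years: int) -> float:
        """Project a volume forward assuming steady compound annual growth."""
        return volume * (1 + annual_growth) ** years

    base = 1_000_000  # daily transactions today (illustrative)
    for y in (1, 5, 10):
        print(f"Year {y:2d}: {project(base, 0.25, y):,.0f} transactions/day")
    ```

    At an assumed 25% annual growth, volume roughly triples in five years and grows more than ninefold in ten; every one of those transactions carries back-office I/O along with it.
    
    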

    Shifting gears, the same applies to telephone and in particular cell phone traffic, which indirectly places demand on IT systems, particularly those supporting email and other messaging activity. Speaking of email, more and more emails are sent every day; granted, many are spam, yet they all result in more activity as well as more data.

    What’s the point in all of this?

    There is a common awareness among most IT professionals that more data is generated and stored every year, along with an awareness of the increased threats to, and reliance upon, data and information. However, what is not as widely discussed is the increase in I/O and networking activity. That is, space capacity often gets talked about, however I/O performance, response time, activity and data movement can be forgotten, or their importance to productivity diminished. So the point is: keep performance, response time and latency in focus, as well as IOPS and bandwidth, when looking at and planning IT infrastructure to avoid data center bottlenecks.
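    The capacity-versus-activity distinction can be made concrete with simple arithmetic: sustained bandwidth is roughly IOPS times I/O size, so two workloads occupying the same space capacity can place very different demands on I/O resources. A minimal sketch (workload names and numbers are illustrative assumptions, not from any benchmark):

    ```python
    # Back-of-the-envelope relationship: bandwidth (MB/s) = IOPS x I/O size.
    # Illustrates why capacity planning alone misses activity: the same TBs of
    # data can generate very different I/O profiles.
    def bandwidth_mb_per_sec(iops: float, io_size_kb: float) -> float:
        """Approximate sustained bandwidth for a given IOPS rate and I/O size."""
        return iops * io_size_kb / 1024.0

    # Illustrative workloads: transactional (many small I/Os) vs. backup (few large I/Os).
    oltp = bandwidth_mb_per_sec(iops=10_000, io_size_kb=8)    # e.g. database activity
    backup = bandwidth_mb_per_sec(iops=200, io_size_kb=1024)  # e.g. streaming backup

    print(f"OLTP:   {oltp:.1f} MB/s from 10,000 IOPS of 8 KB")
    print(f"Backup: {backup:.1f} MB/s from 200 IOPS of 1 MB")
    ```

    The transactional workload moves less data per second yet issues fifty times as many I/Os, which is why sizing on capacity or bandwidth alone, without IOPS and response time, invites the bottlenecks mentioned above.
    
    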

    Finally for now, what’s your take, is there a data and/or I/O networking recession, or is it business and activity as usual?

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved