EPA Energy Star for Data Center Storage Update

EPA Energy Star

Following up on a recent post about Green IT, energy efficiency and optimization for servers, storage and more, here are some additional thoughts and perspectives, along with industry activity, around the U.S. Environmental Protection Agency (EPA) Energy Star programs for Servers, Data Center Storage and Data Centers.

First, a quick update: Energy Star for Servers is in place, with work now underway on expanding and extending beyond the first specification. Second, the Energy Star for Data Center Storage definition is well underway, including a recent workshop to refine the initial specification along with discussion of follow-on drafts.

Energy Star for Data Centers is also currently undergoing definition; it is focused more on macro or facility energy (notice I did not say electricity) efficiency, as opposed to the productivity or effectiveness that the Server and Storage specifications are working towards.

Among the different industry trade and special interest groups, at least on the storage front, the Storage Networking Industry Association (SNIA) Green Storage Initiative (GSI) and its Technical Work Groups (TWG) have been busily working for the past couple of years on taxonomies, metrics and other items in support of EPA Energy Star for Data Center Storage.

A challenge for SNIA, along with others working on related material pertaining to storage and efficiency, is the multi-role functionality of storage. That is, some storage simply stores data with little to no performance requirement, while other storage is actively used for reading and writing. In addition, there are various categories and architectures, not to mention hardware and software feature functionality, and vendors with different product focus and interests.

Unlike servers, which are either on and doing work, or off or in a low-power mode, storage is either doing active work (e.g. moving data), storing inactive or idle data, or a combination of both. Hence for some, energy efficiency is about how much data can be stored in a given footprint with the least amount of power, known as an inactive or idle measurement.

On the other hand, storage efficiency is also about using the least amount of energy to produce the most work or activity, for example IOPS or bandwidth per watt per footprint.

Thus the challenge, and the need for at least a two-dimensional model that looks at and reflects different types or categories of storage, aligned to active or inactive (e.g. storing) data, enabling apples-to-apples rather than apples-to-oranges comparisons.

This is not all that different from how the EPA looks at motor vehicle categories such as economy cars, sport utility vehicles, and work or heavy utility vehicles when doing different types of work, or when idling.

What does this have to do with servers and storage?

Simple: when a server powers down, where does its data go? That's right, to a storage system using disk, SSD (RAM or flash), tape or optical media for persistence. Likewise, when there is work to be done, where does the data get read into computer memory from, or written to? That's right, a storage system. Hence the need to look at storage in a multi-role manner.

The storage industry is diverse, with some vendors or products focused on performance or activity, others on long-term, low-cost persistent storage for archive and backup, not to mention some doing a bit of both. Hence the herding-cats exercise of moving towards a common goal when different parties have various interests that may conflict, yet all support the needs of various customer storage usage requirements.

Figure 1 shows a simplified, streamlined storage taxonomy that has been put together by SNIA representing various types, categories and functions of data center storage. The green shaded areas are a good step in the right direction to simplify yet move towards realistic and achievable benefits for storage consumers.


Figure 1 Source: EPA Energy Star for Data Center Storage web site document

The importance of the streamlined SNIA taxonomy is that it helps differentiate or characterize various types and tiers of storage (Figure 2) products, facilitating apples-to-apples comparisons instead of apples-to-oranges. For example, online primary storage needs to be looked at in terms of how much work or activity it delivers per energy footprint, which determines its efficiency.


Figure 2: Tiered Storage Example

On the other hand, storage for retaining large amounts of data that is inactive or idle for long periods of time should be looked at on a capacity per energy footprint basis. While final metrics are still being fleshed out, some examples could be active storage gauged by IOPS, work, or bandwidth per watt of energy per footprint, while storage for idle or inactive data could be gauged on a capacity per energy footprint basis.
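The two metric styles above can be sketched as follows. This is a hypothetical illustration of the candidate approaches being discussed, not the official specification, and all system figures are made-up numbers:

```python
# Hypothetical sketch of the two candidate metric styles discussed above:
# activity per watt for active storage, capacity per watt for idle storage.
# All numbers are illustrative assumptions, not measurements.

def activity_per_watt(iops: float, watts: float) -> float:
    """Work-centric efficiency for active (online/primary) storage."""
    return iops / watts

def capacity_per_watt(usable_tb: float, watts: float) -> float:
    """Capacity-centric efficiency for inactive (archive/backup) storage."""
    return usable_tb / watts

# Illustrative comparison: a primary array vs. an archive system
primary = activity_per_watt(iops=50_000, watts=2_000)    # 25.0 IOPS per watt
archive = capacity_per_watt(usable_tb=500, watts=1_000)  # 0.5 TB per watt
print(f"Primary: {primary:.1f} IOPS/W, Archive: {archive:.2f} TB/W")
```

The point of the two-dimensional model is that each category gets compared against its peers: the primary array against other active storage on activity per watt, the archive against other idle-data storage on capacity per watt.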

Which benchmarks or workloads will be used for simulating or measuring work or activity is still being discussed, with proposals coming from various sources. For example, the SNIA GSI TWGs are developing measurements and discussing metrics, as have the Storage Performance Council (SPC) and SPEC among others, including the use of simulation tools such as IOmeter, VMware VMmark, TPC benchmarks, Bonnie, or perhaps even Microsoft ESRP.

Tenets of Energy Star for Data Center Storage over time hopefully will include:

  • Reflective of different types, categories, price-bands and storage usage scenarios
  • Measure storage efficiency for active work along with in-active or idle usage
  • Provide insight for both storage performance efficiency and effective capacity
  • Baseline or raw storage capacity along with effective enhanced optimized capacity
  • Easy to use metrics with more in-depth background or disclosure information

Ultimately the specification should help IT storage buyers and decision makers to compare and contrast different storage systems that are best suited and applicable to their usage scenarios.

This means measuring work or activity per energy footprint at a given capacity and data protection level to meet service requirements, along with during inactive or idle periods. It also means showing storage that is capacity focused in terms of how much data can be stored in a given energy footprint.

One thing that will be tricky, however, will be differentiating GBytes per watt in terms of capacity versus in terms of performance and bandwidth.
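The ambiguity can be made concrete with a small sketch. The numbers below are illustrative assumptions only; the point is that the same phrase "GBytes per watt" describes two very different quantities:

```python
# "GBytes per watt" can mean two different things; this sketch separates
# the capacity reading from the bandwidth reading. Numbers are made up.

def capacity_gb_per_watt(capacity_gb: float, watts: float) -> float:
    """Space efficiency: how much data is stored per watt."""
    return capacity_gb / watts

def bandwidth_gbps_per_watt(gbytes_per_sec: float, watts: float) -> float:
    """Work efficiency: how much data is moved per second per watt."""
    return gbytes_per_sec / watts

# A dense archive scores well on the first, a fast array on the second.
print(capacity_gb_per_watt(capacity_gb=500_000, watts=1_000))   # 500.0
print(bandwidth_gbps_per_watt(gbytes_per_sec=10, watts=2_000))  # 0.005
```

Hence the need to always state which dimension, capacity or bandwidth, a GBytes per watt figure refers to.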


Stay tuned for more on Energy Star for Data Centers, Servers and Data Center Storage.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

I/O Virtualization (IOV) Revisited

Is I/O Virtualization (IOV) a server topic, a network topic, or a storage topic (See previous post)?

Like server virtualization, IOV involves servers, storage, network, operating system, and other infrastructure resource management areas and disciplines. The business and technology value proposition or benefits of converged I/O networks and I/O virtualization are similar to those for server and storage virtualization.

Additional benefits of IOV include:

    • Doing more with the resources (people and technology) that already exist, or reducing costs
    • Single (or pair for high availability) interconnect for networking and storage I/O
    • Reduction of power, cooling, floor space, and other green efficiency benefits
    • Simplified cabling and reduced complexity for server network and storage interconnects
    • Boosting server performance by making the most of I/O or mezzanine slots
    • Reducing I/O and data center bottlenecks
    • Rapid re-deployment to meet changing workload and I/O profiles of virtual servers
    • Scaling I/O capacity to meet high-performance and clustered application needs
    • Leveraging common cabling infrastructure and physical networking facilities

Before going further, let's take a step back for a few moments.

To say that I/O and networking demands and requirements are increasing is an understatement. The amount of data being generated, copied, and retained for longer periods of time is elevating the importance of data storage and infrastructure resource management (IRM). Networking and input/output (I/O) connectivity technologies (Figure 1) tie together facilities, servers, storage, tools for measurement and management, and best practices on a local and wide area basis to enable an environmentally and economically friendly data center.

TIERED ACCESS FOR SERVERS AND STORAGE
There is an old saying that the best I/O, whether local or remote, is an I/O that does not have to occur. I/O is an essential activity for computers of all shapes, sizes, and focus to read and write data in and out of memory (including external storage) and to communicate with other computers and networking devices. This includes communicating on a local and wide area basis for access to or over Internet, cloud, XaaS, or managed services providers such as shown in figure 1.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 1 The Big Picture: Data Center I/O and Networking

The challenge of I/O is that some form of connectivity (logical and physical), along with associated software, is required, and time delays are incurred while waiting for reads and writes to occur. I/O operations that are closest to the CPU or main processor should be the fastest and occur most frequently, for access to main memory using internal local CPU-to-memory interconnects. In other words, fast servers or processors need fast I/O, in terms of both low-latency I/O operations and bandwidth capabilities.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 2 Tiered I/O and Networking Access

Moving out and away from the main processor, I/O remains fairly fast with distance but is more flexible and cost effective. An example is the PCIe bus and I/O interconnect shown in Figure 2, which is slower than processor-to-memory interconnects but is still able to support attachment of various device adapters with very good performance in a cost effective manner.

Farther from the main CPU or processor, various networking and I/O adapters can attach to PCIe, PCIx, or PCI interconnects for backward compatibility to support various distances, speeds, types of devices, and cost factors.

In general, the faster a processor or server is, the more prone to a performance impact it will be when it has to wait for slower I/O operations.

Consequently, faster servers need better-performing I/O connectivity and networks. Better performing means lower latency, more IOPS, and improved bandwidth to meet application profiles and types of operations.
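The arithmetic behind this is simple and worth making explicit. Below is a rough illustration with assumed numbers: if compute time shrinks (a faster server) but I/O wait stays constant, I/O becomes the dominant share of total response time:

```python
# Rough illustration with assumed numbers: as CPU time shrinks but I/O wait
# does not, waiting on I/O dominates total elapsed time, which is why
# faster servers need better-performing I/O connectivity.

def io_share(compute_ms: float, io_wait_ms: float) -> float:
    """Fraction of total elapsed time spent waiting on I/O."""
    return io_wait_ms / (compute_ms + io_wait_ms)

print(f"Slower server: {io_share(compute_ms=80, io_wait_ms=20):.0%} of time in I/O")
print(f"Faster server: {io_share(compute_ms=8, io_wait_ms=20):.0%} of time in I/O")
```

With a 10x faster processor and unchanged I/O, the share of time spent waiting on I/O jumps from one fifth to roughly seven tenths of the total.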

Peripheral Component Interconnect (PCI)
Having established that computers need to perform some form of I/O to various devices, at the heart of many I/O and networking connectivity solutions is the Peripheral Component Interconnect (PCI) interface. PCI is an industry standard that specifies the chipsets used to communicate between CPUs and memory and the outside world of I/O and networking device peripherals.

Figure 3 shows an example of multiple servers or blades, each with dedicated Fibre Channel (FC) and Ethernet adapters (there could be two or more for redundancy). Simply put, the more servers and devices to attach to, the more adapters, cabling and complexity, particularly for blade servers and dense rack mount systems.
PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 3 Dedicated PCI adapters for I/O and networking devices

Figure 4 shows an example of a PCI implementation including various components such as bridges, adapter slots, and adapter types. PCIe leverages multiple serial unidirectional point to point links, known as lanes, in contrast to traditional PCI, which used a parallel bus design.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)

Figure 4 PCI IOV Single Root Configuration Example

In traditional PCI, bus width varied from 32 to 64 bits; in PCIe, the number of lanes combined with the PCIe version and signaling rate determines performance. PCIe interfaces can have 1, 2, 4, 8, 16, or 32 lanes for data movement, depending on card or adapter format and form factor. For example, PCI and PCIx performance can be up to 528 MB per second with a 64-bit, 66 MHz signaling rate, while PCIe is capable of over 4 GB per second (e.g., 32 Gbit per second) in each direction using 16 lanes for high-end servers.
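The PCIe figure above can be reproduced with back-of-the-envelope math, assuming first-generation PCIe signaling (2.5 GT/s per lane with 8b/10b encoding, which yields roughly 250 MB/s of payload per lane per direction):

```python
# Back-of-the-envelope math behind the PCIe bandwidth cited above, assuming
# first-generation PCIe: 2.5 GT/s per lane, 8b/10b encoding (8 data bits
# per 10 bits on the wire), per direction.

def pcie_gen1_mb_per_sec(lanes: int) -> float:
    transfers_per_sec = 2.5e9               # 2.5 GT/s, one bit per transfer
    data_bits = transfers_per_sec * 8 / 10  # strip the 8b/10b encoding overhead
    data_bytes = data_bits / 8
    return data_bytes * lanes / 1e6         # megabytes per second, per direction

print(pcie_gen1_mb_per_sec(1))   # ~250 MB/s per lane
print(pcie_gen1_mb_per_sec(16))  # ~4000 MB/s, the ~4 GB/s figure cited above
```

Later PCIe generations raise the per-lane signaling rate, so the same lane counts deliver correspondingly more bandwidth.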

The importance of PCIe and its predecessors is a shift from multiple vendors’ different proprietary interconnects for attaching peripherals to servers. For the most part, vendors have shifted to supporting PCIe or early generations of PCI in some form, ranging from native internal on laptops and workstations to I/O, networking, and peripheral slots on larger servers.

The most current version of PCI, as defined by the PCI Special Interest Group (PCISIG), is PCI Express (PCIe). Backwards compatibility exists by bridging previous generations, including PCIx and PCI, off a native PCIe bus or, in the past, bridging a PCIe bus to a PCIx native implementation. Beyond speed and bus width differences for the various generations and implementations, PCI adapters also are available in several form factors and applications.

Traditional PCI was generally limited to a main processor or was internal to a single computer, but current generations of PCI Express (PCIe) include support for PCI Special Interest Group (PCI-SIG) I/O virtualization (IOV), enabling the PCI bus to be extended to distances of a few feet. Compared to local area networking, storage interconnects, and other I/O connectivity technologies, a few feet is a very short distance; but compared to the previous limit of a few inches, extended PCIe provides the ability for improved sharing of I/O and networking interconnects.

I/O VIRTUALIZATION (IOV)
On a traditional physical server, the operating system sees one or more instances of Fibre Channel and Ethernet adapters even if only a single physical adapter, such as an InfiniBand HCA, is installed in a PCI or PCIe slot. In the case of a virtualized server, for example Microsoft Hyper-V or VMware ESX/vSphere, the hypervisor is able to see and share a single physical adapter, or multiple adapters for redundancy and performance, with guest operating systems. The guest systems see what appears to be a standard SAS, FC or Ethernet adapter or NIC using standard plug-and-play drivers.

Virtual HBAs or virtual network interface cards (NICs) and switches are, as their names imply, virtual representations of a physical HBA or NIC, similar to how a virtual machine emulates a physical machine with a virtual server. With a virtual HBA or NIC, physical NIC resources are carved up and allocated to virtual machines, but instead of hosting a guest operating system like Windows, UNIX, or Linux, a SAS or FC HBA, FCoE converged network adapter (CNA) or Ethernet NIC is presented.

In addition to virtual or software-based NICs, adapters, and switches found in server virtualization implementations, virtual LAN (VLAN), virtual SAN (VSAN), and virtual private network (VPN) are tools for providing abstraction and isolation or segmentation of physical resources. Using emulation and abstraction capabilities, various segments or sub networks can be physically connected yet logically isolated for management, performance, and security purposes. Some form of routing or gateway functionality enables various network segments or virtual networks to communicate with each other when appropriate security is met.

PCI-SIG IOV
PCI SIG IOV consists of a PCIe bridge attached to a PCI root complex along with an attachment to a separate PCI enclosure (Figure 5). Other components and facilities include address translation services (ATS), single-root IOV (SR IOV), and multi-root IOV (MR IOV). ATS enables performance to be optimized between an I/O device and a server's I/O memory management. Single-root SR IOV enables multiple guest operating systems to access a single I/O device simultaneously, without having to rely on a hypervisor for a virtual HBA or NIC.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)

Figure 5 PCI SIG IOV

The benefit is that physical adapter cards, located in a physically separate enclosure, can be shared within a single physical server without incurring any potential I/O overhead via virtualization software infrastructure. MR IOV is the next step, enabling a PCIe or SR IOV device to be accessed through a shared PCIe fabric across different physically separated servers and PCIe adapter enclosures. The benefit is increased sharing of physical adapters across multiple servers and operating systems, not to mention simplified cabling, reduced complexity and improved resource utilization.

PCI SIG IOV (C) 2009 The Green and Virtual Data Center (CRC)
Figure 6 PCI SIG MR IOV

Figure 6 shows an example of a PCIe switched environment, where two physically separate servers or blade servers attach to an external PCIe enclosure or card cage for attachment to PCIe, PCIx, or PCI devices. Instead of the adapter cards physically plugging into each server, a high-performance, short-distance cable connects each server's PCI root complex via a PCIe bridge port to a PCIe bridge port in the enclosure device.

In Figure 6, either SR IOV or MR IOV can take place, depending on specific PCIe firmware, server hardware, operating system, devices, and associated drivers and management software. In an SR IOV example, each server has access to some number of dedicated adapters in the external card cage, for example InfiniBand, Fibre Channel, Ethernet, or Fibre Channel over Ethernet (FCoE) and converged network adapters (CNAs), also known as HBAs. SR IOV implementations do not allow different physical servers to share adapter cards. MR IOV builds on SR IOV by enabling multiple physical servers to access and share PCI devices such as HBAs and NICs safely and transparently.
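The sharing-rule difference can be summarized in a toy model. The structure and names below are illustrative assumptions, not anything from the PCI-SIG specification:

```python
# Toy model (names and structure assumed, not from the PCI-SIG spec) of the
# sharing rules described above: SR IOV dedicates an adapter in the card
# cage to one physical server, while MR IOV allows safe sharing.
from typing import Optional

def may_attach(mode: str, current_owner: Optional[str], server: str) -> bool:
    if mode == "SR-IOV":
        # Device is usable only by the single physical server that owns it
        # (its guest operating systems share it via the virtual functions).
        return current_owner is None or current_owner == server
    if mode == "MR-IOV":
        # Multiple physical servers may safely share the device.
        return True
    raise ValueError(f"unknown mode: {mode}")

print(may_attach("SR-IOV", "server1", "server2"))  # False
print(may_attach("MR-IOV", "server1", "server2"))  # True
```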

The primary benefit of PCI IOV is to improve utilization of PCI devices, including adapters or mezzanine cards, as well as to enable performance and availability for slot-constrained and physical footprint or form factor-challenged servers. Caveats of PCI IOV are distance limitations and the need for hardware, firmware, operating system, and management software support to enable safe and transparent sharing of PCI devices. Examples of PCIe IOV vendors include Aprius, NextIO and Virtensys among others.

InfiniBand IOV
InfiniBand-based IOV solutions are an alternative to Ethernet-based solutions. Essentially, InfiniBand approaches are similar, if not identical, to converged Ethernet approaches including FCoE, the difference being InfiniBand as the network transport. InfiniBand HCAs with special firmware are installed into servers, which then see a Fibre Channel HBA and an Ethernet NIC from a single physical adapter. The InfiniBand HCA also attaches to a switch or director that in turn attaches to Fibre Channel SAN or Ethernet LAN networks.

The value of InfiniBand converged networks is that they exist today, and they can be used for consolidation as well as to boost performance and availability. InfiniBand IOV also provides an alternative for those who choose not to deploy Ethernet.

From a power, cooling, floor-space or footprint standpoint, converged networks can be used for consolidation to reduce the total number of adapters and the associated power and cooling. In addition to removing unneeded adapters without loss of functionality, converged networks also free up or allow a reduction in the amount of cabling, which can improve airflow for cooling, resulting in additional energy efficiency. An example of a vendor using InfiniBand as a platform for I/O virtualization is Xsigo.

General takeaway points include the following:

  • Minimize the impact of I/O delays to applications, servers, storage, and networks
  • Do more with what you have, including improving utilization and performance
  • Consider latency, effective bandwidth, and availability in addition to cost
  • Apply the appropriate type and tiered I/O and networking to the task at hand
  • I/O operations and connectivity are being virtualized to simplify management
  • Convergence of networking transports and protocols continues to evolve
  • PCIe IOV is complementary to converged networking including FCoE

Moving forward, a revolutionary new technology may emerge that finally eliminates the need for I/O operations. However, until that time, or at least for the foreseeable future, several things can be done to minimize the impact of I/O for local and remote networking as well as to simplify connectivity.

PCIe Fundamentals Server Storage I/O Network Essentials

Learn more about IOV, converged networks, LAN, SAN, MAN and WAN related topics in Chapter 9 (Networking with your servers and storage) of The Green and Virtual Data Center (CRC) as well as in Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier).

Ok, nuff said.

Cheers gs


Should Everything Be Virtualized?

Storage I/O trends

Should everything, that is all servers, storage and I/O along with facilities, be virtualized?

The answer, not surprisingly, is: it depends!

Denny Cherry (aka Mrdenny) over at ITKE did a great recent post about applications not being virtualized, particularly databases. On some of the points or themes we are on the same or a similar page, while on others we differ slightly, though not by much.

Unfortunately, consolidation is commonly misunderstood to be the sole function or value proposition of server virtualization, given its first-wave focus. I agree that not all applications or servers should be consolidated (note that I did not say virtualized).

From a consolidation standpoint, the emphasis is often on boosting resource use to cut physical hardware and management costs by increasing the number of virtual machines (VMs) per physical machine (PM). Ironically, while VMs using VMware, Microsoft Hyper-V, Citrix/Xen among others can leverage a common gold image for cloning or rapid provisioning, there are still separate operating system instances and applications that need to be managed for each VM.

Sure, VM tools from the hypervisor vendors along with third-party vendors help with these tasks, and storage vendor tools including dedupe and thin provisioning help to cut the data footprint impact of these multiple images. However, there are still multiple images to manage, providing a future opportunity for further cost and management reduction (more on that in a different post).

Getting back on track:

Some reasons that not all servers or applications can be consolidated include, among others:

  • Performance, response time, latency and Quality of Service (QoS)
  • Security requirements including keeping customers or applications separate
  • Vendor support of software on virtual or consolidated servers
  • Financial where different departments own hardware or software
  • Internal political or organizational barriers and turf wars

On the other hand, for those who see virtualization as enabling agility and flexibility, that is, life beyond consolidation, there are many deployment opportunities for virtualization (note that I did not say consolidation). For some environments and applications, the emphasis can be on performance, quality of service (QoS) and other service characteristics, where the ratio of VMs to PMs will be much lower, if not one to one. This is where Mrdenny and I are essentially on the same page, perhaps saying it differently, with plenty of caveats and clarification needed of course.

My view is that in life beyond consolidation, many more servers or applications can be virtualized than might otherwise be hosted by VMs (note that I did not say consolidated). For example, instead of a high ratio of VMs to PMs, a lower number, and for some workloads or applications, even one VM per PM can be leveraged with a focus beyond basic CPU use.

Yes, you read that correctly: I said why not configure some VMs on a one-to-one VM-to-PM basis!

Here's the premise: today's wave or focus is on maximizing the number of VMs and/or reducing the number of physical machines to cut capital and operating costs for under-utilized applications and servers, thus the move to stuff as many VMs onto a PM as possible.

However, for those applications that cannot be consolidated as outlined above, there is still a benefit to having a VM dedicated to a PM. For example, dedicating a PM (blade, server or perhaps core) allows performance and QoS aims to be met while still providing operational and infrastructure resource management (IRM), DCIM or ITSM flexibility and agility.

Meanwhile, during busy periods an application such as a database server could have its own PM, yet during off-hours some other VM could be moved onto that PM for backup or other IRM/DCIM/ITSM activities. Likewise, by having the VM under the database paired with a dedicated PM, the application could be moved proactively for maintenance or, in a clustered HA scenario, to support BC/DR.

What can and should be done?
First and foremost, decide how many VMs per PM is the right number for your environment and different applications to meet your particular requirements and business needs.

Identify various VM-to-PM ratios to align with different application service requirements. For example, some applications may run on virtual environments with a higher number of VMs per PM, others with a lower number, and some with a one-VM-per-PM allocation.
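As a sketch, tiers like the following could capture that alignment. The tier names and ratios below are illustrative assumptions, not recommendations:

```python
# Hypothetical service tiers (names and ratios are illustrative assumptions)
# showing how VM-to-PM ratios can align with service requirements.
import math

VM_PM_RATIO = {
    "tier1_latency_sensitive": 1,   # e.g., a busy database: one VM per PM
    "tier2_general_purpose": 8,     # typical consolidation candidates
    "tier3_low_demand": 20,         # dev/test or lightly used servers
}

def pms_needed(tier: str, vm_count: int) -> int:
    """Physical machines required to host vm_count VMs at the tier's ratio."""
    return math.ceil(vm_count / VM_PM_RATIO[tier])

print(pms_needed("tier1_latency_sensitive", 4))  # 4
print(pms_needed("tier3_low_demand", 50))        # 3
```

The one-to-one tier preserves QoS for demanding applications while still gaining the agility benefits (mobility, HA, BC/DR) of being virtualized.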

Certainly, for various reasons, there will be a need to keep some applications on a dedicated PM without introducing a hypervisor and VM; however, many applications and servers can benefit from virtualization (again, note that I did not say consolidation) for agility, flexibility, BC/DR, HA and ease of IRM, assuming the costs work in your favor.

Additional general to do or action items include among others:

  • Look beyond CPU use also factoring in memory and I/O performance
  • Keep response time or latency in perspective as part of performance
  • More and faster memory is important for VMs as well as for applications including databases
  • High utilization may not show high hit rates or effectiveness of resource usage
  • Fast servers need fast memory, fast I/O and fast storage systems
  • Establish tiers of virtual and physical servers to meet different service requirements
  • See efficiency and optimization as more than simply driving up utilization to cut costs
  • Productivity and improved QoS are also tenets of an efficient and optimized environment

These are themes among others that are covered in chapters 3 (What Defines a Next-Generation and Virtual Data Center?), 4 (IT Infrastructure Resource Management), 5 (Measurement, Metrics, and Management of IT Resources), and 7 (Servers—Physical, Virtual, and Software) in my book "The Green and Virtual Data Center" (CRC), which you can learn more about here.

Welcome to life beyond consolidation, the next wave of desktop, server, storage and IO virtualization along with the many new and expanded opportunities!

Ok, nuff said.

Cheers gs


PUE, Are you Managing Power, Energy or Productivity?

With a renewed focus on Green IT, including energy efficiency and optimization of servers, storage, networks and facilities, is your focus on managing power, energy, or productivity?

For example, do you use or are you interested in metrics such as The Green Grid PUE or 80 Plus efficient power supplies, along with initiatives such as EPA Energy Star for Servers and the emerging Energy Star for Data Center Storage, in terms of energy usage?

Or are you interested in productivity, such as the amount of work or activity that can be done in a given amount of time, or how much information can be stored in a given footprint (power, cooling, floor space, budget, management)?

For many organizations, there tends to be a focus on both managing power and managing productivity. The two are, or should be, interrelated; however, there are some disconnects in emphasis and metrics. For example, The Green Grid PUE is a macro, facilities-centric metric that does not show the productivity, quality or measure of services being delivered by a data center or information factory. Instead, PUE provides a gauge of how efficient the habitat, that is, the building and power distribution along with cooling, is relative to the total energy consumption of the IT equipment.

As a refresher, PUE is a macro metric that is essentially a ratio of how much total power or energy goes into a facility vs. the amount of energy used by the IT equipment. For example, if 12 kW (smaller room or site) or 12 MW (larger site) are required to power an IT data center or computer room, and of that energy load 6 kW or 6 MW respectively reach the IT equipment, the PUE would be 2. A PUE of 2 is an indicator that 50% of the energy going to power a facility or computer room goes towards IT equipment (servers, storage, networks, telecom and related equipment), with the balance going towards running the facility or environment, of which HVAC/cooling has typically been the highest percentage.
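The arithmetic from that example can be sketched as:

```python
# PUE as described above: total facility energy divided by the energy
# reaching the IT equipment. A facility drawing 12 kW with 6 kW to IT gear
# scores a PUE of 2.0.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

def it_share(pue_value: float) -> float:
    """Fraction of facility energy that reaches the IT equipment."""
    return 1 / pue_value

print(pue(12, 6))     # 2.0
print(it_share(2.0))  # 0.5, i.e., 50% of the energy goes to IT equipment
```

A lower PUE (closer to 1.0) means a larger share of the facility's energy reaches the IT equipment rather than the supporting infrastructure.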

In the case of EPA Energy Star for Data Centers, which initially is focused on habitat or facility efficiency, the answer is measuring and managing energy use and facility efficiency as opposed to productivity or useful work. The metric for EPA Energy Star for Data Centers initially will be Energy Usage Effectiveness (EUE), which will be used to calculate a rating for a data center facility. Those data centers in the top 25 percent will qualify for Energy Star certification.

Note the word energy, and not power, which means that the data center macro metric based on The Green Grid PUE rating looks at all sources of energy used by a data center, not just electrical power. What this means is that macro, holistic facility energy consumption could be a combination of electrical power, diesel, propane, natural gas or other fuel sources used to generate power for IT equipment, HVAC/cooling and other needs. By using a metric that factors in all energy sources, a facility that uses solar, radiant, heat pumps, economizers or other techniques to reduce demands on energy will achieve a better rating.

By using a macro metric such as EUE or PUE (ratio = Total_Power_Used / IT_Power_Needs), a starting point is available to decide and compare the efficiency and cost to power or energize a facility or room, also known as a habitat for technology.

Managing Productivity of Information Factories (E.g. Data Centers)
What EUE and PUE do not reflect or indicate is how much data is processed, moved and stored by servers, storage and networks within a facility. At the other extreme from macro metrics are micro or component metrics that gauge energy usage on an individual device basis. Some of these micro metrics have activity or productivity measurements associated with them; some do not. Where these leave a big gap, and an opportunity, is in filling the span between the macro and the micro.

This is where work is being done by various industry groups, including SNIA GSI, SPC and SPEC, along with EPA Energy Star, to move beyond macro PUE indicators to more granular effectiveness and efficiency metrics that reflect productivity. Ultimately, productivity is important to gauge the return on investment and business value of how much data can be processed by servers, moved via networks, or stored on storage devices in a given energy footprint or cost.

In Figure 1 are shown four basic approaches (in addition to doing nothing) to energy efficiency. One approach is to avoid energy usage, similar to following a rationing model, but this approach will affect the amount of work that can be accomplished. Another approach is to do more work using the same amount of energy, boosting energy efficiency, or to do the same amount of work (or store the same data) with less energy.

Figure 1: The Many Faces of Energy Efficiency. Source: The Green and Virtual Data Center (CRC)

The energy efficiency gap is the difference between the amount of work accomplished or information stored in a given footprint and the energy consumed. In other words, the bigger the energy efficiency gap, the better, as seen in the fourth scenario: doing more work or storing more information in a smaller footprint using less energy. Click here to read more about shifting from energy avoidance to energy efficiency.

Watch for new metrics looking at productivity and activity for servers, storage and networks, ranging from MHz or GHz per watt, transactions or IOPS per watt, bandwidth, frames or packets processed per watt, or capacity stored per watt in a given footprint. One of the confusing metrics is Gbytes or Tbytes per watt, in that it can mean storage capacity or bandwidth; thus, understand the context of the metric. Likewise watch for metrics that reflect energy usage for active along with inactive, including idle or dormant storage common with archives, backup or fixed content data.
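As an illustrative sketch of what such productivity metrics look like (the function names and sample numbers here are hypothetical, not from any benchmark or specification):

```python
def iops_per_watt(iops: float, watts: float) -> float:
    """Activity metric: I/O operations per second per watt, for storage doing active work."""
    return iops / watts


def capacity_per_watt(tbytes_stored: float, watts: float) -> float:
    """Idle/inactive metric: terabytes stored per watt, for archive or dormant data.

    Caution: 'TBytes per watt' elsewhere can also mean bandwidth per watt,
    so always check the context of the metric.
    """
    return tbytes_stored / watts


# Hypothetical systems: an active array vs. a dense archive shelf
print(iops_per_watt(50_000, 1_000))  # 50.0 IOPS per watt
print(capacity_per_watt(480, 600))   # 0.8 TB stored per watt
```

The point is that the two metrics reward opposite behaviors: the first favors storage doing active work, the second favors dense, low-power storage of inactive data, which is why one number alone cannot describe storage efficiency.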

What this all means is that work continues on developing usable and relevant metrics and measurements, not only for macro energy usage, but also to gauge the effectiveness of delivering IT services. The business value proposition of driving efficiency and optimization, including increased productivity along with storing more information in a given footprint, is to support density and business sustainability.

 

Additional resources and where to learn in addition to those mentioned above include:

EPA Energy Star for Data Center Storage

Storage Efficiency and Optimization – The Other Green

Performance = Availability StorageIOblog featured ITKE guest blog

SPC and Storage Benchmarking Games

Shifting from energy avoidance to energy efficiency

Green IT Confusion Continues, Opportunities Missed!

Green Power and Cooling Tools and Calculators

Determining Computer or Server Energy Use

Examples of Green Metrics

Green IT, Power, Energy and Related Tools or Calculators

Chapter 10 (Performance and Capacity Planning)
Resilient Storage Networks (Elsevier)

Chapter 5 (Measurement, Metrics and Management of IT Resources)
The Green and Virtual Data Center (CRC)

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Clouds and Data Loss: Time for CDP (Commonsense Data Protection)?

Today SNIA released a press release pertaining to cloud storage timed to coincide with SNW where we can only presume vendors are talking about their cloud storage stories.

Yet chatter on the coconut wire along with various news (here and here and here) and social media sites is how could cloud storage and information service provider T-Mobile/Microsoft/Sidekick lose customers' data?

Data loss is a dangerous phrase, after all, your data may still be intact somewhere, however if you cannot get to it when needed, that may seem like data loss to you.

There are many types of data loss, including loss of accessibility or availability along with flat out loss. Let me clarify: loss of data availability or accessibility means that somewhere your data is still intact, perhaps off-line on a removable disk, optical media or tape, or at another site on-line, near-line or off-line; it's just that you cannot get to it yet. There is also real data loss, where both your primary copy and backup as well as archive data are lost, stolen, corrupted or never actually protected.

Clouds or managed service providers in general are getting beat up due to some loss of access, availability or actual data loss, however before jumping on that bandwagon and pointing fingers at the service, how about a step back for a minute. Granted, given all of the cloud hype and proliferation of managed service offerings on the web (excuse me cloud), there is a bit of a lightning rod backlash or see I told you so approach.

What's different about this story compared to prior disruptions with Amazon, Google, Blackberry among others is that unlike those cases, where access to information or services ranging from calendars, email, contacts or other documents was disrupted for a period of time, it sounds as though data may have been lost.

Lost data you say? How can you lose data? After all, there are copies of copies of data that have been snapshot, replicated and deduplicated across different tiers of storage, right?

Certainly anyone involved in data management or data protection is asking the question: why not go back to a snapshot copy, replicated volume, or backup copy on disk or tape?

Needless to say, finger pointing aerobics are or will be in full swing. Instead, lets ask the question, is it time for CDP as in Commonsense Data Protection?

However, rather than pointing blame or spouting off about how bad clouds are, or that they are getting an unfair shake and undue coverage, keep in mind that just because there might be a few bad ones, not all clouds are bad, particularly in light of recent outages.

I can think of many ways to actually lose data; however, to totally lose data does not require a technology failure, it can be something much simpler, and it is equally applicable to cloud, virtual and physical data centers and storage environments, from the largest to the smallest to the consumer. It's simple, common sense, best practices: making copies of all data and keeping extra copies around somewhere, with more frequent or recent data having copies readily available.

Some trends I'm seeing include among others:

  • Low cost craze leveraging free or near free services and products
  • Cloud hype and cloud bashing, and the need to discuss the wide area between those extremes
  • Renewed need for basic data protection including BC/DR, HA, backup and security
  • Opportunity to re-architect data protection in conjunction with other initiatives
  • Lack of adequate funding for continued and proactive data protection

Just to be safe, let's revisit some common data protection best practices:

  • Learn from mistakes, preferably during testing, with the aim of not repeating them
  • Most disasters in IT and elsewhere are the result of a chain of events not being contained
  • RAID is not a replacement for backup, it simply provides availability or accessibility
  • Likewise, mirroring or replication by themselves are not a replacement for backup.
  • Use point in time RPO based data protection such as snapshots or backup with replication
  • Maintain a master backup or gold copy that can be used to restore to a given point of time
  • Keep backup on another medium, also protect backup catalog or other configuration data
  • If using deduplication, make sure that indexes/dictionary or metadata is also protected.
  • Moving your data into the cloud is not a replacement for a data protection strategy
  • Test restoration of backed up data both locally, as well as from cloud services
  • Employ data protection management (DPM) tools for event correlation and analysis
  • Data stored in clouds need to be part of a BC/DR and overall data protection strategy
  • Have extra copy of data placed in clouds kept in alternate location as part of BC/DR
  • Ask yourself, what will you do when your cloud data goes away (note it's not if, it's when)
  • Combine multiple layers or rings of defense and assume what can break will break
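
As a simple example of the test-your-restores item in the list above, here is a minimal sketch (the directory layout and function names are hypothetical, and this is no replacement for proper DPM tools) that compares checksums of source files against their restored copies:

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Checksum a file in chunks so large backup files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_restore(source_dir: Path, restored_dir: Path) -> list[str]:
    """Compare every file under source_dir against its restored copy; return mismatches."""
    problems = []
    for src in source_dir.rglob("*"):
        if not src.is_file():
            continue
        restored = restored_dir / src.relative_to(source_dir)
        if not restored.exists():
            problems.append(f"missing: {restored}")
        elif sha256_of(src) != sha256_of(restored):
            problems.append(f"corrupt: {restored}")
    return problems
```

An empty list back means the restore matched; anything else is exactly the kind of surprise you want to find during a test, not during a disaster.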

Clouds should not be scary; Clouds do not magically solve all IT or consumer issues. However they can be an effective tool when of high caliber as part of a total data protection strategy.

Perhaps this will be a wake up call, a reminder, that it is time to think beyond cost savings and shift back to basic data protection best practices. What good is the best or most advanced technology if you have less than adequate practices or policies? Bottom line, time for Commonsense Data Protection (CDP).

Ok, nuff said for now, I need to go and make sure I have a good removable backup in case my other local copies fail or I'm not able to get to my cloud copies!

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Could Huawei buy Brocade?

Disclosure: I have no connection to Huawei. I own no stock in, nor have I worked for Brocade as an employee; however I did work for three years at SAN vendor INRANGE which was acquired by CNT. However I left to become an industry analyst prior to the acquisition by McData and well before Brocade bought McData. Brocade is not a current client; however I have done speaking events pertaining to general industry trends and perspectives at various Brocade customer events for them in the past.

Is Brocade for sale?

Last week a Wall Street Journal article mentioned Brocade (BRCD) might be for sale.

BRCD has a diverse product portfolio for Fibre Channel, Ethernet along with the emerging Fibre Channel over Ethernet (FCoE) market and a who's who of OEM and channel partners. Why not be for sale? The timing is good for investors, and CEO Mike Klayko and his team have arguably done a good job of shifting and evolving the company.

Generally speaking, let's keep in perspective that everything is always for sale, and in an economy like now, bargains are everywhere. Many businesses are shopping; it's just a matter of how visible the shopping is for a seller or buyer, along with motivations and objectives including shareholder value.

Consequently, the coconut wires are abuzz with talk and speculation of who will buy Brocade or perhaps who Brocade might buy among other Merger and Acquisition (M and A) activity of who will buy who. For example, who might buy BRCD, why not EMC (they sold McData off years ago via IPO), or IBM (they sold some of their networking business to Cisco years ago) or HP (currently an OEM partner of BRCD) as possible buyers?

Last week I posted on twitter a response to a comment about who would want to buy Brocade with a response to the effect of why not a Huawei to which there was some silence except for industry luminary Steve Duplessie (have a look to see what Steve had to say).

Part of being an analyst, IMHO, should be to actually analyze things vs. simply reporting on what others want you to report or what you have read or heard elsewhere. This also means talking about scenarios that are out of the box, or in adjacent boxes from some perspectives, or that might not be in line with traditional thinking. Sometimes this means breaking away and thinking and saying what may not be obvious or practical. Having said that, let's take a step back for a moment as to why Brocade may or may not be for sale and who might or may not be interested in them.

IMHO, it has a lot to do with Cisco, and not just because Brocade sees no opportunity to continue competing with the 800lb gorilla of LAN/MAN networking that has moved into Brocade's stronghold of storage network SANs. Cisco is upsetting the table or apple cart with its server partners IBM, Dell, HP, Oracle/Sun and others by testing the waters of the server world with their UCS. So far I see this as something akin to a threat testing the defenses of a target before actually attacking full out.

In other words, checking to see how the opposition responds, what defenses are put up, collecting G2 or intelligence, as well as how the rest of the world or industry might respond to an all out assault or shift of power or control. Of course, HP, IBM, Dell and Sun/Oracle will not let this move into their revenue and account control go unnoticed, with initial counter announcements having been made, some re-emphasizing relationships with Brocade along with its recent acquisition of Ethernet/IP vendor Foundry.

Now what does this have to do with Brocade potentially being sold and why the title involving Huawei?

Many of the recent industry acquisitions have been focused on shoring up technology or intellectual property (IP), eliminating a competitor, or simply taking advantage of market conditions. For example, Data Domain was sold to EMC in a bidding war with NetApp, HP bought IBRIX, Oracle bought or is trying to buy Sun, Oracle also bought Virtual Iron, Dell bought Perot after HP bought EDS a year or so ago, while Xerox bought ACS, and so the M and A game continues among other deals.

Some of the deals are strategic, many being tactical. Brocade being bought I would put in the category of a strategic scenario, a bargaining chip or even pawn if you prefer in a much bigger game that is about more than switches, directors, HBAs, LANs, SANs, MANs, WANs, POTS and PANs (Checkout my book “Resilient Storage Networks”-Elsevier)!

So with conversations focused around Cisco expanding into servers to control the data center discussion, mindset, thinking, budgets and decision making, why wouldn't an HP, IBM, Dell, let alone a NetApp, Oracle/Sun or even EMC, want to buy Brocade as a bargaining chip in a bigger game? Why not a Ciena (they just bought some of Nortel's assets), Juniper or 3Com (more of a merger of equals to fight Cisco), Microsoft (might upset their partner Cisco) or Fujitsu (their telco group, that is) among others?

Then why not Huawei, a company some may have heard of, one that others may not have.

Who is Huawei you might ask?

Simple, they are a very large IT solutions provider who is also a large player in China with global operations, including R&D in North America and many partnerships with U.S. vendors. By rough comparison, Cisco's most recently reported annual revenue is about $36.1B (all figures USD), BRCD about $1.5B, Juniper about $3.5B, 3Com about $1.3B, and Huawei about $23B with a year over year sales increase of 45%. Huawei has previous partnerships with storage vendors including Symantec and FalconStor among others. Huawei also has had a partnership with 3Com (H3C), a company that was the first of the LAN vendors to get into SANs (prematurely), beating Cisco easily by several years.

Sure there would be many hurdles and issues, similar to the ones CNT and INRANGE had to overcome, or McData and CNT, or Brocade and McData among others. However, in the much bigger game of IT account and thus budget control played by HP, IBM, and Sun/Oracle among others, wouldn't maintaining a dual source for customers' networking needs make sense, or at least serve as a check to Cisco expansion efforts? If nothing else, maintaining the status quo in the industry for now, or, if the rules and game are changing, wouldn't some of the bigger vendors want to get closer to the markets where Huawei is seeing rapid growth?

Does this mean that Brocade could be bought? Sure.
Does this mean Brocade cannot compete or is a sign of defeat? I don’t think so.
Does this mean that Brocade could end up buying or merging with someone else? Sure, why not.
Or, is it possible that someone like Huawei could end up buying Brocade? Why not!

Now, if Huawei were to buy Brocade, which begs the question for fun, could they be renamed or spun off as a division called HuaweiCade or HuaCadeWei? Anything is possible when you look outside the box.

Nuff said for now, food for thought.

Cheers – gs

Greg Schulz – StorageIO, Author “The Green and Virtual Data Center” (CRC)

Poll: Whats Your Take on FTC Guidelines For Bloggers?

If you have not heard or read yet, the Federal Trade Commission (FTC) last week released new guidelines pertaining to blogger (or other social media) disclosure of if they are being paid, receiving free products or services, or simply had their costs covered to attend an event that they will be writing, posting or blogging about.

Not surprisingly, there are those who are up in arms, those that are cheering that it's about time, and everyone else trying to figure out what the new rules mean, who they apply to, and when. For some, I expect to see a rash of disclosures from those not sure what it means or playing it safe, while others continue to do what they have been doing: business, or blogging, or both as usual. As with many things, not all bloggers get paid or receive remuneration (compensation in some shape or form) for what they write or blog; however, there are some that do, and as is often the case, a few bad apples turn a good thing into a problem or black eye for everyone else.

Here’s a couple of links for some background:
Discussion over at StorageMonkeys.com pertaining to IT/Storage Analysts
Discussion at Blogher.com what the FTC guides mean to you
FTC blogger guidelines

I interpret the new FTC guidelines as pertaining to me or anyone else who has a blog, regardless of whether they are a social media elite professional or a just for fun blogger; whether they blog on their own time, for work or other purposes; for profit; as a journalist, reporter or freelance writer, consultant or contractor, vendor or customer. My view, and it's just that, a view, is that blogs, along with other forms of social media, are tools for communication, collaborating and conversation. Thus, I have a blog, twitter, website, facebook, linkedin along with having material appear in print, on-line as well as in person; all are simply different means for interacting and communications.

As with any new communication venue, there is an era of what some might call wide open use, such as we are seeing with social media mediums today, as with the web in general in the past, not to mention print, TV or radio before that.

I’m reading into these guidelines as a maturing process and acknowledgement that social media including blogs have now emerged into a viable and full fledged communication medium that consumers utilize for making decisions, thus guides need to be in place.

I, like other bloggers, am wondering about the details, including when to disclose something and how the guidelines will be enforced, among other questions; that is, unless you are one that does not believe the guidelines apply to yourself.

With all of this in mind, here’s a new poll, what’s your take on the FTC guidelines?

As for my own disclosures, look for them in white papers, articles, blogs and other venues as applicable.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Poll: What Do You Think of IT Clouds?


IT clouds (compute, applications, storage, and services) are a popular topic for discussion with some people being entirely sold on them as the way of the future, while others totally dismissing them, meanwhile, there’s plenty of thoughts in between.

I recently shared some of my thoughts in this blog post about IT clouds; now what's your take (your identity will remain confidential)?

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


The function of XaaS(X) – Pick a letter

Remember the xSP era where X was I for ISP (Internet Service Provider) or M for Managed Service Provider (MSP) or S for Storage Service Provider, part of buzzword bingo?

That was similar to the xLM craze where X could have been I for Information Lifecycle Management (ILM), D for Data Lifecycle Management (DLM) and so forth where even someone tried to register the term ILM and failed instead of grabbing something like XLM, lest I digress.

Fast forward to today. Given the widespread use of anything SaaS among other XaaS terms, let's have a quick and perhaps fun look at some of the different usages of the function XaaS(X) in the IT industry today.

By no means is this an exhaustive list, feel free to comment with others, the more the merrier. Using the Basic English alphabet without numbers or extended character sets, here are some possibilities among others (some are and continue to be used in the industry):

A = Analyst, Application, Archive, Audit or Authentication
B = Backup or Blogger
C = Cloud, Compiler, Compute or Connectivity
D = Data management, Data warehouse, DBA, Dedupe, Development, Disk or Doc management
E = Email, Encryption or Evangelist
F = Files or Freeware
G = Grid or Google
H = Help, Hotline or Hype
I = ILM, Information, Infrastructure, IO or IT
J = Jobs
K = Kbytes
L = Library or Linkedin
M = Mainframe, Marketing, Manufacturing, Media, Memory or Middleware
N = NAS, Networking or Notification
O = Office, Oracle, Optical or Optimization
P = Performance, Petabytes, Platform, Policy, Police, Print or PR
Q = Quality
R = RAID, Replication, Reporter, Research or Rights management
S = SAN, Search, Security, Server, Software, Storage, Support
T = Tape, Technology, Testing, Trade group, Trends or Twittering
U = Unfollow
V = VAR, Virtualization or Vendor
W = Web
X = Xray
Y = Youtube
Z = zSeries or zilla

Feel free to comment with others for the list, and likewise, feel free to share the list.

Cheers gs

Greg Schulz – StorageIO, Author “The Green and Virtual Data Center” (CRC)

Clouds are like Electricity: Dont be Scared


IT clouds (compute, applications, storage, and services) are like electricity in that they can be scary or confusing to some while being enabling or a necessity to others, not to mention being a polarizing force depending on where you sit or how you view them.

As a polarizing force, if you are a cloud crowd cheerleader or evangelist, you might view someone who does not subscribe or share your excitement, views or interpretations as a cynic.

On the other hand, if you are a skeptic, or perhaps scared or even a cynic, you might view anyone who talks about cloud in general or not specific terms as a cheerleader.

I have seen and experienced this electrifying polarization first hand, having been told by cloud crowd cheerleaders or evangelists that I don't like clouds, that I'm a cynic who does not know anything about clouds.

As a funny aside (at least I thought it was funny), I recently asked someone who gave me an earful while trying to convert me into a cloud believer whether they had read any of the chapters in my new book The Green and Virtual Data Center (CRC). The response was NO, and I said, to the effect, too bad, as in the book I talk about how clouds can be complementary to existing IT resources, being another tier of servers, storage, applications, facilities and IT services.

On the other hand, and this might be funny for some of the cloud crowd, when I bring up tiered IT resources including servers, storage, applications and facilities, as well as where or how clouds can fit to complement IT, I have been told by cynics or naysayers that I'm a cloud cheerleader.

Wow, talk about polarized sides!

Now, what about all those that are somewhere in the middle, the skeptics who might see value in IT clouds for different scenarios and may in fact already be using clouds (depending upon someone's definition)?

For those in the middle, whether vendors, vars, media, press, analysts, consultants, IT professionals, investors or others, they can easily be misunderstood, misrepresented, and a missed opportunity, perhaps even lamented by those on either of the two extremes (e.g. cloud crowd cheerleaders or true skeptic naysayers).

Time for some education, don’t be scared, however be careful!

When I worked for an electric power generating and transmission utility, an important lesson was not to be scared of electricity, but rather to be educated: what to do and what not to do in different situations, including in the actual power plant or substation. When in the plant, or at a substation, which I visited in support of the applications and systems I was developing or maintaining, I was taught certain things. For example, number one, don't touch certain things; number two, if you fall, don't grab anything. The fall may or may not hurt you, let alone the sudden stop wherever you land; however, if you grab something, that might kill you, and you may not be able to let go, further injuring yourself. This was a challenging thought, as we are taught to grab onto something when falling.

What does this have to do with clouds?

Don't grab and hang on to something if you don't know what you are grabbing, especially if you don't have to.

The cloud crowd can be polarizing and in some ways acts as a lightning rod, drawing the scorn, cynicism, skepticism, lambasting or fun-poking given some of the over the top hype around clouds today. Now granted, not all cloud evangelists, vendors or cheerleaders deserve to be the brunt of this backlash within the industry; however, it comes with the territory.

I'm in the middle, as I pointed out above, when I talk with vendors, vars, media, investors and IT customers. Some I talk with are using clouds (perhaps not compliant with some of the definitions). Some are looking at clouds to move problems or mask issues, others are curious yet skeptical, looking to see where or how they could use clouds to complement their environments. Yet others are scared, however maybe in the future will be more open minded as they become educated and see technologies evolve or shift beyond a fashionable trend.

So it's time for disclosure: I see IT clouds as being complementary, able to co-exist with other IT resources (servers, storage, software). In essence, my view is that clouds are just another tier of IT resources to be used when and where applicable, as opposed to being a complete replacement, or simply ignored.

My point is that cloud computing is another tier of traditional computing or servers, providing different performance, availability, capacity, economic and management attributes compared to other traditional technology delivery vehicles. Same thing with storage, same thing with data centers or hosting sites in general. This also applies to application services, in that a cloud web, email, expense, sales, crm, erp, office or other application is a tier of those same implementations that may exist in a traditional environment. After all, legacy, physical, virtual, grid and cloud IT datacenters all have something in common: they rely on physical servers, storage, networks, software, metrics and management involving people, processes and best practices.

Now back to disclosure: I like clouds, however I'm not a cloud cheerleader. I'm a skeptic at times of some over the top hype, yet I also personally use some cloud services and technologies, as well as advise others to leverage cloud services when or where applicable to complement, co-exist with and help enable a green and virtual data center and information factory.

To the cloud crowd cheerleaders, too bad if I don't line up with all of your belief systems or if you perceive me as raining on your parade by being a skeptic, or what you might think of as a cynic and non-believer, even though I use clouds myself.

Likewise, to the true cynics (not skeptics) or naysayers, ease up, I'm not drinking the Kool-Aid of the cheerleaders and evangelists, or at least not in large excessive binge doses. I agree that clouds are not the solution to every IT issue, regardless of what your definition of a cloud happens to be.

To everyone else, regardless of whether you are in the minority or the majority that does not fall into one of the two above groups, I have this to say.

Don't be afraid, don't be scared of clouds; learn to navigate your way around and through the various technologies, techniques, products and services, and identify where they might complement and enable a flexible and scalable resilient IT infrastructure.

Take some time to listen and learn, becoming educated on the different types of clouds (public, private, services, products, architectures, or marketecture), their attributes (compute, storage, applications, services, cost, availability, performance, protocols, functionality) and value proposition.

Look into how cloud technologies and techniques might complement your existing environment to meet specific business objectives. You might find there are fits, you might find there are not; however, have a look and do some research so that you can at least hold your ground if storm clouds roll in.

After all, clouds are just another tier of IT resources to add to your tool box, enabling more efficient and effective IT services delivery. Clouds do not have to be the all or nothing value proposition that often ends up in discussions due to polarized extreme views and definitions or past experiences.

Look at it this way: IT relies on electricity, however electricity needs to be understood and respected, not to mention used in effective ways. You can be scared of electricity, you can be cavalier around it, or it can be part of your environment and an enabler, as long as you know when, where and how to use it, not to mention when not to use it as applicable.

So next time you see a cloud crowd cheerleader, give them a hug, give them a pat on the back, an atta boy or atta girl as they are just doing their jobs, perhaps even following their beliefs and in the line of duty taking a lot of heat from the industry in the pursuit of their work.

On the other hand, as to the cynics and naysayers, they may in fact be using clouds already, perhaps not under the strict definition of some of the chieftains of the cloud crowd.

To everyone else, don't worry, don't be scared of clouds; instead, focus on your business and your IT issues, and look at various tiers of technologies that can serve as an enabler in a cost effective manner.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


StorageIO aka Greg Schulz appears on Infosmack

If you are in the IT industry, and specifically have any interest or tie to data infrastructures, from servers to storage and networking, including hardware, software and services, not to mention virtualization and clouds, InfoSmack and Storage Monkeys should be on your read or listen list.

Recently I was invited to be a guest on the InfoSmack podcast which is about a 50 some minute talk show format around storage, networking, virtualization and related topics.

The topics discussed include Sun and Oracle from a storage standpoint, Solid State Disk (SSD) among others.

Now, a word of caution, InfoSmack is not your typical prim and proper venue, nor is it a low class trash talking production.

It's fun and informative, and the hosts and attendees are not afraid of poking fun at themselves while exploring topics and the story behind the story in a candid, unscripted manner.

Check it out.

Cheers – gs

Greg Schulz – StorageIOblog, twitter @storageio Author “The Green and Virtual Data Center” (CRC)

Blame IT on the UN in NYC this week

This week is UN week in NYC, that annual fall event that results in traffic jams that make normal traffic seem like a breeze.

What with the security lockdowns, sudden road closures, re-routes, news crews, security details and the like, it's a wonder anything gets done. I was in NYC for about 26 hours this week at the Storage Decisions event, where I presented on optimizing for performance and capacity to enable efficient and green storage, as well as recording a video on cloud storage, and experienced the delays first hand.

This is not going to be one of those complain-about-how-I-was-inconvenienced rants, rather a bit of fun.

Consequently, should you have or had any issues this past week, do like others and blame the UN. For example, late for a meeting, presentation, conference call, coffee break or lunch, getting home or to the ballpark, blame it on the UN. Other potential items that you can feel free to blame on the UN in NYC this week include:

  • RAID rebuilds on those large disk drives taking too long
  • Server, workstation, desktop, laptop or iPhone reboots taking too long.
  • Database consistency checks or virus scans taking too long, you know who you can blame!
  • Cannot get a cell phone, landline or wireless connection inside, outside or anywhere?
  • VMotion taking too long to migrate a server, failover not as fast, you know the drill.
  • IT budget scrapped, yet you have to do more, guess who's to blame this week
  • Regulatory compliance, BC/DR, data security have you locked up, yup, that's right!
  • Can't download, upload or access WebEx, FedEx or backup to the cloud, yup, blame it on the UN
  • Can't get a loan or venture capital financing for your startup, it's the UN's fault, right?
  • Your Kindle broke and Amazon took away the books you bought and downloaded?
  • Missed your flight, train or car pool ride in another city, you know the story.
  • Interoperability and vendor finger pointing got you in a bind, yup, it's the UN in NYC that's the issue.
  • Forest fires or dust storms in Australia, ice caps melting at the North Pole, yup, the UN in NYC this week

Look, I was stuck in traffic and made the best of it, listening to Infosmack #20 while doing some emails and making a few calls instead of getting all twisted up about it. I actually like visiting NYC, there is lots to see and do, however it is also nice to move on. For those who have never experienced NYC during UN week, give it a try sometime.

Cheers – gs

Greg Schulz – StorageIOblog, twitter @storageio Author “The Green and Virtual Data Center” (CRC)

Technorati tags: NYC, UN

Storage Efficiency and Optimization – The Other Green

For those of you in the New York City area, I will be presenting live in person at Storage Decisions September 23, 2009 conference The Other Green, Storage Efficiency and Optimization.

Throw out the "green" buzzword, and you are still left with the task of saving or maximizing use of space, power, and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service response time, performance and availability, necessitating faster, energy efficient technologies to achieve optimization objectives.

To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize both on-line active or primary and near-line or secondary storage environments during tough economic times, as well as to position for future growth. After all, there is no such thing as a data recession!

Topics, technologies and techniques that will be discussed include among others:

  • Energy efficiency (strategic) vs. energy avoidance (tactical), and what is different between them
  • Optimization and the need for speed vs. the need for capacity, finding the right balance
  • Metrics and measurements for management insight, and what the industry is doing (or not doing)
  • Tiered storage and tiered access including SSD, FC, SAS, tape, clouds and more
  • Data footprint reduction (archive, compress, dedupe) and thin provision among others
  • Best practices, financial incentives and what you can do today
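To give a feel for the data footprint reduction item above, here is a minimal sketch of fixed-block deduplication, where identical chunks of data are stored only once and identified by a content hash. This is purely illustrative, not any particular vendor's implementation; the 4 KB chunk size and the SHA-256 hash are assumptions for the example, and real products use variable-length chunking, stronger collision handling and metadata management.

```python
import hashlib

def dedupe_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Split data into fixed-size chunks, keep one copy of each unique
    chunk (identified by its SHA-256 digest), and report the
    logical-to-physical space reduction ratio."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    unique = {hashlib.sha256(c).hexdigest() for c in chunks}
    return len(chunks) / len(unique) if unique else 1.0

# Highly repetitive data (e.g. full backups of mostly unchanged files)
# dedupes well; unique data does not.
repetitive = b"A" * 4096 * 100   # 100 identical 4 KB chunks
print(dedupe_ratio(repetitive))  # -> 100.0
```

The same intuition explains why dedupe pays off most for backup and archive tiers (lots of repeated content) and least for already-compressed or encrypted primary data.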

This is a free event for IT professionals; however, I hear space is limited, so learn more and register here.

For those interested in broader IT data center and infrastructure optimization, check out the ongoing seminar series The Infrastructure Optimization and Planning Best Practices (V2.009) – Doing more with less without sacrificing storage, system or network capabilities, which continues September 22, 2009 with a stop in Chicago. This is also a free seminar; register and learn more here or here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Back to School and Dedupe School

Summer is over here in the northern hemisphere and it's back-to-school time.

This coming week I will be the substitute teacher filling in for my friend Mr. Backup in Minneapolis and Toronto for TechTargets Dedupe School. If you are in either city and have not yet signed up, check out the link here to learn more.

Hope to see you this week, or next week at Infrastructure Optimization in Chicago or Storage Decisions in NYC, where I will also be presenting, or teaching if you prefer, as well as listening and learning from the attendees about what's on their minds.

Stay current on other upcoming activities on our events page, as well as see what's new or in the news here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio
