EMC Storage and Management Software Getting FAST

EMC has announced the availability of the first phase of FAST (Fully Automated Storage Tiering) functionality for their Symmetrix VMAX, CLARiiON and Celerra storage systems.

FAST was first previewed earlier this year (see here and here).

Key themes of FAST are leveraging policies to enable automation in large scale environments, doing more with what you have, enabling virtual data centers for traditional, private and public clouds, and enhancing IT economics.

This means enabling performance and capacity planning analysis along with facilitating load balancing and other infrastructure optimization activities to boost productivity, efficiency and resource usage effectiveness, not to mention enabling Green IT.

Is FAST revolutionary? That will depend on whom you talk to or listen to.

Some vendors will jump up and down like Donkey in Shrek wanting to be picked or noticed, claiming to have been the first to implement LUN or file movement inside of storage systems, or built into an operating system, file system or volume manager. Others will claim to have done it via third party information lifecycle management (ILM) software including hierarchical storage management (HSM) tools among others. Ok, fair enough, then let their games begin (or continue) and I will leave it up to the various vendors and their followings to debate who's got what or not.

BTW, anyone remember system managed storage on IBM mainframes, or array based movement in HP AutoRAID among others?

Vendors have also in the past provided built in or third party add on tools for insight and awareness, ranging from capacity or space usage and allocation storage resource management (SRM) tools to performance advisory activity monitors or chargeback among others. For example, hot file analysis and reporting tools have been popular in the past, often operating system specific, for identifying candidate files for placement on SSD or other fast storage. Granted, the tools provided insight and awareness; there was still the time consuming and error prone task of decision making and subsequent data movement, not to mention associated downtime.

What is new here with FAST is the integrated approach: tools that are operating system independent, functionality in the array, availability across different product families and price bands, and optimization for improving user and IT productivity in medium to high end enterprise scale environments.

One of the knocks on previous technology is either the performance impact to an application when its data was moved, or the impact to other applications while data is being moved in the background. Another issue has been avoiding excessive thrashing, where data movement takes performance cycles from production applications, similar to too many snapshots or unoptimized RAID rebuilds running in the background on a storage system lacking sufficient performance capability. Another knock has been that historically, either third party host or appliance based software was needed, or solutions were designed and targeted for workgroup, departmental or small environments.

What is FAST and how is it implemented
FAST is technology for moving data within storage systems (and externally for Celerra) for load balancing along with capacity and performance optimization to meet quality of service (QoS) performance, availability and capacity objectives along with energy and economic initiatives (Figure 1) across different tiers or types of storage devices. For example, data can move from slower SATA disks where a performance bottleneck exists to faster Fibre Channel or SSD devices. Similarly, cold or infrequently accessed data on faster, more expensive storage devices can be marked as a candidate for migration to lower cost SATA devices based on customer policies.
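To make the policy idea concrete, here is a minimal sketch of policy driven tier placement. This is not EMC's actual FAST engine; the thresholds, tier names and functions are illustrative assumptions only.

```python
# Hypothetical sketch of policy driven tiering; not EMC FAST internals.
# Thresholds and tier names are assumptions for illustration.
def recommend_tier(iops, hot_iops=500, cold_iops=50):
    """Map observed LUN activity to a candidate tier per a simple policy."""
    if iops >= hot_iops:
        return "SSD"    # busy LUN: promote to flash
    if iops <= cold_iops:
        return "SATA"   # cold LUN: demote to low cost capacity disk
    return "FC"         # moderate activity stays on Fibre Channel

def migration_candidates(luns):
    """Flag LUNs whose current placement differs from the policy target."""
    return [(name, tier, recommend_tier(iops))
            for name, (tier, iops) in luns.items()
            if recommend_tier(iops) != tier]

luns = {"db01": ("SATA", 900), "logs": ("SSD", 10), "home": ("FC", 200)}
# db01 promoted to SSD, logs demoted to SATA; home stays put
print(migration_candidates(luns))
```

The point is simply that a policy turns raw activity metrics into movement candidates, which the system (or an administrator) can then act on.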

EMC FAST
Figure 1: FAST big picture (Source: EMC)

The premise is that policies defined on activity along with capacity determine when data becomes a candidate for movement. All movement is performed in the background while applications continue accessing data without disruption. This means no stub files, application pauses, timeouts or erratic I/O activity occur while data is being migrated. Another aspect of FAST is that while data movement is performed in the actual storage systems by their respective controllers, EMC management tools identify hot or active LUNs or volumes (files in the case of Celerra) as candidates for moving (Figure 2).

EMC FAST
Figure 2: FAST what it does (Source: EMC)

However, users specify whether they want data moved autonomously or under supervision, enabling a deterministic environment where the storage system and associated management tools make recommendations for administrators to approve before migration occurs. This capability can serve as a safeguard as well as a learn mode, enabling organizations to become comfortable with the technology and its recommendations while applying knowledge of current business dynamics (Figure 3).
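The supervised or learn mode described above can be sketched as a recommendation queue that nothing leaves without administrator approval. The class and field names here are hypothetical, not from any EMC tool.

```python
# Hypothetical sketch of a supervised ("learn mode") migration queue.
# Class and field names are illustrative, not from EMC management tools.
class MigrationPlan:
    def __init__(self):
        self.pending = []   # recommendations awaiting administrator review
        self.approved = []  # movements cleared to run in the background

    def recommend(self, lun, src, dst, reason):
        """System side: propose a movement with its justification."""
        self.pending.append({"lun": lun, "from": src, "to": dst, "reason": reason})

    def review(self, approve):
        """Administrator side: approve(rec) -> bool applies business knowledge."""
        keep = []
        for rec in self.pending:
            (self.approved if approve(rec) else keep).append(rec)
        self.pending = keep

plan = MigrationPlan()
plan.recommend("db01", "SATA", "SSD", "sustained high IOPS")
plan.recommend("scratch", "FC", "SATA", "idle 30 days")
# While building trust in the tool, approve only promotions to SSD:
plan.review(lambda rec: rec["to"] == "SSD")
print(len(plan.approved), len(plan.pending))  # 1 approved, 1 still pending
```

Nothing moves until `review` clears it, which is exactly the safeguard the learn mode provides.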

EMC FAST
Figure 3: The value proposition of FAST (Source: EMC)

FAST is implemented as technology resident or embedded in the EMC VMAX (aka Symmetrix), CLARiiON and Celerra along with external management software tools. In the case of the block (Figure 4) storage systems that support FAST, including the DMX/VMAX and CLARiiON families, data movement is on a LUN or volume basis and within a single storage system. For NAS or file based Celerra storage systems, FAST is implemented using FMA technology enabling movement either within the box or externally to other storage systems on a file basis.

EMC FAST
Figure 4: Example of FAST activity (Source: EMC)

What this means is that data at the LUN or volume level can be moved across different tiers of storage or disk drives within a CLARiiON instance, or within a VMAX instance (e.g. amongst the nodes). For example, Virtual LUNs are a building block leveraged for data movement and migration, combined with external management tools including Navisphere for the CLARiiON and Symmetrix Management Console along with Ionix, all of which have been enhanced.

Note however that initially data is not moved externally between different CLARiiONs or VMAX systems. For external data movement, other existing EMC tools would be deployed. In the case of Celerra, files can be moved within a specific CLARiiON as well as externally across other storage systems. External storage systems that files can be moved across using EMC FMA technology include other Celerras, Centera and ATMOS solutions based upon defined policies.

What do I like most and why?

Integration of management tools provides insight with the ability for users to set up policies as well as approve or intercede with data movement and placement as their specific philosophies dictate. This is key: for those who want to, let the system manage itself, with your supervision of course. For those who prefer to take their time, take simple steps by using the solution initially to provide insight into hot or cold spots, then to help decide what changes to make. Use the solution and adapt it to your specific environment and philosophy. What a concept: a tool that works for you, vs. you working for it.

What don't I like and why?

There is and will remain some confusion about intra and inter box or system data movement and migration, operations that can be done today by other EMC technology for those who need it. For example, I have had questions asking if FAST is nothing more than EMC Invista or some other data mover appliance sitting in front of Symmetrix or CLARiiONs, and the answer is NO. Thus EMC will need to articulate that FAST is both an umbrella term and a product feature set combining the storage system along with associated management tools unique to each of the different storage systems. In addition, there will be confusion, at least at GA, over the lack of support for Symmetrix DMX vs. the supported VMAX. Of course with EMC pricing is always a question, so let's see how this plays out in the market with customer acceptance.

What about the others?

Certainly some will jump up and down claiming ratification of their visions welcoming EMC to the game while forgetting that there were others before them. However, it can also be said that EMC like others who have had LUN and volume movement or cloning capabilities for large scale solutions are taking the next step. Thus I would expect other vendors to continue movement in the same direction with their own unique spin and approach. For others who have in the past made automated tiering their marketing differentiation, I would suggest they come up with some new spins and stories as those functions are about to become table stakes or common feature functionality on a go forward basis.

When and where to use?

In theory, anyone with a Symmetrix/VMAX, CLARiiON or Celerra that supports the new functionality should be a candidate for the capabilities, that is, at least the insight, analysis, monitoring and situational awareness capabilities. Note that does not mean actually enabling the automated movement initially.

While the concept is to enable automated system managed storage (Hmmm, mainframe deja vu anyone?), for those who want to walk before they run, enabling the insight and awareness capabilities can provide valuable information about how resources are being used. The next step would then be to look at the recommendations of the tools, and if you concur with them, take remedial action by telling the system when the movement can occur at your desired time.

For those ready to run, then let it rip and take off as FAST as you want. In either situation, look at FAST for providing insight and situational awareness of hot and cold storage, and where opportunities exist for optimizing and gaining efficiency in how resources are used, all important aspects for enabling a Green and Virtual Data Center, not to mention supporting public and private clouds.

FYI, FTC Disclosure and FWIW

I have done content related projects for EMC in the past (see here); they are not currently a client, nor have they sponsored, underwritten, influenced or remunerated me, utilized third party offshore Swiss, Cayman or South American unnumbered bank accounts, or provided any other reimbursement for this post. However, I did personally sign and hand to Joe Tucci a copy of my book The Green and Virtual Data Center (CRC) ;).

Bottom line

Do I like what EMC is doing with FAST and this approach? Yes.

Do I think there is room for improvement and additional enhancements? Absolutely!

What's my recommendation? Have a look, do your homework and due diligence, and see if it's applicable to your environment while asking other vendors what they will be doing (under NDA if needed).

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

SSD and Storage System Performance

Jacob Gsoedl has a new article over at SearchStorage titled How to add solid-state storage to your enterprise data storage systems.

In his article which includes some commentary by me, Jacob lays out various options on where and how to deploy solid state devices (SSD) in and with enterprise storage systems.

While many vendors have jumped on the latest SSD bandwagon adding flash based devices to storage systems, where and how they implement the technologies varies.

Some vendors take a simplistic approach of qualifying flash SSD devices for attachment to their storage controllers similar to how any other Fibre Channel, SAS or SATA hard disk drive (HDD) would be.

Yet others take a more in depth approach including optimizing controller software, firmware or micro code to leverage flash SSD devices along with addressing wear leveling, read and write performance among other capabilities.

Performance is another area where on paper a flash SSD device might appear to be fast and enable a storage system to be faster.

However, systems that are not optimized for higher throughput and/or increased IOPS needing lower latency may end up placing restrictions on the number of flash SSD devices or imposing other configuration constraints. Even worse is when expected performance improvements are not realized; after all, fast controllers need fast devices, and fast devices need fast controllers.
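The "fast controllers need fast devices" point can be shown with a back of the envelope bottleneck model: a system delivers the lesser of what its controller and its devices can sustain. The numbers below are illustrative assumptions, not vendor data.

```python
# Illustrative bottleneck model (not vendor specifications): aggregate IOPS
# is capped by whichever is slower, the controller or the device pool.
def system_iops(controller_limit, device_count, iops_per_device):
    return min(controller_limit, device_count * iops_per_device)

# Eight flash SSDs behind a controller tuned for spinning disk:
print(system_iops(controller_limit=25_000, device_count=8,
                  iops_per_device=30_000))  # controller caps it at 25,000
```

Despite 240,000 IOPS of raw device capability in this sketch, the controller caps the result, which is why unoptimized systems restrict SSD counts or disappoint on delivered performance.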

RAM and flash based SSD are great enabling technologies for boosting performance and productivity and enabling a green, efficient environment; however, do your homework.

Look at how various vendors implement and support SSD particularly flash based products with enhancements to storage controllers for optimal performance.

Likewise check out the activity of the SNIA Solid State Storage Initiative (SSSI) among other industry trade group or vendor initiatives around enhancing SSD and establishing best practices for it.

Ok, nuff said.

Cheers gs


The other Green Storage: Efficiency and Optimization

Some believe that green storage is specifically designed to reduce power and cooling costs.

The reality is that there are many ways to reduce environmental impact while enhancing the economics of data storage besides simply boosting utilization.

These include optimizing data storage capacity as well as boosting performance to increase productivity per watt of energy used when work needs to be done.

Some approaches require new hardware or software, while others can be accomplished with changes to management, including reconfiguration that leverages insight and awareness of resource needs.

Here are some related links:

The Other Green: Storage Efficiency and Optimization (Videocast)

Energy efficient technology sales depend on the pitch

Performance metrics: Evaluating your data storage efficiency

How to reduce your Data Footprint impact (Podcast)

Optimizing enterprise data storage capacity and performance to reduce your data footprint

Ok, nuff said.

Cheers gs


Green IT and Virtual Data Centers

Green IT and virtual data centers are no fad nor are they limited to large-scale environments.

Paying attention to how resources are used to deliver information services in a flexible, adaptable, energy efficient, environmentally and economically friendly way to boost efficiency and productivity is here to stay.

Read more here in the article I did for the folks over at Enterprise Systems Journal.

Ok, nuff said.

Cheers gs


How to win approval for upgrades: Link them to business benefits

Drew Robb has another good article over at Processor.com about various tips and strategies on how to gain approval for hardware (or software) purchases, with some comments by yours truly.

My tips and advice quoted in the story include linking technology resources to business needs and impact, which may be common sense, however it remains a time tested, effective technique.

Instead of speaking tech talk such as performance, capacity, availability, IOPS, bandwidth, GHz, frames or packets per second, VMs per PM or dedupe ratios, map them to business speak, that is, things that finance, accountants, MBAs or other management personnel understand.

For example, how many transactions at a given response time can be supported by a given type of server, storage or networking device.

Or, put a different way, with a given device, how much work can be done and what is the associated monetary or business benefit.
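One hedged way to make that translation is a cost per transaction figure: total cost of a device over its life divided by the work it does. All figures and parameter names below are hypothetical examples, not benchmarks.

```python
# Hypothetical example of translating tech speak into business speak.
# All figures (device cost, energy cost, TPS, utilization) are made up.
def cost_per_transaction(device_cost, annual_energy_cost, years,
                         tps, utilization=0.5):
    """Total lifetime cost divided by transactions completed over that life."""
    total_cost = device_cost + annual_energy_cost * years
    seconds = 3600 * 24 * 365 * years
    total_txn = tps * utilization * seconds
    return total_cost / total_txn

# A $50,000 array plus $4,000/year energy, over 3 years, sustaining
# 2,000 transactions/second at 50% average load:
print(cost_per_transaction(50_000, 4_000, 3, 2_000))
```

A fraction of a cent per transaction is a number a finance person can compare across proposals, which is the whole point of mapping IOPS and bandwidth to business terms.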

Likewise, if you do not have a capacity plan for servers, storage, I/O and networking along with software and facilities covering performance, availability, capacity and energy demands now is the time to put one in place.

More on capacity and performance planning later; however for now, if you want to learn more, check Chapter 10 (Performance and Capacity Planning) in my book Resilient Storage Networks: Designing Flexible and Scalable Data Infrastructure (Elsevier).

Ok, nuff said.

Cheers gs


Acadia VCE: VMware + Cisco + EMC = Virtual Computing Environment

Was today the day the music died? (click here or here if you are not familiar with the expression)

Add another three letter acronym (TLA) to your IT vocabulary if you are involved with server, storage, networking, virtualization, security and related infrastructure resource management (IRM) topics.

That new TLA is Virtual Computing Environment (VCE), a coalition formed by Cisco and EMC along with partner Intel, together with a joint venture called Acadia, announced today. Of course EMC, who also happens to own VMware for virtualization and RSA for security software tools, brings those to the coalition (read press release here).

For some quick fun, twittervile and the blogosphere have come up with other meanings such as:

VCE = Virtualization Communications Endpoint
VCE = VMware Cisco EMC
VCE = Very Cash Efficient
VCE = VMware Controls Everything
VCE = Virtualization Causes Enthusiasm
VCE = VMware Cisco Exclusive

Ok, so much for some fun, at least for now.

With Cisco, EMC and VMware announcing their new VCE coalition, has this signaled the end of servers, storage, networking, hardware and software for physical, virtual and cloud computing as we know it?

Does this mean all other vendors not in this announcement should pack it up, game over and go home?

The answer in my perspective is NO!

No, the music did not end today!

NO, servers, storage and networking for virtual or cloud environments have not ended.

Also, NO, other vendors do not have to go home today, the game is not over!

However a new game is on: one that some have seen before, while for others it is something new and exciting, perhaps revolutionary or an industry first.

What was announced?
Figure 1 shows a general vision or positioning from the three major players involved along with four tenets or topic areas of focus. Here is a link to a press release where you can read more.

CiscoVirtualizationCoalition.png
Figure 1: Source: Cisco, EMC, VMware

General points include:

  • A new coalition (e.g. VCE) focused on virtual compute for cloud and non cloud environments
  • A new company Acadia owned by EMC and Cisco (1/3 each) along with Intel and VMware
  • A new go to market pre-sales, service and support cross technology domain skill set team
  • Solution bundles or vblocks with technology from Cisco, EMC, Intel and VMware

What are the vblocks and components?
Vblocks are pre-configured (see this link for a 3D model), tested, and supported with a single throat to choke model for streamlined end to end management and acquisition. There are three vblocks or virtual building blocks that include server, storage, I/O networking, and virtualization hypervisor software along with associated IRM software tools.

Cisco is bringing to the game their Unified Compute Solution (UCS) server along with Nexus 1000v and Multilayer Director (MDS) switches, EMC is bringing storage (Symmetrix VMax, CLARiiON and unified storage) along with their RSA security and Ionix IRM tools. VMware is providing their vSphere hypervisors running on Intel based services (via Cisco).

The components include:

  • EMC Ionix management tools and framework – The IRM tools
  • EMC RSA security framework software – The security tools
  • EMC VMware vSphere hypervisor virtualization software – The virtualization layer
  • EMC VMax, CLARiiON and unified storage systems – The storage
  • Cisco Nexus 1000v and MDS switches – The Network and connectivity
  • Cisco Unified Compute Solution (UCS) – The physical servers
  • Services and support – Cross technology domain presales, delivery and professional services

CiscoEMCVMwarevblock.jpg
Figure 2: Source: Cisco vblock (Server, Storage, Networking and Virtualization Software) via Cisco

The three vblock models are:
Vblock0: entry level system due out in 2010 supporting 300 to 800 VMs for initial customer consolidation, private clouds or other diverse applications in small or medium sized businesses. You can think of this as a SAN in a CAN or Data Center in a box with Cisco UCS and Nexus 1000v, EMC unified storage secured by RSA and VMware vSphere.

Vblock1: mid sized building block supporting 800 to 3000 VMs for consolidation and other optimization initiatives using Cisco UCS, Nexus and MDS switches along with EMC CLARiiON storage secured with RSA software hosting VMware hypervisors.

Vblock2: high end system supporting 3000 to 6000 VMs for large scale data center transformation or new virtualization efforts, combining Cisco Unified Computing System (UCS), Nexus 1000v and MDS switches and EMC VMax Symmetrix storage with RSA security software hosting the VMware vSphere hypervisor.
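The three sizing bands above lend themselves to a trivial lookup. The VM ranges come from the announcement as described in this post; the function itself is just an illustrative sketch, not an official sizing tool.

```python
# Sketch mapping a VM count to the announced vblock ranges (per this post).
# Illustrative only; real sizing involves workload profiles, not VM counts.
VBLOCKS = [  # (model, min VMs, max VMs)
    ("Vblock0", 300, 800),
    ("Vblock1", 800, 3000),
    ("Vblock2", 3000, 6000),
]

def pick_vblock(vm_count):
    for model, lo, hi in VBLOCKS:
        if lo <= vm_count <= hi:
            return model  # first matching band wins at the boundaries
    return None  # outside the announced ranges: size it some other way

print(pick_vblock(1200))  # Vblock1
```

Note the bands overlap at their edges (800 and 3000 VMs fall in two ranges), so a real sizing exercise would weigh workload characteristics rather than a raw VM count.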

What does this all mean?
With this move, for some it will add fuel to the campfire that Cisco is moving closer to EMC and or VMware with a pre-nuptial via Acadia. For others, this will be seen as fragmentation for virtualization, particularly if other vendors such as Dell, Fujitsu, HP, IBM and Microsoft among others are kept out of the game, not to mention barriers for their channels of VARs or IT customers.

Acadia is a new company or more precisely, a joint venture being created by major backers EMC and Cisco with minority backers being VMware and Intel.

Like other joint ventures, for example those commonly seen in the airline industry (e.g. a transportation utility) where carriers pool resources, such as SkyTeam whose members include Delta, which had a JV with Air France, owner of KLM, which had an antitrust immunity JV with Northwest (now being digested by Delta).

These joint ventures can range from simple marketing alliances like you see with EMC programs such as their Select program to more formal OEM to ownership as is the case with VMware and RSA to this new model for Acadia.

An airline analogy may not be the most appropriate, yet there are some interesting similarities, not least of which is that air carriers rely on information systems and technologies provided by members of this coalition among others. There is also a correlation in that joint ventures are about streamlining and creating a seamless end to end customer experience. That is, give them enough choice and options, keep them happy, take out the complexities and hopefully some cost, and with customer control come revenue and margin or profits.

Certainly there are opportunities to streamline and not just simply cut corners; perhaps that's another area of analogy with the airlines, where there is a current focus on cutting and nickel and diming for services. Hopefully Acadia and VCE are not just another example of vendors getting together around the campfire to sing Kumbaya in the name of increasing customer adoption, cost cutting or putting a marketing spin on how to sell more to customers for account control.

Now with all due respect to the individual companies and personnel, at least in this iteration, it is not as much about the technology or packaging. Likewise, it is also not just about bundling, integration and testing (important as they are), as we have seen similar solutions before.

Rather, I think this has the potential for changing the way server, storage and networking hardware along with IRM and virtualization software are sold into organizations, for the better or worse.

What I'm watching is how Acadia and their principal backers can navigate the channel maze and ultimately the customer maze to sell a cross technology domain solution. For example, will a sales call require six to fourteen legs (e.g. one person is a two legged call for those not up on sales or vendor lingo) with a storage, server, networking, VMware, RSA, Ionix and services representative?

Or, can a model to drive down the number of people or product specialist involved in a given sales call be achieved leveraging people with cross technology domain skills (e.g. someone who can speak server and storage hardware and software along with networking)?

Assuming Acadia and VCE vblocks address product integration issues, I see the bigger issue as being streamlining the sales process (including compensation plans) along with how partners are dealt with not to mention customers.

How will the sales pitch go to the Cisco network people at VARs or customer sites, or to the storage or server or VMware teams, or all of the above?

What about the others?
Cisco has relationships with Dell, HP, IBM, Microsoft and Oracle/Sun among others, so they will be stepping even more on partner toes than when they launched the UCS earlier this year. EMC for its part is fairly diversified and is not as subservient to IBM, however it has a history of partnering with Dell, Oracle and Microsoft among others.

VMware has a smaller investment and thus more waiting in the wings, as does Intel, given that both have large partnerships with Dell, HP, IBM and Microsoft. Microsoft is of interest here because on one front the bulk of all servers virtualized into VMware VMs are Windows based.

On the other hand, Microsoft has their own virtualization hypervisor, Hyper-V, that depending upon how you look at it could be a competitor of VMware or simply a nuisance. I'm of the mindset that it's still too early; don't judge this game on the first round, which VMware has won. Keep in mind history, such as the desktop and browser wars that Microsoft lost in the first round only to come back strong later. This move could very well invigorate Microsoft, or perhaps Oracle or Citrix among others.

Now this is far from the first time that we have seen alliances, coalitions, marketing or sales promotion cross technology vendor clubs in the industry let alone from the specific vendors involved in this announcement.

One that comes to mind was 3COM's failed attempt in the late 90s to become the first traditional networking vendor to get into SANs; that was many years before Cisco could spell SAN, let alone before their Andiamo startup was incubated. The 3COM initiative, which was cancelled due to financial issues literally on the eve of rollout, was to include the likes of STK (pre-Sun), Qlogic, Anchor (people were still learning how to spell Brocade), Crossroads (FC to SCSI routers for tape), Legato (pre-EMC), DG CLARiiON (pre-EMC) and MTI (sold their patents to EMC, became a reseller, now defunct) along with some others slated to jump on the bandwagon.

Let's also not forget that among the traditional networking market vendors, Cisco is the $32B giant and all of the others including 3Com, Brocade, Broadcom, Ciena, Emulex, Juniper and Qlogic are the seven plus dwarfs. However, keep the $23B USD Huawei networking vendor that is growing at a 45% annual rate in mind.

I would keep an eye on AMD, Brocade, Citrix, Dell, Fujitsu, HP, Huawei, Juniper, Microsoft, NetApp, Oracle/Sun, Rackable and Symantec among many others for similar joint venture or marketing alliances.

Some of these have already surfaced with Brocade and Oracle sharing hugs and chugs (another sales term referring to alliance meetings over beers or shots).

Also keep in mind that VMware has a large software (customer business) footprint deployed on HP with Intel (and AMD) servers.

Oh, and those VMware based VMs running on HP servers also just happen to be hosting a neighborhood of 80% or more Windows based guest operating systems; I would say it's game on time.

When I say it's game on time, I don't think VMware is brash enough to cut HP (or others) off, forcing them to move to Microsoft for virtualization. However the game is about control: control of technology stacks and partnerships, control of VARs, integrators and the channel, as well as control of customers.

If you cannot tell, I find this topic fun and interesting.

For those who only know me from servers, I often get asked when I learned about networking, to which I say check out one of my books (Resilient Storage Networks, Elsevier). Meanwhile, for others who know me from storage, I get asked when I learned about or got into servers, to which I respond about 28 years ago when I worked in IT as the customer.

Bottom line on Acadia, vblocks and VCE for now, I like the idea of a unified and bundled solution as long as they are open and flexible.

On the other hand, I have many questions and am even skeptical in some areas, including how this plays out for Cisco and EMC in terms of whether it can be a unifier or a polarizer causing market fragmentation.

For some this is or will be deja vu, back to the future; for others it is a new, exciting and revolutionary approach; while for still others it will be new fodder for smack talk!

More to follow soon.

Cheers gs


Optimize Data Storage for Performance and Capacity Efficiency

This post builds on a recent article I did that can be read here.

Even with tough economic times, there is no such thing as a data recession! Thus the importance of optimizing data storage efficiency, addressing both performance and capacity without impacting availability, in a cost effective way to do more with what you have.

What this means is that even though budgets are tight or have been cut resulting in reduced spending, overall net storage capacity is up year over year by double digits if not higher in some environments.

Consequently, there is continued focus on stretching available IT and storage related resources or footprints further while eliminating barriers or constraints. IT footprint constraints can be physical in a cabinet or rack as well as floorspace, power or cooling thresholds and budget among others.

Constraints can be due to lack of performance (bandwidth, IOPS or transactions), poor response time or lack of availability for some environments. Yet for other environments, constraints can be lack of capacity, limited primary or standby power or cooling constraints. Other constraints include budget, staffing or lack of infrastructure resource management (IRM) tools and time for routine tasks.

Look before you leap
Before jumping into an optimization effort, gain insight, if you do not already have it, as to where the bottlenecks exist along with the cause and effect of moving or reconfiguring storage resources. For example, boosting capacity utilization to more fully use storage resources can result in a performance issue or data center bottlenecks for other environments.

An alternative scenario is that in the quest to boost performance, storage is seen as being under-utilized, yet when capacity use is increased, lo and behold, response time deteriorates. The result can be a vicious cycle, hence the need to use tools that provide insight into resource usage, both space and activity or performance, to address the root issue rather than simply moving problems around.

Gaining insight means looking at capacity use along with performance and availability activity, and how they consume power, cooling and floor-space. Consequently an important step is to gain insight and knowledge of how your resources are being used to deliver various levels of service.

Tools include storage or system resource management (SRM) tools that report on storage space capacity usage, performance and availability with some tools now adding energy usage metrics along with storage or system resource analysis (SRA) tools.

Cooling Off
Power and cooling are commonly talked about as constraints, either from a cost standpoint, or availability of primary or secondary (e.g. standby) energy and cooling capacity to support growth. Electricity is essential for powering IT equipment including storage enabling devices to do their specific tasks of storing data, moving data, processing data or a combination of these attributes.

Thus, power gets consumed, some work or effort to move and store data takes place, and the byproduct is heat that needs to be removed. In a typical IT data center, cooling on average can account for about 50% of energy used, with some sites using less.

With cooling being a large consumer of electricity, a small percentage change in how cooling consumes energy can yield large results. Addressing cooling energy consumption can help with budget or cost issues, or free up cooling capacity to support installation of extra storage or other IT equipment.
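To put the cooling leverage in rough numbers, here is a quick back-of-the-envelope sketch in Python (the 50% cooling share and 10% improvement are illustrative assumptions, not measurements):

```python
def cooling_savings_kwh(total_kwh, cooling_fraction=0.50, improvement=0.10):
    """Annual kWh saved if cooling's energy use drops by `improvement` (e.g. 10%)."""
    cooling_kwh = total_kwh * cooling_fraction
    return cooling_kwh * improvement

# A site drawing a steady 100 kW uses 876,000 kWh per year.
annual_kwh = 100 * 24 * 365
saved = cooling_savings_kwh(annual_kwh)
print(f"Cooling savings: {saved:,.0f} kWh/year")  # Cooling savings: 43,800 kWh/year
```

A modest 10% cooling improvement at this hypothetical site frees up tens of thousands of kWh per year, which is the leverage effect described above.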

Keep in mind that effective cooling relies on removing heat from as close to the source as possible to avoid over cooling, which requires more energy. If you have not done so, have a facilities review or assessment performed, which can range from a quick walk around to a more in-depth review and thermal airflow analysis. Means of removing heat close to the source include techniques such as intelligent, precision or smart cooling, also known by other marketing names.

Powering Up, or, Powering Down
Speaking of energy or power, in addition to addressing cooling, there are a couple of ways of addressing power consumption by storage equipment (Figure 1). The most commonly discussed approach to efficiency is energy avoidance, involving powering down storage when not in use, such as first generation MAID, at the cost of performance.

For off-line storage, tape and other removable media give low-cost capacity per watt with low to no energy needed when not in use. Second generation (e.g. MAID 2.0) solutions with intelligent power management (IPM) capabilities have become more prevalent enabling performance or energy savings on a more granular or selective basis often as a standard feature in common storage systems.

Figure 1: Balancing energy avoidance and energy efficiency options

Another approach to energy efficiency, seen in figure 1, is doing more work for active applications per watt of energy to boost productivity. This can be done by using the same amount of energy while doing more work, or doing the same amount of work with less energy.

For example, instead of using larger capacity disks to improve capacity per watt metrics, active or performance sensitive storage should be looked at on an activity basis such as IOPS, transactions, videos, emails or throughput per watt. Hence, a fast disk drive doing work can be more energy-efficient in terms of productivity than a higher capacity, slower disk drive for active workloads, while for idle or inactive data the inverse should hold true.
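As a quick illustration of activity vs. capacity metrics, here is a sketch with made-up drive numbers (not vendor specs) comparing IOPS per watt and GB per watt:

```python
# Hypothetical drive characteristics, invented for illustration only.
drives = {
    "15K RPM 146GB": {"iops": 180, "capacity_gb": 146, "watts": 15},
    "7200 RPM 1TB":  {"iops": 80,  "capacity_gb": 1000, "watts": 12},
}

for name, d in drives.items():
    iops_per_watt = d["iops"] / d["watts"]        # activity metric (active storage)
    gb_per_watt = d["capacity_gb"] / d["watts"]   # capacity metric (idle storage)
    print(f"{name}: {iops_per_watt:.1f} IOPS/watt, {gb_per_watt:.1f} GB/watt")
```

With these invented numbers, the fast drive wins on IOPS per watt while the large, slower drive wins on GB per watt, which is the inverse relationship described above.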

On a go forward basis, the trend already being seen with some servers and storage systems is to do more work while using less energy. Thus a larger gap between useful work (for active or non idle storage) and the amount of energy consumed yields a better efficiency rating, or take the inverse if that is your preference for smaller numbers.

Reducing Data Footprint Impact
Data footprint impact reduction tools or techniques for both on-line as well as off-line storage include archiving, data management, compression, deduplication, space-saving snapshots, thin provisioning along with different RAID levels among other approaches. From a storage access standpoint, you can also include bandwidth optimization, data replication optimization, protocol optimizers along with other network technologies including WAFS/WAAS/WADM to help improve efficiency of data movement or access.

Thin provisioning for capacity centric environments can be used to achieve a higher effective storage use level by essentially overbooking storage, similar to how airlines oversell seats on a flight. If you have good historical information and insight into how storage capacity is used and over allocated, thin provisioning enables improved effective storage use for some applications.

However, with thin provisioning, avoid introducing performance bottlenecks by leveraging solutions that work closely with tools that provide historical trending information (capacity and performance).
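The airline overbooking analogy can be sketched in a few lines of Python; the capacities below are invented for illustration:

```python
def overbooking_ratio(allocated_gb, physical_gb):
    """How much capacity has been promised relative to what is physically installed."""
    return allocated_gb / physical_gb

physical = 10_000   # 10TB physically installed
allocated = 25_000  # 25TB promised (allocated) to applications
written = 8_000     # 8TB actually written so far

print(f"Overbooking ratio: {overbooking_ratio(allocated, physical):.1f}:1")  # 2.5:1
print(f"Physical headroom remaining: {physical - written} GB")               # 2000 GB
```

The scheme works only while actual written data stays below physical capacity, which is why the historical trending insight mentioned above matters.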

For a technology that some have tried to declare dead in order to prop up other new or emerging solutions, RAID remains relevant given its widespread deployment and the transparent reliance placed on it in organizations of all sizes. RAID also plays a role in addressing storage performance, availability, capacity and energy constraints, serving as a relief tool for them.

The trick is to align the applicable RAID configuration to the task at hand meeting specific performance, availability, capacity or energy along with economic requirements. For some environments a one size fits all approach may be used while others may configure storage using different RAID levels along with number of drives in RAID sets to meet specific requirements.


Figure 2:  How various RAID levels and configuration impact or benefit footprint constraints

Figure 2 shows a summary and tradeoffs of various RAID levels. In addition to the RAID level, the number of disks can also have an impact on performance or capacity; for example, by creating a larger RAID 5 or RAID 6 group, the parity overhead can be spread out, however there is a tradeoff. Tradeoffs can be performance bottlenecks on writes or during drive rebuilds, along with potential exposure to drive failures.

All of this comes back to a balancing act to align to your specific needs as some will go with a RAID 10 stripe and mirror to avoid risks, even going so far as to do triple mirroring along with replication. On the other hand, some will go with RAID 5 or RAID 6 to meet cost or availability requirements, or, some I have talked with even run RAID 0 for data and applications that need the raw speed, yet can be restored rapidly from some other medium.
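For a rough feel of the RAID tradeoffs discussed above, here is a simplified usable-capacity sketch (equal-size drives assumed; hot spares and vendor-specific overhead ignored):

```python
def usable_gb(num_drives, drive_gb, raid_level):
    """Approximate usable capacity for common RAID levels in a single group."""
    if raid_level == "RAID0":
        return num_drives * drive_gb            # stripe only, no protection
    if raid_level == "RAID10":
        return num_drives * drive_gb // 2       # mirrored pairs
    if raid_level == "RAID5":
        return (num_drives - 1) * drive_gb      # one drive's worth of parity
    if raid_level == "RAID6":
        return (num_drives - 2) * drive_gb      # two drives' worth of parity
    raise ValueError(f"unknown RAID level: {raid_level}")

for level in ("RAID0", "RAID10", "RAID5", "RAID6"):
    print(f"{level}: {usable_gb(8, 1000, level)} GB usable from 8 x 1TB drives")
```

Note that doubling a RAID 5 group from 8 to 16 drives cuts the parity overhead from 12.5% to 6.25%, which illustrates the spread-the-parity tradeoff mentioned above, at the cost of wider rebuild and failure exposure.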

Let's bring it all together with an example
Figure 3 shows a generic example of a before and after optimization for a mixed workload environment, granted you can increase or decrease the applicable capacity and performance to meet your specific needs. In figure 3, the storage configuration consists of one storage system setup for high performance (left) and another for high-capacity secondary (right), disk to disk backup and other near-line needs, again, you can scale the approach up or down to your specific need.

For the performance side (left), 192 x 146GB 15K RPM disks (28TB raw) provide good performance, however with low capacity use. This translates into a low capacity per watt value, however with reasonable IOPS per watt, and some performance hot spots.

On the capacity centric side (right), there are 192 x 1TB disks (192TB raw) with good space utilization, however some performance hot spots or bottlenecks, constrained growth, not to mention low IOPS per watt with reasonable capacity per watt. In the before scenario, the joint energy use (both arrays) is about 15 kW (15,000 watts), which translates to about $16,000 in annual energy costs (cooling excluded), assuming an energy cost of 12 cents per kWh.
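The annual energy cost cited can be reproduced with simple arithmetic, assuming a steady 15 kW combined draw running 24x365 at 12 cents per kWh:

```python
def annual_energy_cost(load_kw, dollars_per_kwh=0.12, hours=24 * 365):
    """Annual energy cost for a constant electrical load (cooling excluded)."""
    return load_kw * hours * dollars_per_kwh

print(f"${annual_energy_cost(15):,.0f} per year")  # $15,768 per year, roughly $16,000
```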

Note, your specific performance, availability, capacity and energy mileage will vary based on particular vendor solution, configuration along with your application characteristics.


Figure 3: Baseline before and after storage optimization (raw hardware) example

Building on the example in figure 3, a combination of techniques along with technologies yields a net performance, capacity and perhaps feature functionality increase (depending on the specific solution). In addition, floor-space, power, cooling and associated footprints are also reduced. For example, the resulting solution shown (middle) comprises 4 x 250GB flash SSD devices, along with 32 x 450GB 15.5K RPM and 124 x 2TB 7200RPM disks, enabling a 53TB (raw) capacity increase along with a performance boost.

The previous examples are based on raw or baseline capacity metrics, meaning that further optimization techniques should yield additional benefits. These examples should also help address the question, or myth, that it costs more to power storage than to buy it; the answer is, it depends.

If you can buy the above solution for say under $50,000 (the cost to power it), or for that matter $100,000 (the cost to power and cool it), for three years, which would also be a good acquisition, then the claim that it costs more to power storage than to buy it holds true. However, if a solution as described above costs more, then the story changes, along with other variables including energy costs for your particular location, re-enforcing the notion that your mileage will vary.
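Here is a hedged sketch of the power vs. purchase comparison, using the before-scenario 15 kW draw and some illustrative purchase prices near the figures above:

```python
def three_year_energy_cost(load_kw, dollars_per_kwh=0.12):
    """Three-year energy spend at a constant load, 24x365 operation."""
    return load_kw * 24 * 365 * 3 * dollars_per_kwh

energy_3yr = three_year_energy_cost(15)  # roughly $47,300 over three years
for purchase_price in (40_000, 100_000):
    if energy_3yr >= purchase_price:
        print(f"${purchase_price:,}: powering rivals or exceeds the purchase price")
    else:
        print(f"${purchase_price:,}: acquisition cost dominates the energy cost")
```

The crossover point depends entirely on your local energy rates and the actual configuration, which is the "it depends" in the myth above.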

Another tip is that more is not always better.

That is, more disks, ports, processors, controllers or cache do not always equate to better performance. Performance is the sum of how those and other pieces work together in a demonstrable way, ideally measured with your specific application workload rather than what is on a product data sheet.

Additional general tips include:

  • Align the applicable tool, technique or technology to task at hand
  • Look to optimize for both performance and capacity, active and idle storage
  • Consolidated applications and servers need fast servers
  • Fast servers need fast I/O and storage devices to avoid bottlenecks
  • For active storage use an activity per watt metric such as IOP or transaction per watt
  • For in-active or idle storage, a capacity per watt per footprint metric would apply
  • Gain insight and control of how storage resources are used to meet service requirements

It should go without saying, however sometimes what is understood needs to be restated.

In the quest to become more efficient and optimized, avoid introducing performance, quality of service or availability issues by moving problems.

Likewise, look beyond storage space capacity also considering performance as applicable to become efficient.

Finally, it is all relative in that what might be applicable to one environment or application need may not apply to another.

Ok, nuff said.

Cheers gs


EPA Energy Star for Data Center Storage Update

EPA Energy Star

Following up on a recent post about Green IT, energy efficiency and optimization for servers, storage and more, here are some additional thoughts and perspectives, along with industry activity around the U.S. Environmental Protection Agency (EPA) Energy Star for Servers, Data Center Storage and Data Centers.

First a quick update: Energy Star for Servers is in place, with work now underway on expanding and extending beyond the first specification. Second, the Energy Star for Data Center Storage definition is well underway, including a recent workshop to refine the initial specification along with discussion of follow-on drafts.

Energy Star for Data Centers is also currently undergoing definition which is focused more on macro or facility energy (notice I did not say electricity) efficiency as opposed to productivity or effectiveness, items that the Server and Storage specifications are working towards.

Among all of the different industry trade or special interest groups, at least on the storage front, the Storage Networking Industry Association (SNIA) Green Storage Initiative (GSI) and their Technical Work Groups (TWG) have been busily working for the past couple of years on taxonomies, metrics and other items in support of EPA Energy Star for Data Center Storage.

A challenge for SNIA along with others working on related material pertaining to storage and efficiency is the multi-role functionality of storage. That is, some storage simply stores data with little to no performance requirements while other storage is actively used for reading and writing. In addition, there are various categories, architectures not to mention hardware and software feature functionality or vendors with different product focus and interests.

Unlike servers that are either on and doing work, or, off or in low power mode, storage is either doing active work (e.g. moving data), storing in-active or idle data, or a combination of both. Hence for some, energy efficiency is about how much data can be stored in a given footprint with the least amount of power known as in-active or idle measurement.

On the other hand, storage efficiency is also about using the least amount of energy to produce the most amount of work or activity, for example IOPS or bandwidth per watt per footprint.

Thus the challenge and need for at least a two dimensional model looking at, and reflecting, different types or categories of storage aligned for active or in-active (e.g. storing) data, enabling apples to apples vs. apples to oranges comparisons.

This is not all that different from how EPA looks at motor vehicle categories of economy cars, sport utility, work or heavy utility among others when doing different types of work, or, in idle.

What does this have to do with servers and storage?

Simple, when a server powers down, where does its data go? That's right, to a storage system using disk, SSD (RAM or flash), tape or optical for persistency. Likewise, when there is work to be done, where does the data get read into computer memory from, or written to? That's right, a storage system. Hence the need to look at storage in a multi-faceted manner.

The storage industry is diverse with some vendors or products focused on performance or activity, while others on long term, low cost persistent storage for archive, backup, not to mention some doing a bit of both. Hence the nomenclature of herding cats towards a common goal when different parties have various interests that may conflict yet support needs of various customer storage usage requirements.

Figure 1 shows a simplified, streamlined storage taxonomy that has been put together by SNIA representing various types, categories and functions of data center storage. The green shaded areas are a good step in the right direction to simplify yet move towards realistic and achievable benefits for storage consumers.


Figure 1 Source: EPA Energy Star for Data Center Storage web site document

The importance of the streamlined SNIA taxonomy is to help differentiate or characterize various types and tiers of storage products (Figure 2), facilitating apples to apples comparisons instead of apples to oranges. For example, on-line primary storage needs to be looked at in terms of how much work or activity per energy footprint determines efficiency.


Figure 2: Tiered Storage Example

On the other hand, storage for retaining large amounts of data that is in-active or idle for long periods of time should be looked at on a capacity per energy footprint basis. While final metrics are still being fleshed out, some examples could be active storage gauged by IOPS, work or bandwidth per watt of energy per footprint, while other storage for idle or inactive data could be looked at on a capacity per energy footprint basis.
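The active vs. idle metric selection can be sketched as follows; the tier numbers below are invented purely for illustration:

```python
def efficiency_metric(tier):
    """Pick the metric that matches the storage category: activity per watt for
    active tiers, capacity per watt for idle or archive tiers."""
    if tier["active"]:
        return tier["iops"] / tier["watts"], "IOPS/watt"
    return tier["capacity_tb"] / tier["watts"], "TB/watt"

tiers = [
    {"name": "online primary", "active": True,  "iops": 50_000, "watts": 2_500},
    {"name": "archive",        "active": False, "capacity_tb": 500, "watts": 1_000},
]
for t in tiers:
    value, unit = efficiency_metric(t)
    print(f"{t['name']}: {value:.1f} {unit}")
```

This is the two dimensional model in miniature: the same watt denominator, but a different numerator depending on whether the storage is doing work or holding data.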

What benchmarks or workloads will be used for simulating or measuring work or activity is still being discussed, with proposals coming from various sources. For example, the SNIA GSI TWG is developing measurements and discussing metrics, as have the Storage Performance Council (SPC) and SPEC among others, including use of simulation tools such as IOmeter, VMware VMmark, TPC, Bonnie, or perhaps even Microsoft ESRP.

Tenets of Energy Star for Data Center Storage over time hopefully will include:

  • Reflective of different types, categories, price-bands and storage usage scenarios
  • Measure storage efficiency for active work along with in-active or idle usage
  • Provide insight for both storage performance efficiency and effective capacity
  • Baseline or raw storage capacity along with effective enhanced optimized capacity
  • Easy to use metrics with more in-depth background or disclosure information

Ultimately the specification should help IT storage buyers and decision makers to compare and contrast different storage systems that are best suited and applicable to their usage scenarios.

This means measuring work or activity per energy footprint at a given capacity and data protection level to meet service requirements along with during in-active or idle periods. This also means showing storage that is capacity focused in terms of how much data can be stored in a given energy footprint.

One thing that will be tricky, however, will be differentiating GBytes per watt in terms of capacity vs. in terms of performance and bandwidth.

Here are some links to learn more:

Stay tuned for more on Energy Star for Data Centers, Servers and Data Center Storage.

Ok, nuff said.

Cheers gs


PUE, Are you Managing Power, Energy or Productivity?

With a renewed focus on Green IT, including energy efficiency and optimization of servers, storage, networks and facilities, is your focus on managing power, energy, or productivity?

For example, do you use, or are you interested in, metrics such as the Green Grid PUE or 80 Plus efficient power supplies, along with initiatives such as EPA Energy Star for Servers and the emerging Energy Star for Data Center Storage in terms of energy usage?

Or are you interested in productivity, such as the amount of work or activity that can be done in a given amount of time, or how much information can be stored in a given footprint (power, cooling, floor space, budget, management)?

For many organizations, there tends to be a focus on both managing power and managing productivity. The two are, or should be, interrelated, however there are some disconnects in emphasis and metrics. For example, the Green Grid PUE is a macro, facilities centric metric that does not show the productivity, quality or measure of services being delivered by a data center or information factory. Instead, PUE provides a gauge of how efficient the habitat, that is, the building and power distribution along with cooling, is with respect to the total energy consumption of IT equipment.

As a refresher, PUE is a macro metric that is essentially a ratio of how much total power or energy goes into a facility vs. the amount of energy used by IT equipment. For example, if 12 kW (smaller room/site) or 12 MW (larger site) are required to power an IT data center or computer room, and of that energy load the IT equipment uses 6 kW or 6 MW respectively, the PUE would be 2. A PUE of 2 is an indicator that 50% of the energy going to power a facility or computer room goes towards IT equipment (servers, storage, networks, telecom and related equipment), with the balance going towards running the facility or environment, of which the highest percentage has typically been HVAC/cooling.

In the case of EPA Energy Star for Data Centers, which initially is focused on habitat or facility efficiency, the answer is measuring and managing energy use and facility efficiency as opposed to productivity or useful work. The metric for EPA Energy Star for Data Centers initially will be Energy Usage Effectiveness (EUE), which will be used to calculate a rating for a data center facility. Those data centers in the top 25th percentile will qualify for Energy Star certification.

Note the word energy and not power, which means that the data center macro metric based on the Green Grid PUE rating looks at all sources of energy used by a data center and not just electrical power. What this means is that macro and holistic facilities energy consumption could be a combination of electrical power, diesel, propane, natural gas or other fuel sources used to generate power for IT equipment, HVAC/cooling and other needs. By using a metric that factors in all energy sources, a facility that uses solar, radiant, heat pumps, economizers or other techniques to reduce demands on energy will achieve a better rating.

By using a macro metric such as EUE or PUE (ratio = Total_Power_Used / IT_Power_Needs), a starting point is available to decide and compare efficiency and cost to power or energize a facility or room also known as a habitat for technology.
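The ratio above is simple enough to express directly, using the article's own example numbers:

```python
def pue(total_facility_kw, it_equipment_kw):
    """PUE: total facility energy draw divided by IT equipment energy draw.
    A PUE of 2.0 means only half the energy entering the facility reaches IT gear."""
    return total_facility_kw / it_equipment_kw

print(pue(12, 6))          # 2.0, the smaller 12 kW room with 6 kW of IT load
print(pue(12_000, 6_000))  # 2.0, the same ratio at a 12 MW site
```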

Managing Productivity of Information Factories (E.g. Data Centers)
What EUE and PUE do not reflect or indicate is how much data is processed, moved and stored by servers, storage and networks within a facility. At the other extreme from macro metrics are micro or component metrics that gauge energy usage on an individual device basis. Some of these micro metrics may have activity or productivity measurements associated with them, some do not. Where these leave a big gap and opportunity is in filling the span between the macro and the micro.

This is where work is being done by various industry groups, including SNIA GSI, SPC and SPEC among others, along with EPA Energy Star, to move beyond macro PUE indicators to more granular effectiveness and efficiency metrics that reflect productivity. Ultimately productivity is important to gauge the return on investment and business value of how much data can be processed by servers, moved via networks or stored on storage devices in a given energy footprint or cost.

In Figure 1 are shown four basic approaches (in addition to doing nothing) to energy efficiency. One approach is to avoid energy usage, similar to following a rationing model, but this approach will affect the amount of work that can be accomplished. Another approach is to do more work using the same amount of energy, boosting energy efficiency, or to do the same amount of work (or store the same data) with less energy.

Figure 1: The Many Faces of Energy Efficiency. Source: The Green and Virtual Data Center (CRC)

The energy efficiency gap is the difference between the amount of work accomplished or information stored in a given footprint and the energy consumed. In other words, the bigger the energy efficiency gap, the better, as seen in the fourth scenario: doing more work or storing more information in a smaller footprint using less energy. Click here to read more about shifting from energy avoidance to energy efficiency.

Watch for new metrics looking at productivity and activity for servers, storage and networks, ranging from MHz or GHz per watt, transactions or IOPS per watt, bandwidth, frames or packets processed per watt, or capacity stored per watt in a given footprint. One of the confusing metrics is GBytes or TBytes per watt, in that it can mean storage capacity or bandwidth; thus, understand the context of the metric. Likewise watch for metrics that reflect energy usage for active along with in-active, including idle or dormant, storage common with archives, backup or fixed content data.
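The GBytes per watt ambiguity called out above is easy to demonstrate with invented numbers; the same hypothetical system yields two very different figures depending on whether capacity stored or bandwidth moved is meant:

```python
# Invented system characteristics, purely for illustration.
capacity_gb = 48_000         # GB of data stored
bandwidth_gb_per_sec = 4     # GB/s of data moved
watts = 800                  # system power draw

print(f"{capacity_gb / watts:.1f} GB (stored) per watt")            # 60.0 GB per watt
print(f"{bandwidth_gb_per_sec / watts:.4f} GB/s (moved) per watt")  # 0.0050 GB/s per watt
```

Same system, same watts, two metrics four orders of magnitude apart, hence the need to label which one is meant.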

What this all means is that work continues on developing usable and relevant metrics and measurements, not only for macro energy usage but also to gauge the effectiveness of delivering IT services. The business value proposition of driving efficiency and optimization, including increased productivity along with storing more information in a given footprint, is to support density and business sustainability.

 

Additional resources and where to learn in addition to those mentioned above include:

EPA Energy Star for Data Center Storage

Storage Efficiency and Optimization – The Other Green

Performance = Availability StorageIOblog featured ITKE guest blog

SPC and Storage Benchmarking Games

Shifting from energy avoidance to energy efficiency

Green IT Confusion Continues, Opportunities Missed!

Green Power and Cooling Tools and Calculators

Determining Computer or Server Energy Use

Examples of Green Metrics

Green IT, Power, Energy and Related Tools or Calculators

Chapter 10 (Performance and Capacity Planning)
Resilient Storage Networks (Elsevier)

Chapter 5 (Measurement, Metrics and Management of IT Resources)
The Green and Virtual Data Center (CRC)

Ok, nuff said.

Cheers gs


Storage Efficiency and Optimization – The Other Green

For those of you in the New York City area, I will be presenting live in person at Storage Decisions September 23, 2009 conference The Other Green, Storage Efficiency and Optimization.

Throw out the "green" buzzword, and you're still left with the task of saving or maximizing use of space, power, and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service response time, performance and availability, necessitating faster, energy efficient technologies to achieve optimization objectives.

To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize both on-line active or primary as well as near-line or secondary storage environments during tough economic times, as well as to position for future growth; after all, there is no such thing as a data recession!

Topics, technologies and techniques that will be discussed include among others:

  • Energy efficiency (strategic) vs. energy avoidance (tactical), what's different between them
  • Optimization and the need for speed vs. the need for capacity, finding the right balance
  • Metrics & measurements for management insight, what the industry is doing (or not doing)
  • Tiered storage and tiered access including SSD, FC, SAS, tape, clouds and more
  • Data footprint reduction (archive, compress, dedupe) and thin provision among others
  • Best practices, financial incentives and what you can do today

This is a free event for IT professionals, however space I hear is limited, learn more and register here.

For those interested in broader IT data center and infrastructure optimization, check out the on-going seminar series The Infrastructure Optimization and Planning Best Practices (V2.009) – Doing more with less without sacrificing storage, system or network capabilities Seminar series continues September 22, 2009 with a stop in Chicago. This is also a free Seminar, register and learn more here or here.

Ok, nuff said.

Cheers gs


Back to School and Dedupe School

Summer is over here in the northern hemisphere and it's back to school time.

This coming week I will be the substitute teacher filling in for my friend Mr. Backup in Minneapolis and Toronto for TechTarget's Dedupe School. If you are in either city and have not yet signed up, check out the link here to learn more.

Hope to see you this week, or next week at Infrastructure Optimization in Chicago or Storage Decisions in NYC, where I will also be presenting, or teaching if you prefer, as well as listening and learning from the attendees what's on their minds.

Stay current on other upcoming activities on our events page, as well as see what's new or in the news here.

Ok, nuff said.

Cheers gs


Performance = Availability StorageIOblog featured ITKE guest blog

ITKE - IT Knowledge Exchange

Recently IT Knowledge Exchange named me and StorageIOblog as their weekly featured IT blog, for which I'm flattered and honored. Consequently, I did a guest blog for them titled Performance = Availability, Availability = Performance that you can read about here.

For those not familiar with ITKE, take a few minutes and go over and check it out, there is a wealth of information there on a diversity of topics that you can read about, or, you can also get involved and participate in the questions and answers discussions.

Speaking of ITKE, interested in “The Green and Virtual Data Center” (CRC), check out this link where you can download a free chapter of my book, along with information on how to order your own copy along with a special discount code from CRC press.

Thank you very much to Sean Brooks of ITKE and his social media team of Michael Morisy and Jenny Mackintosh for naming me featured IT blogger, as well as for the opportunity to do a guest post for them. It has been fantastic working with them, particularly Jenny, who helped with all of the logistics in putting together the various pieces, including getting the post up on the web as well as into their newsletter.

Ok, nuff said.

Cheers gs


Upcoming Out and About Events

Following up on previous Out and About updates (here and here) of where I have been, here's where I'm going to be over the next couple of weeks.

On September 15th and 16th 2009, I will be the keynote speaker along with doing a deep dive discussion around data deduplication in Minneapolis, MN and Toronto ON. Free Seminar, register and learn more here.

The Infrastructure Optimization and Planning Best Practices (V2.009) – Doing more with less without sacrificing storage, system or network capabilities Seminar series continues September 22, 2009 with a stop in Chicago. Free Seminar, register and learn more here.

On September 23, 2009 I will be in New York City at Storage Decisions conference participating in the Ask the Experts during the expo session as well as presenting The Other Green — Storage Efficiency and Optimization.

Throw out the “green” buzzword, and you’re still left with the task of saving or maximizing use of space, power, and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service response time, performance and availability, necessitating faster, energy efficient technologies to achieve optimization objectives. To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize both on-line active or primary as well as near-line or secondary storage environments during tough economic times, and to position for future growth. After all, there is no such thing as a data recession!

Topics, technologies and techniques that will be discussed include among others:

  • Energy efficiency (strategic) vs. energy avoidance (tactical)
  • Optimization and the need for speed vs. the need for capacity
  • Metrics and measurements for management insight
  • Tiered storage and tiered access including SSD, FC, SAS and clouds
  • Data footprint reduction (archive, compress, dedupe) and thin provision
  • Best practices, financial incentives and what you can do today

Free event, learn more and register here.

Check out the events page for other upcoming events, and I hope to see you this fall while I’m out and about.

Cheers – gs

Greg Schulz – StorageIOblog, twitter @storageio Author “The Green and Virtual Data Center” (CRC)

Summer Book Update and Back to School Reading

August, and thus Summer 2009 in the northern hemisphere, is swiftly passing by, and the start of a new school year is just around the corner, which means it is also time for final vacations and time at the beach, pool, golf course, amusement park or favorite fishing hole, among other pastimes. To help get those with IT interests ready for fall (or late summer) book shopping, here are some Amazon lists (here, here and here) for ideas. After all, the 2009 holiday season is not that far away!

Here’s a link to my Amazon.com Authors page that includes coverage of both my books, "The Green and Virtual Data Center" (CRC) and "Resilient Storage Networks – Designing Scalable Flexible Data Infrastructures" (Elsevier).


Click here to look inside "The Green and Virtual Data Center" (CRC) and/or inside "Resilient Storage Networks" (Elsevier).

It’s been six months since the launch announcement of my new book "The Green and Virtual Data Center" (CRC) and its general availability at Amazon.com and other global venues here and here. In celebration of the six-month anniversary of the book launch (thank you very much to all who have bought a copy!), here is some coverage including what is being said, related articles, interviews, book reviews and more.

Article: New Green Data Center: shifting from avoidance to becoming more efficient – IT-World, August 2009

wsradio.com interview discussing themes and topics covered in the book, including closing the green gap and shifting towards IT efficiency and productivity for business sustainability.

Closing the green gap: Discussion about expanding data centers with environmental benefits at SearchDataCenter.com

From Greg Brunton – EDS/An HP Company: “Greg Schulz has presented a concise and visionary perspective on the Green issues. He has cut through the hype and highlighted where to start and what the options are. A great place to start your green journey and a useful handbook to have as the journey continues.”

From Rick Bauer – Storage Networking Industry Association (SNIA) – Education and Technology Director: “Greg is one of the smartest ‘good guys’ in the storage industry. He has been a voice of calm amid all the ‘green IT hype’ over the past few years. So when he speaks of the possible improvements that Green Tech can bring, it’s a much more realistic approach…”

From CMG (Computer Measurement Group) MeasureIT:
I must admit that I have been slightly skeptical at times, when it comes to what the true value is behind all of the discussions on “green” technologies in the data center. As someone who has seen both the end user and vendor side of things, I think my skepticism gets heightened more than it normally would be. This book really helped dispel my skepticism.

The book is extremely well organized and easy to follow. Each chapter has a very good introduction and comprehensive summary. This book could easily serve as a blueprint for organizations to follow when they look for ideas on how to design new data centers. It’s a great addition to an IT Bookshelf. – Reviewed by Stephen R. Guendert, PhD (Brocade and CMG MeasureIT). Click here to read the full review in CMG MeasureIT.

From Tom Becchetti – IT Architect: “This book is packed full of information. From ecological and energy efficiencies, to virtualization strategies and what the future may hold for many of the key enabling technologies. Greg’s writing style benefits both technologists and management levels.”

From MSP Business Journal: Greg Schulz named an Eco-Tech Warrior – April 2009

From David Marshall at VMblog.com: If you follow me on LinkedIn, you might have seen that I had been reading a new book that came out at the beginning of the year titled “The Green and Virtual Data Center” by Greg Schulz. Rather than writing about a specific virtualization platform and how to get it up and running, Schulz takes an interesting approach by stepping back and looking at the big picture. After reading the book, I reached out to the author to ask him a few more questions and to share his thoughts with readers of VMblog.com. I know I’m not Oprah’s Book Club, but I think everyone here will enjoy this book. Click here to read more of what David Marshall has to say.

From Zen Kishimoto of Altaterra Research: Book Review May 2009

From Kurt Marko of Processor.com Green and Virtual Book Review – April 2009

From Serial Storage Wire (STA): Green and SASy = Energy and Economic, Effective Storage – March 2009

From Computer Technology Review: Recent Comments on The Green and Virtual Data Center – March 2009

From Alan Radding in Big Fat Finance Blog: Green IT for Finance Operations – April 2009

From VMblog: Comments on The Green and Virtual Data Center – March 2009

From StorageIO Blog: Recent Comments and Tips – March 2009

From Data Center Links: John Rath comments on “The Green and Virtual Data Center”

From InfoStor: Dave Simpson comments on “The Green and Virtual Data Center”

From Sys-Con: Georgiana Comsa comments on “The Green and Virtual Data Center”

From Ziff Davis: Heather Clancy comments on “The Green and Virtual Data Center”

From Byte & Switch: Green IT and the Green Gap – February 2009

From GreenerComputing: Enabling a Green and Virtual Data Center – February 2009

From Sys-con: Comments on The Green and Virtual Data Center – March 2009

From ServerWatch: Green IT: Myths vs. Realities – February 2009

From Byte & Switch: Going Green and the Economic Downturn – February 2009

From Business Wire: Comments on The Green and Virtual Data Center Book – January 2009

Additional content and news can be found here and here with upcoming events listed here.

Interested in Kindle? Here’s a link to get a Kindle copy of "Resilient Storage Networks" (Elsevier), or to send a message via Amazon to publisher CRC that you would like to see a Kindle version of "The Green and Virtual Data Center". While you are at it, I also invite you to become a fan of my books on Facebook.

Thanks again to everyone who has obtained a copy of either of my books, and thanks also to all of those who have done reviews and interviews or helped in many other ways!

Enjoy the rest of your summer!

Cheers – gs

Greg Schulz – twitter @storageio