Data footprint reduction (Part 1): Life beyond dedupe and changing data lifecycles

Over the past couple of weeks there has been a flurry of IT industry activity around data footprint impact reduction, with Dell buying Ocarina and IBM acquiring Storwize. For those who want the quick (compacted, reduced) synopsis of what these acquisitions mean, read this post here along with some of my comments here and here.

Now, before any Drs or Divas of Dedupe get concerned and feel the need to debate dedupe's expanding role, success or applicability, relax, take a deep breath, then read on and take another breath before responding if so inclined.

The reason I mention this is that some may mistake this as a piece against, or not in favor of, dedupe because it talks about life beyond dedupe, which could be misread as indicating a diminished role for dedupe. That is not the case (read ahead and see figure 5 for the bigger picture).

Likewise, because this piece talks about archiving for compliance and non-regulatory situations along with compression, data management and other forms of data footprint reduction, some might feel compelled to defend dedupe's honor and future role.

Again, relax, take a deep breath and read on, this is not about the death of dedupe.

Now for others, you might wonder why the dedupe tongue in cheek humor above (which is what it is). The answer is quite simple: the industry in general is drunk on dedupe, and in some cases has numbed its senses, not to mention blurred its vision of the even bigger business benefits of data footprint reduction beyond today's backup centric or VMware server virtualization dedupe discussions.

Likewise, it is time for the industry to wake (or sober) up. Instead of trying to stuff everything under or into the narrowly focused dedupe bottle, realize that there is a broader umbrella called data footprint impact reduction, which includes, among other techniques, dedupe, archive, compression, data management, data deletion and thin provisioning across all types of data and applications. What this means is a broader opportunity or market than what exists or is being discussed today, leveraging different techniques, technologies and best practices.

Consequently, this piece is about expanding the discussion to the larger opportunity: for vendors or VARs to extend their focus to the bigger world of overall data footprint impact reduction beyond where they are currently focused, and for IT customers to realize that there are more opportunities to address data and storage optimization across the entire organization using various techniques instead of just focusing on backup.

In other words, there is a very bright future for dedupe as well as other techniques and technologies that fall under the data footprint reduction umbrella, including data stored online, offline, near line, primary, secondary, tertiary, virtual and in a public or private cloud.

Before going further, however, let's take a step back and look at some business along with IT issues, challenges and opportunities.

What is the business and IT issue or challenge?
Given that there is no such thing as a data or information recession (figure 1), IT organizations of all sizes are faced with the constant demand to store more data, including multiple copies of the same or similar data, for longer periods of time.


Figure 1: IT resource demand growth continues

The result is an expanding data footprint and increased IT expenses, both capital and operational, due to the additional Infrastructure Resource Management (IRM) activities needed to sustain given levels of application Quality of Service (QoS) delivery, shown in figure 2.

Some common IT costs associated with supporting an increased data footprint include, among others:

  • Data storage hardware and management software tools acquisition
  • Associated networking or IO connectivity hardware, software and services
  • Recurring maintenance and software renewal fees
  • Facilities fees for floor space, power and cooling along with IT staffing
  • Physical and logical security for data and IT resources
  • Data protection for HA, BC or DR including backup, replication and archiving


Figure 2: IT Resources and cost balancing conflicts and opportunities

Figure 2 shows that the result is IT organizations of all sizes being faced with having to do more with what they have (or with less), including maximizing available resources. In addition, IT organizations often have to overcome common footprint constraints (available power, cooling, floor space, server, storage and networking resources, management, budgets, and IT staffing) while supporting business growth.

Figure 2 also shows that to support demand, more resources are needed (real or virtual) in a denser footprint, while maintaining or enhancing QoS and lowering per unit resource cost. The trick is improving on available resources while maintaining QoS in a cost effective manner. By comparison, traditionally if costs are reduced, one of the other curves (amount of resources or QoS) is often negatively impacted, and vice versa. Meanwhile, in other situations the result can be moving problems around that later resurface elsewhere. Instead, find, identify, diagnose and prescribe the applicable treatment or form of data footprint reduction, or other IT IRM technology, technique or best practice, to cure the ailment.

What is driving the expanding data footprint?
Granted, more data can be stored in the same or smaller physical footprint than in the past, thus requiring less power and cooling per GByte, TByte or PByte. However, data growth rates necessary to sustain business activity, enhance IT service delivery and enable new applications are placing continued demands to move, protect, preserve, store and serve data for longer periods of time.

The popularity of rich media and Internet based applications has resulted in explosive growth of unstructured file data requiring new and more scalable storage solutions. Unstructured data includes spreadsheets, PowerPoint slide decks, Adobe PDF and Word documents, web pages, and image, video and audio files such as JPEG, MP3 and MP4. This trend towards increasing data storage requirements does not appear to be slowing anytime soon for organizations of all sizes.

After all, there is no such thing as a data or information recession!

Changing data access lifecycles
Many strategies or marketing stories are built around the premise that shortly after data is created it is seldom, if ever, accessed again. The traditional transactional model lends itself to what has become known as information lifecycle management (ILM), where data can and should be archived or moved to lower cost, lower performing, high density storage, or even deleted where possible.

Figure 3 shows as an example on the left side of the diagram the traditional transactional data lifecycle with data being created and then going dormant. The amount of dormant data will vary by the type and size of an organization along with application mix. 


Figure 3: Changing access and data lifecycle patterns

However, unlike the transactional data lifecycle models where data can be removed after a period of time, Web 2.0 and related data needs to remain online and readily accessible. Unlike traditional data lifecycles where data goes dormant after a period of time, on the right side of figure 3, data is created and then accessed on an intermittent basis with variable frequency. The frequency between periods of inactivity could be hours, days, weeks or months and, in some cases, there may be sustained periods of activity.

A common example is a video or some other content that gets created and posted to a web site or social networking site such as Facebook, LinkedIn, or YouTube among others. While the content itself may not change, additional comments and collaborative data can be wrapped around it as additional viewers discover and discuss it. Solution approaches for this new category and data lifecycle model include low cost, relatively good performing, high capacity storage such as clustered bulk storage, as well as leveraging different forms of data footprint reduction techniques.

Given that a large (and growing) percentage of new data is unstructured, NAS based storage solutions, including clustered, bulk, cloud and managed service offerings with file based access, are gaining in popularity. To reduce cost and support increased business demands (figure 2), a growing trend is to utilize clustered, scale out and bulk NAS file systems that support NFS and CIFS for concurrent large and small IOs, as well as optionally pNFS for large parallel access of files. These solutions are also increasingly being deployed with either built in or add on data footprint reduction techniques, including archive, policy management, dedupe and compression among others.

What is your data footprint impact?
Your data footprint impact is the total data storage needed to support your various business application and information needs. Your data footprint may be larger than the amount of actual data storage you have, as seen in figure 4, where an example organization has 20TBytes of storage space allocated and used for databases, email, home directories, shared documents, engineering documents, financial and other data in different formats (structured and unstructured), not to mention varying access patterns.


Figure 4: Expanding data footprint due to data proliferation and copies being retained

Of the 20TBytes of data allocated and used, it is very likely that the consumed storage space is not 100 percent used. Database tables may be sparsely (empty or not fully) allocated and there is likely duplicate data in email and other shared documents or folders. Additionally, of the 20TBytes, 10TBytes are duplicated to three different areas on a regular basis for application testing, training and business analysis and reporting purposes.

The overall data footprint is the total amount of data including all copies plus the additional storage required for supporting that data such as extra disks for Redundant Array of Independent Disks (RAID) protection or remote mirroring.

In this overly simplified example, the data footprint and subsequent storage requirement are several times that of the 20TBytes of data. Consequently, the larger the data footprint the more data storage capacity and performance bandwidth needed, not to mention being managed, protected and housed (powered, cooled, situated in a rack or cabinet on a floor somewhere).
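To make the example concrete, here is a back-of-envelope sketch. The base 20TBytes and the three 10TByte copies come from the example above; the RAID and mirroring overhead factors are assumptions added for illustration, and will differ in any real configuration:

```python
# Sketch of how a data footprint can exceed the base storage allocation.
base_data_tb = 20           # allocated and used storage (from the example)
copies_tb = 10 * 3          # 10TBytes duplicated to three areas (test, training, reporting)

raid_overhead = 1.2         # assume ~20% extra capacity for RAID parity (assumption)
remote_mirror = 2.0         # assume a full remote mirror doubles capacity (assumption)

logical_tb = base_data_tb + copies_tb
physical_tb = logical_tb * raid_overhead * remote_mirror

print(f"Logical data footprint:   {logical_tb} TBytes")      # 50 TBytes
print(f"Physical capacity needed: {physical_tb:.0f} TBytes")  # 120 TBytes
```

Even before counting backups, the physical capacity needed is several times the 20TBytes of base data, which is the point of the example.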

Data footprint reduction techniques
While data storage capacity has become less expensive on a relative basis, as data footprints continue to expand in order to support business requirements, more IT resources will need to be made available in a cost effective, yet QoS satisfying manner (again, refer back to figure 2). What this means is that more IT resources, including server, storage and networking capacity, management tools along with associated software licensing and IT staff time, will be required to protect, preserve and serve information.

By more effectively managing the data footprint across different applications and tiers of storage, it is possible to enhance application service delivery and responsiveness as well as facilitate more timely data protection to meet compliance and business objectives. To realize the full benefits of data footprint reduction, look beyond backup and offline data improvements to include online and active data using various techniques such as those in table 1 among others.

There are several methods (shown in table 1) that can be used to address data footprint proliferation without compromising data protection or negatively impacting application and business service levels. These approaches include archiving of structured (database), semi-structured (email) and unstructured (general files and documents) data, data compression (real time and offline) and data deduplication.

 

|                | Archiving | Compression | Deduplication |
|----------------|-----------|-------------|---------------|
| When to use    | Structured (database), email and unstructured data | Online (database, email, file sharing), backup or archive | Backup or archiving of recurring and similar data |
| Characteristic | Software to identify and remove unused data from active storage devices | Reduces the amount of data to be moved (transmitted) or stored on disk or tape | Eliminates duplicate files or file content observed over a period of time to reduce data footprint |
| Examples       | Database, email and unstructured file solutions with archive storage | Host software, disk, tape, network routers and compression appliances or software, as well as some primary storage system solutions | Backup and archive target devices, Virtual Tape Libraries (VTLs) and specialized appliances |
| Caveats        | Requires time and knowledge of what and when to archive and delete; data and application aware | Software based solutions require host CPU cycles, impacting application performance | Works well in background mode for backup data to avoid performance impact during data ingestion |

Table 1: Data footprint reduction approaches and techniques

Archiving for compliance and general data retention
Data archiving is often perceived as a solution for compliance; however, archiving can be used for many other non-compliance purposes. These include general data footprint reduction, boosting performance and enhancing routine data maintenance and data protection. Archiving can be applied to structured database data, semi-structured email data and attachments, and unstructured file data.

A key to deploying an archiving solution is having insight into what data exists along with applicable rules and policies to determine what can be archived, for how long, how many copies and how data ultimately may be finally retired or deleted. Archiving requires a combination of hardware, software and people to implement business rules.

A challenge with archiving is having the time and tools available to identify what data should be archived and what data can be securely destroyed when no longer needed. Further complicating archiving is that knowledge of the data value is also needed; this may well include legal issues as to who is responsible for making decisions on what data to keep or discard.

If a business can invest the time and software tools, and identify which data to archive to support an effective archive strategy, the returns can be very positive, reducing the data footprint without limiting the amount of information available for use.

Data compression (real time and offline)
Data compression is a commonly used technique for reducing the size of data being stored or transmitted to improve network performance or reduce the amount of storage capacity needed for storing data. If you have used a traditional or TCP/IP based telephone or cell phone, watched either a DVD or HDTV, listened to an MP3, transferred data over the internet or used email you have most likely relied on some form of compression technology that is transparent to you. Some forms of compression are time delayed, such as using PKZIP to zip files, while others are real time or on the fly based such as when using a network, cell phone or listening to an MP3.

Two different approaches to data compression, which vary in time delay or impact on application performance along with the amount of compression, are lossless (no data loss) and lossy (some data loss in exchange for a higher compression ratio). In addition to these approaches, there are also different implementations, including real time, with no performance impact to applications, and time delayed, where there is a performance impact to applications.
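As a rough illustration of why lossless compression results vary so much by data type, the following sketch (using Python's built-in zlib as a stand-in, not any particular storage product) compresses highly repetitive text and random bytes; random-looking data behaves like already compressed or encrypted content:

```python
import os
import zlib

# Repetitive text (e.g. logs, documents) compresses well; random or
# already-compressed data does not. Figures are illustrative only.
repetitive = b"log entry: status=OK\n" * 1000
random_like = os.urandom(len(repetitive))

for name, data in [("repetitive text", repetitive), ("random bytes", random_like)]:
    compressed = zlib.compress(data, level=6)
    ratio = len(data) / len(compressed)
    print(f"{name}: {len(data)} -> {len(compressed)} bytes ({ratio:.1f}:1)")
```

The repetitive stream typically shrinks by well over an order of magnitude, while the random stream barely changes size, which is why compression ratios quoted for one workload rarely carry over to another.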

In contrast to traditional ZIP or offline, time delayed compression approaches that require complete decompression of data prior to modification, online compression allows for reading from, or writing to, any location within a compressed file without full file decompression and resulting application or time delay. Real time appliance or target based compression capabilities are well suited for supporting online applications including databases, OLTP, email, home directories, web sites and video streaming among others without consuming host server CPU or memory resources or degrading storage system performance.

Note that with the increase of CPU server processing performance along with multiple cores, server based compression running in applications such as database, email, file systems or operating systems can be a viable option for some environments.

A scenario for using real time data compression is time sensitive applications that require large amounts of data, such as online databases, video and audio media servers, and web and analytic tools. For example, databases such as Oracle support NFS3 Direct IO (DIO) and Concurrent IO (CIO) capabilities to enable random and direct addressing of data within an NFS based file. This differs from traditional NFS operations where a file would be sequentially read or written.

Another example of using real time compression is to combine a NAS file server configured with 300GB or 600GB high performance 15.5K Fibre Channel or SAS HDDs, in addition to flash based SSDs, to boost the effective storage capacity for active data without introducing the performance bottleneck associated with using larger capacity HDDs. Of course, compression results will vary with the type of solution being deployed and the type of data being stored, just as dedupe ratios will differ depending on the algorithm and whether the data is text, video or object based, among other factors.

Deduplication (Dedupe)
Data deduplication (also known as single instance storage, commonality factoring, data differencing or normalization) is a data footprint reduction technique that eliminates recurring copies of the same data. Deduplication works by normalizing the data being backed up or stored, eliminating recurring or duplicate copies of files or data blocks depending on the implementation.
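A drastically simplified sketch of that normalization idea follows. Real products use variable-size chunking, fingerprint indexes, collision handling and much more; the fixed 4KB blocks and SHA-256 fingerprints here are assumptions for illustration only:

```python
import hashlib

def dedupe_stats(data: bytes, block_size: int = 4096):
    """Count total vs unique fixed-size blocks: a toy sketch of block-level dedupe."""
    seen = set()
    total = 0
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        seen.add(hashlib.sha256(block).hexdigest())  # fingerprint the block
        total += 1
    return total, len(seen)

# Simulate repeated full backups of the same 16KB "file" (4 distinct 4KB blocks),
# the kind of recurring data where dedupe shines.
file_content = b"".join(bytes([i]) * 4096 for i in range(4))
backup_stream = file_content * 10  # ten full backups

total, unique = dedupe_stats(backup_stream)
print(f"{total} blocks written, {unique} unique -> {total // unique}:1 reduction")
# -> 40 blocks written, 4 unique -> 10:1 reduction
```

The 10:1 ratio here comes entirely from the ten identical copies; run the same sketch over unique data and the ratio collapses to 1:1, which is the point made in the paragraphs that follow.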

Some data deduplication solutions boast spectacular ratios for data reduction given specific scenarios, such as backup of repetitive and similar files, while providing little value over a broader range of applications.

This is in contrast with traditional data compression approaches that provide lower, yet more predictable and consistent, data reduction ratios over more types of data and applications, including online and primary storage scenarios. For example, in environments where there is little to no common or repetitive data, data deduplication will have little to no impact, while data compression generally will yield some amount of data footprint reduction across almost all types of data.

Some data deduplication solution providers have either already added, or have announced plans to add, compression techniques to complement and increase the data footprint reduction effectiveness of their solutions across a broader range of applications and storage scenarios, attesting to the value and importance of data compression in reducing data footprint.

When looking at deduplication solutions, determine if the solution is designed to scale in terms of performance, capacity and availability over a large amount of data, along with how restoration of data will be impacted by scaling for growth. Other items to consider include how data is deduplicated, such as real time using inline or some form of time delayed post processing, and the ability to select the mode of operation.

For example, a dedupe solution may be able to process data at a specific ingest rate inline until a certain threshold is hit and then processing reverts to post processing so as to not cause a performance degradation to the application writing data to the deduplication solution. The downside of post processing is that more storage is needed as a buffer. It can, however, also enable solutions to scale without becoming a bottleneck during data ingestion.

However, there is life beyond dedupe, which is in no way to diminish dedupe or its very strong and bright future. Having talked with hundreds of IT professionals (e.g. the customers), I'm increasingly convinced that only the surface is being scratched for dedupe, not to mention the larger data footprint impact opportunity seen in figure 5.


Figure 5: Dedupe adoption and deployment waves over time

While dedupe is a popular technology from a discussion standpoint and has good deployment traction, it is far from reaching mass customer adoption or even broad coverage in environments where it is being used. StorageIO research shows broadest adoption of dedupe centered around backup in smaller or SMB environments (dedupe deployment wave one in figure 5) with some deployment in Remote Office Branch Office (ROBO) work groups as well as departmental environments.

StorageIO research also shows that complete adoption in many of those SMB, ROBO, work group or smaller environments has yet to reach 100 percent. This means that there remains a large population that has yet to deploy dedupe as well as further opportunities to increase the level of dedupe deployment by those already doing so.

There has also been some early adoption in larger core IT environments where dedupe coexists with, and complements, existing data protection and preservation practices. Another current deployment scenario for dedupe has been supporting core edge deployments in larger environments that provide backup and data protection for ROBO, work group and departmental systems.

Note that figure 5 simply shows the general types of environments in which dedupe is being adopted and not any sort of indicators as to the degree of deployment by a given customer or IT environment.

What to do about your expanding data footprint impact?
Develop an overall data footprint reduction strategy that leverages different techniques and technologies to address online primary, secondary and offline data. Assess and discover what data exists and how it is used in order to effectively manage storage needs.

Determine policies and rules for retention and deletion of data combining archiving, compression (online and offline) and dedupe in a comprehensive data footprint strategy. The benefit of a broader, more holistic, data footprint reduction strategy is the ability to address the overall environment, including all applications that generate and use data as well as IRM or overhead functions that compound and impact the data footprint.

Data footprint reduction: life beyond (and complementing) dedupe
The good news is that the Drs. and Divas of dedupe marketing (the ones who also are good at the disco dedupe dance debates) have targeted backup as an initial market sweet (and success) spot shown in figure 5 given the high degree of duplicate data.


Figure 6: Leverage multiple data footprint reduction techniques and technologies

However, that same good news is bad news in that there is now a stigma that dedupe is only for backup, similar to how archive was hijacked by the compliance marketing folks in the post Y2K era. There are several techniques that can be used individually to address specific data footprint reduction issues, or in combination, as seen in figure 7, to implement a more cohesive and effective data footprint reduction strategy.


Figure 7: How various data footprint reduction techniques are complementary

What this means is that archive and dedupe, as well as other forms of data footprint reduction, can and should be used beyond where they have been target marketed, applying the applicable tool to the task at hand. For example, a common industry rule of thumb is that on average, ten percent of data changes per day (your mileage and rate of change will certainly vary given applications, environment and other factors).

Now assume that you have 100TB (feel free to subtract a zero or two, or add as many as needed) of data (note I did not say storage capacity or percent utilized); a ten percent change would be 10TB that needs to be backed up, replicated and so forth. Basic 2 to 1 streaming tape compression (2.5 to 1 in upcoming LTO enhancements) would reduce the daily backup footprint from 10TB to 5TB.

Using dedupe at 10 to 1 would get that from 10TB down to 1TB, or about the size of a large capacity disk drive. At 20 to 1, the daily backup drops to 500GB, and so forth. The net effect is that more daily backups can be stored in the same footprint, which in turn helps expedite individual file recovery by providing more options to choose from off of the disk based cache, buffer or storage pool.

On the other hand, if your objective is to reduce storage capacity, then the same number of backups can be stored on less disk, freeing up resources. Now take the savings times the number of days in your backup retention period and the numbers start to add up.
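The arithmetic in this example can be sketched as follows. All figures are the illustrative rule-of-thumb numbers from the text; the 30-day retention period is an added assumption:

```python
# Recomputing the rule-of-thumb backup footprint example.
data_tb = 100
daily_change = 0.10
daily_backup_tb = data_tb * daily_change  # 10 TB to protect per day

for name, ratio in [("2:1 tape compression", 2),
                    ("10:1 dedupe", 10),
                    ("20:1 dedupe", 20)]:
    print(f"{name}: {daily_backup_tb:.0f} TB -> {daily_backup_tb / ratio:g} TB per day")

# The savings compound over the retention period (30 days is an assumption).
retention_days = 30
saved_tb = (daily_backup_tb - daily_backup_tb / 10) * retention_days
print(f"Capacity saved over {retention_days} days at 10:1: {saved_tb:.0f} TB")
```

At 10:1 dedupe, each day saves 9TB versus uncompressed backup, or 270TB over a 30-day retention window, which is why the savings "start to add up" quickly.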

Now what about the other 90 percent of the data that may not have changed, or, that did change and exists on higher performance storage?

Can its footprint impact be reduced?

The answer should be perhaps, or it depends, and it prompts the question of which tool would be best. There is a popular tendency, as is often the case with industry buzzwords or technologies, to use a thing everywhere. After all, goes the thinking, if it is a good thing, why not use and deploy more of it everywhere?

Keep in mind that dedupe trades time (the processing needed to apply intelligence and further reduce data) in exchange for space capacity. Trading time for space can have a negative impact on applications that need lower response time and higher performance, where the focus is on rates vs ratios. For example, the other 90 percent of the data in the above example may have to be on a mix of high and medium performance storage to meet QoS or service level agreement (SLA) objectives. While it would be fun or perhaps cool to try to achieve a high data reduction ratio on the entire 100TB of active data with dedupe (e.g. trying to achieve primary dedupe), the performance impact could be negative.

The better option is to apply a mix of different data footprint reduction techniques across the entire 100TB. That is, use dedupe where applicable and where higher reduction ratios can be achieved while balancing performance; use compression for streaming data to tape for retention or archive, as well as in databases and other application software, not to mention in networks. Likewise, use real time compression, or what some refer to as primary dedupe, for online active changing data along with online static read only data.

Deploy a comprehensive data footprint reduction strategy combining various techniques and technologies to address point solution needs as well as the overall environment, including online, near line for backup, and offline for archive data.

Let's not forget about archiving, thin provisioning, space saving snapshots and commonsense data management, among other techniques, across the entire environment. In other words, if your focus is just on dedupe for backup to achieve an optimized and efficient storage environment, you are missing out on a larger opportunity. However, this also means having multiple tools or technologies in your IT IRM toolbox as well as understanding what to use when, where and why.

Data transfer rates are a key metric for performance (time) optimization, such as meeting backup, restore or other data protection windows. Data reduction ratios are a key metric for capacity (space) optimization, where the focus is on storing as much data as possible in a given footprint.

Some additional take away points:

  • Develop a data footprint reduction strategy for online and offline data
  • Energy avoidance can be accomplished by powering down storage
  • Energy efficiency can be accomplished by using tiered storage to meet different needs
  • Measure and compare storage based on idle and active workload conditions
  • Storage efficiency metrics include IOPS or bandwidth per watt for active data
  • Storage capacity per watt per footprint and cost is a measure for inactive data
  • Small percentage reductions on a large scale have big benefits
  • Align the applicable form of virtualization for the given task at hand
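The activity-per-watt and capacity-per-watt metrics in the bullets above reduce to simple ratios. A hedged sketch follows; the tier figures are made up for illustration, not measurements of any real product:

```python
# Two metric families: activity per watt for active data,
# capacity per watt for inactive data.

def iops_per_watt(iops: float, watts: float) -> float:
    """Performance (time) efficiency metric for active data."""
    return iops / watts

def tb_per_watt(capacity_tb: float, watts: float) -> float:
    """Capacity (space) efficiency metric for inactive data."""
    return capacity_tb / watts

fast_tier = iops_per_watt(iops=50_000, watts=800)    # hypothetical active tier
bulk_tier = tb_per_watt(capacity_tb=500, watts=400)  # hypothetical bulk/archive tier

print(f"Active tier:   {fast_tier:.1f} IOPS per watt")  # 62.5
print(f"Inactive tier: {bulk_tier:.2f} TB per watt")    # 1.25
```

Comparing tiers on the metric that matches how the data is used (active vs inactive) keeps an efficient bulk tier from looking "bad" on IOPS, and vice versa.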

Some links for additional reading on the above and related topics

Wrap up (for now, read part II here)

For some applications, reduction ratios are the important focus, so the emphasis is on tools or modes of operation that achieve those results.

Likewise, for other applications where the focus is on performance with some data reduction benefit, tools are optimized for performance first and reduction second.

Thus I expect messaging from some vendors to adjust (expand) to match the capabilities they have in their toolbox (product portfolio) offerings.

Consequently, IMHO some of the backup centric dedupe solutions may find themselves in niche roles in the future unless they can diversify. Vendors with multiple data footprint reduction tools will also do better than those with only a single function or narrowly focused tool.

However for those who only have a single or perhaps a couple of tools, well, guess what the approach and messaging will be.

After all, if all you have is a hammer, everything looks like a nail; if all you have is a screwdriver, well, you get the picture.

On the other hand, if you are still not clear on what all this means, send me a note, give me a call, post a comment or a tweet, and I will be happy to discuss it with you.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Availability or lack thereof: Lessons From Our Frail & Aging Infrastructure

I have a new blog post over at Enterprise Efficiency about aging infrastructures, including those involved with IT, Telecom and related areas.

As a society, we face growing problems repairing and maintaining the vital infrastructure we once took for granted.

Most of these incidents involve aging, worn-out physical infrastructure desperately in need of repair or replacement. But infrastructure doesn’t have to be old or even physical to cause problems when it fails.

The IT systems and applications all around us form a digital infrastructure that most enterprises take for granted until it’s not there.

Bottom line, there really isn’t much choice.

You can either pay up front now to update aging infrastructures, or wait and pay more later. Either way, there will be a price to pay, and you cannot realize cost savings until you actually embark on that endeavor.

Here is the link to the full blog post over at Enterprise Efficiency.

Ok, nuff said.

Cheers gs


July 2010 Odds and Ends: Perspectives, Tips and Articles

Here are some items that have been added to the main StorageIO website news, tips and articles, video podcast related pages that pertain to a variety of topics ranging from data storage, IO, networking, data centers, virtualization, Green IT, performance, metrics and more.

These content items include various odds and end pieces such as industry or technology commentary, articles, tips, ATEs (See additional ask the expert tips here) or FAQs as well as some video and podcasts for your mid summer (if in the northern hemisphere) enjoyment.

The New Green IT: Productivity, supporting growth, doing more with what you have

Energy efficient and money saving Green IT or storage optimization are often associated with things like MAID, Intelligent Power Management (IPM) for servers and storage, disk drive spin down or data deduplication. In other words, technologies and techniques to minimize or avoid power consumption as well as the subsequent cooling requirements, which for some data, applications or environments can be the case. However, there is also a shift from energy avoidance toward being efficient, effective, productive, not to mention profitable, as forms of optimization. Collectively these various techniques and technologies help address or close the Green Gap and can reduce Green IT confusion by boosting productivity (same goes for servers or networks) in terms of more work, IOPS, bandwidth, data moved, frames or packets, transactions, videos or emails processed per watt per second (or other unit of time).
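The shift from energy avoidance to productivity can be reduced to a simple ratio: useful work done per watt of power drawn. As a minimal sketch, the snippet below computes that metric; the device names and figures are hypothetical illustrations, not measurements of any real product.

```python
# Hedged sketch of a productivity-per-watt metric as described above.
# All device names and numbers are hypothetical examples.

def productivity_per_watt(work_done: float, watts: float) -> float:
    """Return units of work (IOPS, MB/s, transactions, etc.) per watt."""
    if watts <= 0:
        raise ValueError("power draw must be positive")
    return work_done / watts

# Hypothetical devices: (name, IOPS delivered, average watts under load)
devices = [
    ("baseline array", 20000, 500.0),
    ("optimized array", 28000, 450.0),
]

for name, iops, watts in devices:
    print(f"{name}: {productivity_per_watt(iops, watts):.1f} IOPS/watt")
```

The same function works for any unit of work per unit of time (bandwidth, transactions, emails processed), which is the broader point: measure what gets done per watt, not just watts avoided.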

Click here to read and listen to my comments about boosting IOPS per watt, or here to learn more about the many facets of energy efficient storage, and here about different aspects of storage optimization. Want to read more about the next major wave of server, storage, desktop and networking virtualization? Then click here to read about virtualization life beyond consolidation, where the emphasis or focus expands to abstraction, transparency and enablement in addition to consolidation for servers, storage and networks. If you are interested in metrics and measurements, Storage Resource Management (SRM), or discussion of various macro data center metrics including PUE among others, click on the preceding links.

NAS and Shared Storage, iSCSI, DAS, SAS and more

Shifting gears to general industry trends and commentary, here are some comments on consumer and SOHO storage sharing, the role and importance of Value Added Resellers (VARs) for SMB environments, as well as the top storage technologies that are in use and remain relevant. Here are some comments on iSCSI, which continues to gain in popularity, as well as storage options for small businesses.

Are you looking to buy a new server or upgrade an existing one? Here are some vendor and technology neutral tips to help determine needs and requirements so you can be a more effective, informed buyer. Interested in, or do you want to know more about, Serial Attached SCSI (6Gb/s SAS), including its use as external shared direct attached storage (DAS) for Exchange, SharePoint, Oracle, VMware or Hyper-V clusters among other usage scenarios? Check out this FAQ as well as this podcast. Here are some other items, including a podcast about using storage partitions in your data storage infrastructure, an ATE about what type of 1.5TB centralized storage to use to support multiple locations, and a video on scaling with clustered storage.

That is all for now, hope all is well and enjoy the content.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

A Storage I/O Momentus Moment

I recently asked for and received from Seagate (see recent post about them moving their paper headquarters to Ireland here) a Momentus XT 500GB 7200 RPM 2.5 inch Hybrid Hard Disk Drive (HHDD) to use in an upcoming project. That project is not to test a bunch of different Hard Disk Drives (HDDs), HHDDs, Removable HDDs (RHDDs) or Solid State Devices (read more about SSDs here and here, or storage optimization here) in order to produce results for someone for a fee or other consideration.

Do not worry, I am not jumping on the bandwagon of calling my office collection of computers, storage, networks and software the StorageIO independent hands-on test lab. Instead, my objective is to actually use the Momentus XT in conjunction with other storage I/O devices, ranging from notebook or laptop, desktop or server, to NAS and cloud based storage, as part of regular projects that I'm working on both in the office and while traveling for various out and about activities.

More often than not these days, the common thinking or perception is that if anybody is talking about a product or technology it must be a paid-for activity; after all, why would anyone write or talk about something without getting or expecting something in exchange (granted, there are some exceptions)? Given this era of transparency talk, let's walk the talk. Here is my disclosure, which, for those who have read my content before, hopefully shows that disclosures should be simple, straightforward, easy, fun and common sense based instead of dancing around or hiding what may be being done.

Disclosure moment:
This is not a paid-for or sponsored blog (read my disclosure statement here) and in fact is in no way connected to, in conjunction with, endorsed, sanctioned or approved by Seagate; nor are they currently, or have they been, a client. I did, however, ask them for, and they offered to send me, a single 500GB Momentus XT Hybrid Hard Disk Drive (HHDD) with no enclosure, accessories, adapters, cables, software or other packaging, to be used for a project I am working on. I did buy from Amazon.com a Seagate GoFlex USB 3.0 to SATA 3 connection cable kit that I had been eyeing for some other projects. Nuff said about that.

What am I doing with a Seagate Momentus XT
As to the project I am working on, it has nothing to do with Seagate or any other vendors or clients for that matter, as it is a new book that I will tell you more about in future posts. What I can share with you for now is that it is a follow-on to my previous books (The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)). The new book will also be published by CRC Taylor and Francis.

Now, for those who are interested in why I would request a Momentus XT Hybrid Hard Disk Drive (HHDD) from Seagate while turning down other offers of free hardware, software, services, trips and the like, the reasons are several. First, I already own some Momentus HDDs (as perhaps you do and may not realize it), so I thought it would be fun and relatively straightforward to make some general comparisons. Second, I needed some additional storage and I/O improvements to complement and coexist with what I already have.

Does this mean that the book is going to be about flash Solid State Devices (SSD) since I am using a Momentus XT HHDD? The short answer is no; it will be much more broadly focused. Certainly, various types of storage, I/O control, public and private clouds, management, gaining control, networking, virtualization as well as other hardware, software and services techniques and technologies will be discussed, building on my two previous books.

In addition, I want to see how compatible and useful in everyday activities the HHDDs are, as opposed to running a couple of standard Iometer or other so-called lab bench tests. After all, when you buy storage or any IT solution, do you buy it to run tests in your lab, or do you buy it to do actual day to day tasks?

I have also been a fan of HHDDs as well as flash and DRAM based SSDs for many years (make that decades for SSDs) and see the opportunity to increase how I actually use the HDDs, HHDDs, SSDs and Removable Hard Disk Drives (RHDDs) I have bought in the past, in conjunction with NAS, DAS and other storage, to support my book writing as well as other projects.

What is the Seagate Momentus XT
The Seagate Momentus series of HDDs are positioned as desktop, notebook and laptop devices that vary in rotational speed (RPM), physical form factor, storage capacity as well as price. The XT is a Hybrid Hard Disk Drive (HHDD), essentially a best-of-breed (hence Hybrid) device that combines the high capacity and low cost of a traditional 2.5 inch 7200 RPM HDD with the performance boost of flash SSD memory. For example, some initial testing working with very large files has found that the XT can in some instances be as fast as an SSD while holding 10x the capacity at a favorable price.

In other words, an effective balance of cost per GByte of capacity, cost per IOP and energy efficiency per IOP. This does not mean, however, that an XT should be used everywhere or as a replacement for DRAM or flash SSD; quite to the contrary, as those devices are good tools for specific needs or applications. Instead, the XT provides a good balance of performance and capacity, bridging the gap between the price per capacity of traditional spinning HDDs and the performance per cost of SSDs. (For those interested, here is a link to what Seagate is doing with SSD, e.g. Pulsar, in addition to HHDD and HDD.)
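That cost-per-capacity versus cost-per-IOP trade-off is easy to make concrete. The sketch below compares hypothetical device classes; all prices, capacities and IOPS figures are illustrative circa-2010 ballpark assumptions, not vendor specifications.

```python
# Hedged sketch comparing cost per GB and cost per IOP across device classes.
# Every figure below is an illustrative assumption, not a measured or quoted
# specification for any actual product.

devices = {
    # name: (price_usd, capacity_gb, effective_iops)
    "7200 RPM HDD": (100.0, 500, 150),
    "Hybrid HDD":   (130.0, 500, 600),   # flash cache boosts effective IOPS
    "Flash SSD":    (700.0, 64, 5000),
}

for name, (price, gb, iops) in devices.items():
    print(f"{name}: ${price / gb:.2f}/GB, ${price / iops:.3f}/IOP")
```

Even with made-up numbers the shape of the result matches the argument above: the HHDD sits between the HDD's low cost per GB and the SSD's low cost per IOP, which is exactly the gap it is meant to bridge.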

Value proposition and business (or consumer) benefits moment
What is the benefit, why not just go all flash?

Simple: price, unless your specific needs fit into the capacity space of an SSD and you need both the higher performance and lower energy draw (with its subsequent heat generation). Note that I did not say heat elimination; during a recent quick test of copying 6GB of data to a flash based SSD, it was warm just as the XT device was, though a bit cooler than a comparable 7200 RPM 2.5 inch drive. If you can afford a full flash or DRAM based SSD device and it fits your needs and compatibility requirements, go for it. However, also make sure that you will see the full expected benefit of adding an SSD to your specific solution, as not all implementations are the same (e.g. do your homework).

Why not just go all HDD?

Simple: economics and performance, which is why, as I said back in 2005, HHDDs have a very bright future and will IMHO drive a wedge between the traditional HDD and emerging flash based SSD markets, at least for non consumer devices on a near term basis, given their compatibility capabilities.

In other words, you could think of it as a compromise, or as a best of breed. For example, I can see where, for compatibility, not to mention cost and customer comfort with a known entity, HHDDs will gain some popularity in desktops, laptops, notebooks as well as other devices where a performance boost is needed, however not at the expense of throwing out capacity or tight economic budgets.

I can also see some interesting scenarios for hosting virtual machines (VMs) to support server virtualization with VMware, Hyper-V or Xen based solutions among others. Another scenario is bulk storage, archive and backup solutions, where the HHDD with its extended cache in the form of flash can help boost performance of read or write operations on VTLs and dedupe devices, archive platforms, backup or other similar functions. Sure, the Momentus XT is positioned as a desktop, notebook type device; however, has that ever stopped vendors or solution providers from using those types of devices in roles other than what they were designed for? I am just sayin.

Speeds, feeds and buzzword bingo moment
Seagate has many different types of disk drives that can be found here. In general, the Momentus XT is a 2.5 inch small form factor (SFF) Hybrid Hard Disk Drive (HHDD) available in 500GB, 320GB and 250GB capacities (I have the 500GB model ST95005620AS) with 4GB of SLC NAND (flash) SSD memory, 32MB of drive level cache and an underlying 7200 RPM disk drive with a SATA 3Gb/s interface including Native Command Queuing (NCQ). Now if you want to say that the XT implements tiered storage in a single device (DRAM, flash and HDD), go ahead. Following are a couple of links where you can learn more.

Seagate Seatools disk drive diagnostic software (free here)

Seagate FreeAgent Goflex Upgrade Cable (USB 3.0 to SATA 3 STAE104) (Seagate site and Amazon)

Seagate Momentus XT site with general information, product overview and data sheets as well as on Amazon

What does a Momentus XT have to do with writing a book?
If you have ever written a book, or for that matter done a large development project of any type, then this should be a bit familiar. These types of projects include the need to keep organized as well as protected multiple copies of documents (a deduper's dream), including text, graphics or figures, spreadsheets, not to mention project tracking material among others. Likewise, as is the case with other authors who work for a living, much of these books are written, edited, proofed or thought about while traveling to different events, client sites, conferences, meetings or on vacation for that matter. Hence the need to have multiple copies of data on different devices to help guard against when something happens (note that I did not say if).
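The multiple-copies-on-multiple-devices habit can be automated. As a minimal sketch (not any particular tool I use), the snippet below copies a working folder to more than one target and records checksums so silent corruption can be caught later; the paths involved are hypothetical placeholders.

```python
# Hedged sketch of the multiple-copies workflow described above: copy every
# file in a working folder to each backup target (e.g. an external HDD and a
# NAS share) and keep SHA-256 digests so copies can be verified later.

import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in 1MB chunks to avoid loading it all into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def backup(source: Path, targets: list[Path]) -> dict[str, str]:
    """Copy every file under source to each target; return name->digest map."""
    digests = {}
    for f in source.rglob("*"):
        if f.is_file():
            rel = f.relative_to(source)
            digests[str(rel)] = sha256_of(f)
            for t in targets:
                dest = t / rel
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, dest)  # copy2 preserves timestamps
    return digests
```

Running the same digest pass against each target later, and comparing against the saved map, confirms that every copy is still intact before you need it.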

This is nothing new, as on each of my last two solo book projects, as well as when I was a coauthor contributing content to other books including The Resilient Enterprise (Veritas/Symantec), much of the content was created while traveling, relying on portable storage and backup while on the road. Something someone pointed out to me recently is that this is an example of eating your own dog food, or eliminating the shoemaker's children syndrome (where the shoemaker makes shoes for others, however not for his own children).

Initial moments and general observations
From time to time I will post some notes and observations about how the Momentus XT is performing or behaving. If all goes as planned, and so far it has, it should be very transparent, coexisting with some of my Removable Hard Disk Drives (RHDDs) such as the Imation Odyssey, which I bought several years ago for offsite bulk removable storage of data that goes to a secure vault somewhere.

Initial deployment, other than a stupid mistake on my part, has been smooth. What was the stupid mistake, you ask? Simple: when I attached the drive via a USB 3.0 cable to SATA 3 connector to one of my XP SP3 systems, Windows saw the device; however, it did not show up in the list of available devices. Ok, I know, I know, it was late in the evening; however, that is no excuse for not realizing that the disk had not yet been initialized, let alone formatted. A quick check using SeaTools (free here) showed all was well. I then launched Windows Disk Manager, did the initialize, followed by format, and all was good from that point on. Wow, wonder how much credibility I will lose over that gaffe with the techno elite (that is a joke and a bit of humor btw).

I have already done some initial familiarization and compatibility testing with some of my other drives, including a 2.5 inch 64GB SATA flash SSD as well as a 2.5 inch 7200 RPM HDD, both of which I use for bulk data movement activities. At some point I also plan on attaching the XT to my Iomega IX4 NAS to try various things, as I have done with other external devices in the past.

Granted, these were not ideal conditions, as I was in a hurry and wanted to get some quick info. It was also probably a less than ideal configuration, as the format after the HDD was first initialized took about an hour using a FAT32 plug and play configuration. With NTFS and other optimizations I assume it can be better; however, this was again just to get an initial glimpse of the device in use.

Given that it is a HHDD that uses flash as a big buffer in front of a 500GB HDD plus 32MB of cache as a backing store, it was interesting attaching it to the computer, waiting a few minutes, then launching a file copy. Where a normal HDD would start slightly vibrating due to rotation, it was a few moments before any vibration or noise was detected on the Momentus XT, which should be no surprise, as the flash was doing its job acting as a buffer until the HDD spun up for work.

I did some initial file copying back and forth between different computers while the LAN and NAS were busy doing other things, including backups to the Mozy cloud. No discrete time or performance benchmarks to talk about yet; however, overall the XT, not surprisingly, does seem to be a bit faster than another external 7200 RPM 2.5 inch drive I use for bulk data moves, on both reads and writes. Likewise, given that it is a hybrid HDD leveraging flash as an extended cache with an underlying HDD plus 32MB of cache, it may not always be as fast as my external 2.5 inch 64GB flash SSD; however, that is also a common apples to oranges comparison mistake (more on that in a future post).

For example, copying over 6GBytes of data (5 large files of various sizes) from a 7200 RPM 2.5 inch 160GB Momentus drive in a laptop to the XT HHDD and to a flash SSD both took about 8 to 9 minutes, whereas the normal copy to a 2.5 inch 5400 RPM HDD takes at least 14 to 15 minutes if not longer. Note that these are very rough and far from accurate or reflective comparisons, rather a quick gauge of benefits (e.g. getting data moved faster). When I get around to it, I will do some more accurate comparisons and put them into a follow-up post. However, I can already see that the XT has performance similar to the SSD with almost 10x the capacity, which means it could have an interesting role in supporting disk to disk (D2D) backups, which I will give a try.
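For the curious, those rough copy times translate into effective throughput with simple arithmetic. The sketch below uses the approximate figures quoted above (6GB in about 8.5 versus about 14.5 minutes); it is a back-of-the-envelope conversion, not a benchmark.

```python
# Hedged sketch: converting the rough copy times above into effective MB/s.
# Times are the approximate figures from the text, not measured benchmarks.

def throughput_mb_s(gbytes: float, minutes: float) -> float:
    """Effective throughput in MB/s for gbytes moved in the given minutes."""
    return (gbytes * 1000.0) / (minutes * 60.0)

# ~6GB moved: XT HHDD / flash SSD in roughly 8.5 min, 5400 RPM HDD in ~14.5
for name, minutes in [("XT HHDD / SSD", 8.5), ("5400 RPM HDD", 14.5)]:
    print(f"{name}: ~{throughput_mb_s(6.0, minutes):.1f} MB/s effective")
```

The absolute numbers are low because the copies ran over USB alongside other traffic; the useful part is the relative gap, which is consistent with the roughly 40 percent time savings described above.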

Eventually I will be removing the USB connector kit and actually installing the Momentus into a computer or two (not at the same time); however, I am currently walking before running. I'm still up in the air as to whether I should install the XT into a computer with Windows XP SP3, or simply do a new install of Windows 7 on it, and I'm open to thoughts, comments, feedback or applicable suggestions (besides switching to a MacBook or iPad).

Wrap up and fun moment

In the above photo, there is the Seagate Momentus (ST95005620AS), a GoFlex USB 3.0 to SATA conversion attachment cable (docking device), a fortune cookie, a couple of US quarters and Canadian two dollar coins (see out and about update), paper clips and a fishing bobber on a note pad. Why the coins? To show relative size and diversity across different geographies, as this device will be traveling (it missed out on a recent European trip to Holland).

Why the paper clips? Simple, why not, you never know when you will need one for something such as a MacGyver moment, or for pushing the tiny reset button on a device among other activities.

How about the fortune cookie? For good luck, and I might need a quick snack while having a cup of coffee; not to mention that Chinese, as well as Asian food in general, is one of my favorite cuisines to prepare or cook, not to mention eat.

Oh, what about the fishing bobber? Why not, it was just laying around, and you could also say that I'm fishing for information to see how the device fits into normal use, or that it is there for fun or to add color to the photo.

Oh, and the note pad? Hmm, well, if you cannot figure that one out besides being a backdrop, let's just say that the Momentus line in general, as well as the XT specifically, are targeted for notebook, desktop, laptop or other deployment scenarios. If you still don't see the connection, ok fine, feel free to post a comment and I will happily clarify it for you.

That is all for the moment, however I will be following up with more soon.

In the meantime, enjoy your summer if in the northern hemisphere (or winter if in the south).

Take lots of photos, videos and audio recordings to fill up those USB flash thumb drives (consumer SSD), SD memory cards, computer hard drives, cloud and online web hosting sites so that you have something to remember your special out and about moments by.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Gregs StorageIO Out and About Update: June 2010

With the 2010 summer solstice having occurred in the northern hemisphere that means it is time for a quick out and about update. It has been a busy winter and spring in the office, on the road as well as at home.

Some results of this recent activity have appeared on this blog and my web site as well as via other sites and venues. For example, activity or content ranges from Industry Trends and Perspectives white papers, reports, blogs, newsletter commentary, interviews, Internet TV, videos, web casts and podcasts (including several appearances on StorageMonkeys Infosmack as well as Rich Brambley's Virtumania) to ask the expert (ATE) questions, twitter tweets, tips and columns. Then there were the many in person presentations, keynote and seminar events, conferences and briefing sessions, along with virtual conferencing and advisory consulting sessions (read and see more here).

Greg Schulz and StorageIO in the news

Regarding new content appearing in different or new venues, Silicon Angle (including a video), Newstex and Enterprise Efficiency join the long list of industry and vertical, traditional along with new world venues that my content as well as industry trends and perspective commentary appear in. Read more about events and activities here, content here or commentary here.

Speaking of books, there is also some news in that The Green and Virtual Data Center (CRC) is now available on Amazon Kindle (click on links below), has been translated and published in China, and has undergone another round of printing to keep up with demand and make more copies available via global venues.

The Green and Virtual Data Center Chinese Edition: ISBN 978-7-115-21827-8

As for what I am seeing and hearing, check out the new series of Industry Trends and Perspective (ITP) short blog posts that complement other posts as well as content found on the main web site. These ITP pieces capture what I am hearing and seeing (that is, those things I can talk about that are not under NDA, of course) while out and about.

Some of the cities that I have been in while out and about doing keynote speaking and seminar events as well as other meetings have included Minneapolis, Miami, San Diego, Beverly Hills, San Jose, San Diego (again), Hollywood (again), Austin, Miami (again), New York City, Reston, Minneapolis (again), Irvine, New York City (again), Boston, Toronto, Atlanta, Chicago, Columbus, Philadelphia, Mountain View, Mahtomedi (Minneapolis area), Boston (again) and Indianapolis, plus Calgary, Jasper (Alberta) and Vancouver in Canada, as well as Nijkerk (Netherlands) for a one day seminar covering Industry Trends and Perspectives, in addition to changing planes in Atlanta, Detroit, Memphis and Las Vegas.

The planes should be obvious; however, what about automobiles, you ask? How about the following, taken from my rental car while driving north of LAX on the 405 after a January storm, during my trip from San Diego after a morning event to Beverly Hills to do an evening keynote.

Rainbow seen from 405 north of LAX
Driving north of LAX on the 405 with a rainbow after rain storm

On another car trip a few weeks later, after a different event in San Diego, I had a driver from a service behind the wheel so that I could get some work done before an evening meeting. Also on the car front, after flying into Indianapolis there was a car ride to the Indianapolis Motor Speedway (IMS) to do a keynote for a CDW sponsored event in gasoline alley a few days before the big race there. While we are on the topic of automobiles and technology, if you have not seen it, check out a post I did about what NAS, NASA and NASCAR have in common.

Gasoline Alley at Indy 500 practice during a speaking event (left), Indy 500 practice during a speaking event (right)

What about trains you ask?

VIA Rail: The Canadian (left), waiting for the morning train at Nijkerk Station to take me to Amsterdam Airport (right)

Besides the normal airport trams or trains, there was a fun Amtrak Acela ride from New York City Penn Station after a morning event in the city up to Boston, so as to be in place for a morning event the next day. Other train activity, besides airport, subway or commuter light rail in the US and Europe (Holland), included an overnight trip on VIA Rail Canada's the Canadian from Jasper, Alberta to Vancouver (some business tied into a long weekend). If you have never been to the Canadian Rockies, let alone traveled via train, check this one out; it was a blast and I highly recommend it.

Lake Louise, Alberta, Canada (left), bear family seen near Jasper, Alberta (right)
Lake Louise and Jasper area bear family in Alberta Canada

It just dawned on me, what about any out and about via boats?

Other than the Boston water taxi to Logan Airport from the convention center where EMCworld was held, and where I did an Internet TV interview along with @Stu and @Scott_Lowe, boat activity has so far been relegated to relaxation.

However, as all work and no play could make for a dull boy (or girl), I can update you that the out and about via boat fishing and sightseeing activity has been very good so far this season, even with high (then low, then high) water on the scenic St. Croix riverway.

Here are some scenes from out and about on the St. Croix River, including an eagle in its nest tending to its young (who cannot be seen in this photo) as well as fishing (and catching and releasing).

Greg and his fish guide (left), walleye fish (right): Out and About on the St. Croix River, photos by Karen Schulz
This is Walter (left), one of our neighbors who had an addition to their family this year (right): Out and About on the St. Croix River, photos by Karen Schulz

In between travels (as well as during them, on planes, trains and in hotel rooms) and relaxation breaks, I have also been working on several other projects. Some of these can be seen on the news, tips and articles, and video and podcast pages, in addition to custom research as well as advisory consulting services. I have also been working on some other projects, some of which will become visible over the next weeks and months, others not for a longer period of time, and yet others that fall under the NDA category, so that is all I have to say about that.

If you have not been receiving or seen them, the inaugural issue of the Server and StorageIO newsletter appeared in late February, followed by the second edition (Spring 2010) this past week. Both can be found here and here as well as at www.storageio.com/newsletter, or subscribe via newsletter@storageio.com.

StorageIO Newsletter

A question I often get asked is what I am hearing or seeing, particularly with regard to IT customers as well as VARs, during my travels. Here are some photos covering some of the things that I have seen so far this year while out and about.


Super TV or visualization device at the Texas Advanced Computing Center (TACC) in Austin
Note all of the Dell servers side by side under the screens, required to drive the image.


Taking a walk inside a supercomputer (left) and Texas Supercomputer (Note the horns)

View of the MTC during one of the stops on a five city server virtualization series I did
Microsoft Technology Center (MTC)

View from coach class (left), flight travel tools (right)
View from the back of the plane (left); airplane long haul essentials: water, food, iPod, coffee, eye shades (right)

Dutch boats
Boats in Holland taken after dinner before recent seminar I did in Nijkerk

Dutch snack food, yum yum (left), Dutch soccer or pub grub (right)
Dutch Soccer (Pub) food and snacks being enjoyed after a recent seminar in Nijkerk

Waiting at AMS for flight to MSP (left), airplane food and maps (right)
Airport waiting for planes in AMS (left), more airplane snacks and a map (right)

As to what I am seeing and hearing pertaining to IT, storage, networking and server trends or issues, they include among others (see the newsletter):

What's on deck and what am I working on?

Having had a busy, fun winter and spring, I'm going to get some relaxation time in during a couple of weeks with no travel; however, there is plenty to do and get ready for. The summer months will slow down a bit on the out and about travel and events scene, however not to a complete stop. In between, I will be preparing for upcoming events and advisory and consulting activities, as well as researching new material and topics, not to mention working on some projects that you will see or hear more about in the weeks and months to come.

For example, on July 8th I will be a guest on a webcast sponsored by Viridity discussing the importance of data center metrics, measurement and insight for effective management to enable energy efficient and effective data centers. In addition, I will be doing another five city storage virtualization series in Stamford, Cleveland, Miami, Tampa and Louisville during mid to late July, among other upcoming activities including VMworld in San Francisco.


Check out the events page for more details, specific dates and venues.

What about you?

What have you been doing or have planned for your summer?

Let me know what you are seeing or hearing as well as have been doing.

In the meantime, however, keep these hints and tips in mind:

  • Have plenty of reading material, physical (real books or magazines) or virtual (Kindle or other, as well as via the Internet or online), to read while at the beach (make sure your computer or PDA is backed up), poolside, in the backyard or elsewhere
  • Remember your eye shades (sunglasses or eye wear), hat and sunscreen and, if applicable, insect or bug repellent (e.g. RAID is still useful)
  • Drink plenty of fluids while outside in the summer heat, including non alcoholic ones that do not have umbrellas or other interesting garnish
  • Have a place to back up and protect all those summer photos, videos and audio clips that you record while on your out and about adventures. However, keep in mind privacy concerns when uploading them to various social mediums. After all, what happens in Vegas stays in Vegas, and what happens on the web stays on the web!

Thanks to everyone involved in the recent events, which can be seen here; as for those who will be participating in upcoming ones, I look forward to meeting and talking with you.

Until next time have a fun, safe and relaxing summer if you are in the northern hemisphere and for those down under, not to worry, spring is on the way soon for you as well.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

Follow via Google Feedburner here or via email subscription here.

VMware vExpert 2010: Thank You, I'm Honored to be Named a Member

This week while traveling I received an email note from John Troyer of VMware informing me that I had been nominated and selected as a VMware vExpert for 2010.


To say that I was surprised and honored would be an understatement.

Thus, I would like to thank all those involved in the nomination, evaluation and selection process for naming me to this esteemed group.

I would also like to say congratulations, best wishes and hello to all of the other 2010 vExperts. I'm looking forward to being involved and participating in the VMware vExpert community.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

Industry Trends and Perspectives: Storage Virtualization and Virtual Storage

This is part of an ongoing series of short industry trends and perspectives blog post briefs.

These short posts complement other longer posts along with traditional industry trends and perspective white papers, research reports and solution brief content found at www.storageioblog.com/reports.

The topic of this post is the storage virtualization trend and recent EMC virtual storage announcements.

Virtual storage or storage virtualization has been around as a technology and topic for some time now. Some would argue that storage virtualization is several years old, while others would say many decades, depending on your view or definition, which will vary by preferences, product, vendor, open or closed, hardware, network, software, not to mention features and functionality.

Consequently there are many different views and definitions of storage virtualization, some tied to product specifications, often leading to apples to oranges comparisons.

Back in the early to mid 2000s, there was plenty of talk around storage virtualization which then gave way to a relative quiet period before seeing adoption pickup in terms of deployment later in the decade (at least for block based).

More recently there has been a renewed flurry of storage virtualization activity with many vendors now shipping their latest versions of tools and functionality, EMC announcing VPLEX as well as the file virtualization vendors continuing to try and create a market for their wares (give it time, like block based, it will evolve).

One of the trends around storage virtualization, and part of the play on words EMC is using, is to change the order of the words. That is, where storage virtualization is often aligned with a product implementation (e.g. software on an appliance, on a switch or in a storage system) used primarily for aggregation of heterogeneous storage, with VPLEX EMC is referring to it as virtual storage.

What is interesting here is the play on life beyond consolidation, a trend that is also occurring with servers: using virtualization for agility, flexibility and ease of management for upgrades, adds, moves and changes as opposed to simply pooling LUNs and underlying storage devices. Stay tuned and watch for more in this space, and read the blog post below about VPLEX for more on this topic.

Related and companion material:
Blog: EMC VPLEX: Virtual Storage Redefined or Respun?

That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Upcoming Event: Industry Trends and Perspective European Seminar

Event Seminar Announcement:

IT Data Center, Storage and Virtualization Industry Trends and Perspective
June 16, 2010 Nijkerk, GELDERLAND Netherlands

Event Type: Training/Seminar with Greg Schulz of US based Server and StorageIO
Sponsor: Brouwer Storage Consultancy
Target Audience: Storage Architects, Consultants, Pre-Sales, Customer (technical) decision makers
Keywords: Cloud, Grid, Data Protection, Disaster Recovery, Storage, Green IT, VTL, Encryption, Dedupe, SAN, NAS, Backup, BC, DR, Performance, Virtualization, FCoE
Location and Venue: Ampt van Nijkerk, Berencamperweg,
Nijkerk, GELDERLAND NL
When: Wed. June 16, 2010, 9AM-5PM Local
Price: € 450,=
Event URL: https://storageioblog.com/book4.html
Contact: Gert Brouwer
Olevoortseweg 43
3861 MH Nijkerk
The Netherlands
Phone: +31-33-246-6825
Fax: +31-33-245-8956
Cell Phone: +31-652-601-309

info@brouwerconsultancy.com

Abstract: General items that will be covered include: current and emerging macro trends, issues, challenges and opportunities; common IT customer and IT trends, issues and challenges; opportunities for leveraging various current, new and emerging technologies and techniques; and what some of those new and improved technologies and techniques are. The seminar will provide insight on how to address various IT and data storage management challenges, and where and how new and emerging technologies can co-exist with as well as complement installed resources for maximum investment protection and business agility. Additional themes include cost and storage resource management, optimization and efficiency approaches, along with where and how cloud, virtualization and other topics fit into existing environments.

Buzzwords and topics to be discussed include, among others: FC and FCoE, SAS, SATA, iSCSI and NAS; I/O Virtualization (IOV) and convergence; SSD (Flash and RAM); RAID, second generation MAID and IPM; tape; performance and capacity planning and optimization; metrics; IRM tools including DPM, E2E, SRA and SRM as well as federated management; data movement and migration including automation or policy enablement; HA and data protection including backup/restore, BC/DR, security/encryption, VTL, CDP, snapshots and replication for virtual and non virtual environments; dynamic IT and optimization, the new Green IT (efficiency and productivity); distributed data protection (DDP) and distributed data caching (DDC); server and storage virtualization along with discussion about life beyond consolidation; SAN, NAS, clusters, grids, clouds (public and private), bulk and object based storage; unified and vendor prepackaged stacked solutions (e.g. EMC VCE among others); and data footprint reduction (servers, storage, networks, data protection and hypervisors among others).

Learn about other events involving Greg Schulz and StorageIO at www.storageio.com/events

EMC VPLEX: Virtual Storage Redefined or Respun?

In a flurry of announcements coinciding with EMCworld in Boston this week of May 10, 2010, EMC officially unveiled the Virtual Storage vision initiative (aka twitter hash tag #emcvs) and the initial VPLEX product. The Virtual Storage initiative was virtually previewed back in March (see my previous post here along with one from Stu Miniman (twitter @stu) of EMC here or here), and according to EMC the VPLEX product was made generally available (GA) back in April.

The Virtual Storage vision and associated announcements consisted of:

  • Virtual Storage vision – Big picture initiative view of what and how to enable private clouds
  • VPLEX architecture – Big picture view of federated data storage management and access
  • First VPLEX based product – Local and campus (Metro to about 100km) solutions
  • Glimpses of how the architecture will evolve with future products and enhancements


Figure 1: EMC Virtual Storage and Virtual Server Vision and Big Pictures

The Big Picture
The EMC Virtual Storage vision (Figure 1) is the foundation of a private IT cloud, which should enable characteristics including transparency, agility, flexibility, efficiency, always on operation, resiliency, security, on demand access and scalability. Think of it this way: EMC wants to enable and facilitate for storage what is being done by server virtualization hypervisor vendors including VMware (which happens to be owned by EMC), Microsoft HyperV and Citrix/Xen among others. That is, break down the physical barriers or constraints around storage, similar to how virtual servers release applications and their operating systems from being tied to a physical server.

While the current focus of desktop, server and storage virtualization has been on consolidation and cost avoidance, the next big wave or phase is life beyond consolidation, where the emphasis expands to agility, flexibility, ease of use, transparency and portability (Figure 2). In this next phase, which puts an emphasis on enablement and doing more with what you have while enhancing business agility, the focus extends from how much can be consolidated, or the number of virtual machines per physical machine, to using virtualization for flexibility and transparency (read more here and here or watch here).


Figure 2: Virtual Storage Big Picture

That same trend will be happening with storage where the emphasis also expands from how much data can be squeezed or consolidated onto a given device to that of enabling flexibility and agility for load balancing, BC/DR, technology upgrades, maintenance and other routine Infrastructure Resource Management (IRM) tasks.

For EMC, achieving this vision (both directly for storage, and indirectly for servers via their VMware subsidiary) is via local and distributed (metro and wide area) federation management of physical resources to support virtual data center operations. EMC building blocks for delivering this vision include VPLEX; data and storage management federation across EMC and third party products; FAST (fully automated storage tiering); SSD; and data footprint reduction and data protection management products among others.

Buzzword bingo aside (e.g. LAN, SAN, MAN, WAN, Pots and Pans) along with Automation, DWDM, Asynchronous, BC, BE or Back End, Cache coherency, Cache consistency, Chargeback, Cluster, db loss, DCB, Director, Distributed, DLM or Distributed Lock Management, DR, FCoE or Fibre Channel over Ethernet, FE or Front End, Federated, FAST, Fibre Channel, Grid, HyperV, Hypervisor, IRM or Infrastructure Resource Management, I/O redirection, I/O shipping, Latency, Look aside, Metadata, Metrics, Public/Private Cloud, Read ahead, Replication, SAS, Shipping off to Boston, SRA, SRM, SSD, Stale Reads, Storage virtualization, Synchronization, Synchronous, Tiering, Virtual storage, VMware and Write through among many other possible candidates, the big picture here is about enabling flexibility, agility, ease of deployment and management along with boosting resource usage effectiveness and presumably productivity on a local, metro and future global basis.


Figure 3: EMC Storage Federation and Enabling Technology Big Picture

The VPLEX Big Picture
Some of the tenets of the VPLEX architecture (Figure 3) include a scale out cluster or grid design for local and distributed (metro and wide area) access where you can start small and evolve as needed in a predictable and deterministic manner.


Figure 4: Generic Virtual Storage (Local SAN and MAN/WAN) and where VPLEX fits

The VPLEX architecture is targeted towards enabling next generation data centers including private clouds where ease and transparency of data movement, access and agility are essential. VPLEX sits atop existing EMC and third party storage as a virtualization layer between physical or virtual servers and in theory, other storage systems that rely on underlying block storage. For example in theory a NAS (NFS, CIFS, and AFS) gateway, CAS content archiving or Object based storage system or purpose specific database machine could sit between actual application servers and VPLEX enabling multiple layers of flexibility and agility for larger environments.

At the heart of the architecture is an engine running a highly distributed data caching algorithm that uses an approach where a minimal amount of data is sent to other nodes or members in the VPLEX environment to reduce overhead and latency (in theory boosting performance). For data consistency and integrity, a distributed cache coherency model is employed to protect against stale reads and writes along with load balancing, resource sharing and failover for high availability. A VPLEX environment consists of a federated management view across multiple VPLEX clusters including the ability to create a stretch volume that is accessible across multiple VPLEX clusters (Figure 5).


Figure 5: EMC VPLEX Big Picture


Figure 6: EMC VPLEX Local with 1 to 4 Engines

Each VPLEX local cluster (Figure 6) is made up of 1 to 4 engines (Figure 7) per rack, with each engine consisting of two directors, each having 64GByte of cache, localized Intel compute processors, and 16 Front End (FE) and 16 Back End (BE) Fibre Channel ports configured for high availability (HA). Communications between the directors and engines is Fibre Channel based. Metadata is moved between the directors and engines in 4K blocks to maintain consistency and coherency. Components are fully redundant and include phone home support.
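As a back of the envelope sketch of the figures above (assuming the cache and port counts are per director, which is how I read them), the aggregate resources of a fully configured four engine local cluster can be tallied like this:

```python
# Back-of-the-envelope tally of a VPLEX Local cluster, based on the
# per-engine figures described above: 2 directors per engine, with
# 64 GB cache and 16 FE + 16 BE Fibre Channel ports per director.
# Illustrative arithmetic only, not vendor documentation.
def cluster_totals(engines: int) -> dict:
    directors = engines * 2
    return {
        "directors": directors,
        "cache_gb": directors * 64,
        "fe_ports": directors * 16,
        "be_ports": directors * 16,
    }

print(cluster_totals(4))
# A fully configured 4 engine rack: 8 directors, 512 GB of cache,
# 128 FE and 128 BE Fibre Channel ports.
```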


Figure 7: EMC VPLEX Engine with redundant directors

Host servers initially supported by VPLEX include VMware, Cisco UCS, Windows, Solaris, IBM AIX, HPUX and Linux, along with EMC PowerPath and Windows multipath management drivers. Local server clusters supported include Symantec VCS, Microsoft MSCS and Oracle RAC along with various volume managers. SAN fabric connectivity supported includes Brocade and Cisco as well as legacy McData based products.

VPLEX also supports cache (Figure 8) write thru to preserve underlying array based functionality and performance, with 8,000 total virtualized LUNs per system. Note that underlying LUNs can be aggregated or simply passed through the VPLEX. Storage that attaches to the BE Fibre Channel ports includes EMC Symmetrix VMAX and DMX along with CLARiiON CX and CX4. Third party storage supported includes HDS 9000 and USPV/VM along with IBM DS8000 and others to be added as they are certified. In theory, given that VPLEX presents block based storage to hosts, one would also expect NAS, CAS or other object based gateways and servers that rely on underlying block storage to be supported in the future.
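To illustrate the write thru behavior just described, here is a minimal sketch (hypothetical names, not EMC code) of a cache that always commits writes to the backing storage before acknowledging them, which is how the underlying array retains authoritative data and its own functionality:

```python
# Minimal write-through cache sketch (illustrative only, not EMC code).
# Writes always land on the backing store before being acknowledged, so
# the underlying array keeps the authoritative copy; reads are served
# from cache when possible.
class WriteThroughCache:
    def __init__(self, backing_store: dict):
        self.backing = backing_store  # stands in for the underlying array
        self.cache = {}

    def write(self, block: int, data: bytes) -> None:
        self.backing[block] = data  # commit to the array first (write thru)
        self.cache[block] = data    # then refresh the cached copy

    def read(self, block: int) -> bytes:
        if block not in self.cache:                  # cache miss
            self.cache[block] = self.backing[block]  # fetch from the array
        return self.cache[block]

array = {}
c = WriteThroughCache(array)
c.write(7, b"hello")
assert array[7] == b"hello"   # data is on the array immediately
assert c.read(7) == b"hello"  # subsequent reads hit the cache
```

The design choice here is the trade-off the post alludes to: write thru gives up some write latency in exchange for leaving the underlying array's replication, snapshots and other features intact.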


Figure 8: VPLEX Architecture and Distributed Cache Overview

Functionality that can be performed between the cluster nodes and engines with VPLEX includes data migration and workload movement across different physical storage systems or sites, along with shared access with read caching on a local and distributed basis. LUNs can also be pooled across different vendors' underlying storage solutions, which retain their native feature functionality via VPLEX write thru caching.

Reads from various servers can be resolved by any node or engine, which checks its cache tables (Figure 8) to determine where to resolve the actual I/O operation from. Data integrity checks are also maintained to prevent stale reads or write operations from occurring. Actual metadata communications between nodes are very small, enabling statefulness while reducing overhead and maximizing performance. When a change to cached data occurs, meta information is sent to other nodes to maintain the distributed cache management index schema. Note that only pointers to where data and fresh cache entries reside are stored and communicated in the metadata via the distributed caching algorithm.
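The pointer based scheme described above can be sketched roughly as a directory that records only which node holds a fresh copy of each block, rather than shipping the data itself between nodes (a hypothetical structure inferred from the description, not EMC code):

```python
# Rough sketch of a distributed cache directory that stores only pointers
# (which node holds a fresh copy of a block), never the block data itself.
# Hypothetical structure based on the description above, not EMC code.
class CacheDirectory:
    def __init__(self):
        self.owners = {}  # block id -> node holding the fresh copy

    def note_write(self, block: int, node: str) -> None:
        # On a write, only this small metadata update is communicated:
        # record the new owner, implicitly marking other copies stale.
        self.owners[block] = node

    def resolve_read(self, block: int):
        # A read consults the directory to find where to resolve the I/O;
        # None means fall through to the underlying storage system.
        return self.owners.get(block)

d = CacheDirectory()
d.note_write(42, "engine-1")
assert d.resolve_read(42) == "engine-1"  # fresh copy lives on engine-1
assert d.resolve_read(99) is None        # not cached: read the array
```

The point of the sketch is the size asymmetry: what crosses the wire on each update is a block id and a node name, not the block contents, which is why the overhead can stay small.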


Figure 9: EMC VPLEX Metro Today

For metro deployments, two clusters (Figure 9) are utilized with distances supported up to about 100km or about 5ms of latency in a synchronous manner utilizing long distance Fibre Channel optics and transceivers including Dense Wave Division Multiplexing (DWDM) technologies (See Chapter 6: Metropolitan and Wide Area Storage Networking in Resilient Storage Networking (Elsevier) for additional details on LAN, MAN and WAN topics).
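As a rough sanity check on the distance and latency figures above (assuming the common rule of thumb that light in fiber travels at about 5 microseconds per km one way), the raw round trip propagation over 100km is only about a millisecond; protocol round trips and equipment latency consume the rest of the roughly 5ms synchronous budget:

```python
# Rough propagation delay estimate for synchronous replication over fiber.
# Assumes the common ~5 microseconds per km one-way rule of thumb (light
# in glass travels at roughly 2/3 of its speed in vacuum). Illustrative
# arithmetic only.
US_PER_KM = 5  # one-way propagation, microseconds per km

def round_trip_ms(distance_km: float) -> float:
    return distance_km * US_PER_KM * 2 / 1000.0  # out and back, in ms

print(round_trip_ms(100))  # raw propagation for 100 km, in ms
```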

Initially EMC is supporting local or Metro including Campus based VPLEX deployments requiring synchronous communications however asynchronous (WAN) Geo and Global based solutions are planned for the future (Figure 10).


Figure 10: EMC VPLEX Future Wide Area and Global

Online Workload Migration across Systems and Sites
Online workload or data movement and migration across storage systems or sites is not new with solutions available from different vendors including Brocade, Cisco, Datacore, EMC, Fujitsu, HDS, HP, IBM, LSI and NetApp among others.

For synchronization and data mobility operations such as a VMware VMotion or Microsoft HyperV Live migration over distance, information is written to separate LUNs in different locations across what are known as stretch volumes to enable non disruptive workload relocation across different storage systems (arrays) from various vendors. Once synchronization is completed, the original source can be disconnected or taken offline for maintenance or other common IRM tasks. Note that at least two LUNs are required; put another way, for every stretch volume, two LUNs are subtracted from the total number of available LUNs, similar to how RAID 1 mirroring requires at least two disk drives.
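The LUN accounting just described can be expressed simply: each stretch volume consumes one backing LUN per site, so stretch volumes come out of the total LUN budget much like RAID 1 halves usable drive count (illustrative arithmetic only):

```python
# Stretch volume LUN accounting sketch (illustrative arithmetic only).
# Each stretch volume needs one backing LUN per site, much like RAID 1
# mirroring needs at least two drives per mirrored volume.
def remaining_luns(total_luns: int, stretch_volumes: int, sites: int = 2) -> int:
    consumed = stretch_volumes * sites
    if consumed > total_luns:
        raise ValueError("not enough LUNs for the requested stretch volumes")
    return total_luns - consumed

# With the 8,000 virtualized LUNs per system mentioned earlier, 1,000
# two-site stretch volumes would consume 2,000 of them:
print(remaining_luns(8000, 1000))  # -> 6000
```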

Unlike other approaches that, for coherency and performance, rely on either no cached data or extensive amounts of cached data (along with the subsequent overhead of maintaining statefulness, consistency and coherency, including avoiding stale reads or writes), VPLEX relies on a combination of distributed cache lookup tables along with pass thru access to underlying storage when or where needed. Consequently, large amounts of data do not need to be cached or shipped between VPLEX devices to maintain data consistency, coherency or performance, which should also help keep costs affordable.

The approach is not unique, it is the implementation
Some storage virtualization solutions that have been software based running on an appliance or network switch as well as hardware system based have had a focus of emulating or providing competing capabilities with those of mid to high end storage systems. The premise has been to use lower cost, less feature enabled storage systems aggregated behind the appliance, switch or hardware based system to provide advanced data and storage management capabilities found in traditional higher end storage products.

VPLEX, while like any tool or technology it could be and probably will be made to do things other than what it is intended for, is really focused on flexibility, transparency and agility as opposed to being used as a means of replacing underlying storage system functionality. What this means is that while there are data movement and migration capabilities, including the ability to synchronize data across sites or locations, VPLEX by itself is not a replacement for the underlying functionality present in both EMC and third party (e.g. HDS, HP, IBM, NetApp, Oracle/Sun or others) storage systems.

This will make for some interesting discussions, debates and apples to oranges comparisons, in particular with those vendors whose products are focused around replacing or providing functionality not found in underlying storage system products.

In a nutshell summary, VPLEX and the Virtual Storage story (vision) are about enabling agility, resiliency, flexibility, and data and resource mobility to simplify IT Infrastructure Resource Management (IRM). One of the key themes of global storage federation is anywhere access on a local, metro, wide area and global basis across both EMC and heterogeneous third party vendor hardware.

Let's Put it Together: When and Where to use a VPLEX
While many storage virtualization solutions are focused around consolidation or pooling, similar to first wave server and desktop virtualization, the next general broad wave of virtualization is life beyond consolidation. That means expanding the focus of virtualization from consolidation, pooling or LUN aggregation to that of enabling transparency for agility, flexibility, data or system movement, technology refresh and other common time consuming IRM tasks.

Some applications or usage scenarios in the future should include, in addition to VMware VMotion, Microsoft HyperV live migration and Microsoft Clustering, other host server clustering solutions.


Figure 11: EMC VPLEX Usage Scenarios

Thoughts and Industry Trends Perspectives:

The following are various thoughts, comments, perspectives and questions pertaining to this and storage, virtualization and IT in general.

Is this truly unique as is being claimed?

Interestingly, the message I'm hearing out of EMC is not the claim that this is unique, revolutionary or the industry's first, as is so often the case with vendors, but rather that it is their implementation and ability to deploy on a broad basis that is unique. Now granted, you will probably hear, as is often the case with any vendor or fan boy/fan girl spin, claims of it being unique, and I'm sure this will also serve up plenty of fodder for mudslinging in the blogsphere, YouTube galleries, twitter land and beyond.

What is the DejaVu factor here?

For some it will be nonexistent, yet for others there is certainly a DejaVu depending on your experience or what you have seen and heard in the past. In some ways this is the manifestation of many vision and initiatives from the late 90s and early 2000s when storage virtualization or virtual storage in an open context jumped into the limelight coinciding with SAN activity. There have been products rolled out along with proof of concept technology demonstrators, some of which are still in the market, others including companies have fallen by the way side for a variety of reasons.

Consequently if you were part of or read or listened to any of the discussions and initiatives from Brocade (Rhapsody), Cisco (SVC, VxVM and others), INRANGE (Tempest) or its successor CNT UMD not to mention IBM SVC, StorAge (now LSI), Incipient (now part of Texas Memory) or Troika among others you should have some DejaVu.

I guess that also begs the question of what VPLEX is: in band, out of band, or a hybrid fast path control path? From what I have seen, it appears to be a fast path approach combined with distributed caching, as opposed to a cache centric in band approach such as IBM SVC (either on a server or as was tried on the Cisco special service blade) among others.

Likewise if you are familiar with IBM Mainframe GDPS or even EMC GDDR as well as OpenVMS Local and Metro clusters with distributed lock management you should also have DejaVu. Similarly if you had looked at or are familiar with any of the YottaYotta products or presentations, this should also be familiar as EMC acquired the assets of that now defunct company.

Is this a way for EMC to sell more hardware along with software products?

By removing barriers and enabling IT staffs to support more data on more storage in a denser and more agile footprint, the answer should be yes; something that we may see other vendors emulate, or make noise about what they can do or have been doing already.

How is this virtual storage spin different from the storage virtualization story?

That all depends on your view or definition as well as belief systems and preferences for what is or is not virtual storage vs. storage virtualization. For some, who believe that storage virtualization is virtualization if and only if it involves software running on some hardware appliance or a vendor's storage system for aggregation and common functionality, you probably won't see this as virtual storage let alone storage virtualization. For others it will be confusing, hence EMC introducing terms such as federation while avoiding terms including grid, to minimize confusion yet play off of cloud crowd commotion.

Is VPLEX a replacement for storage system based tiering and replication?

I do not believe so and even though some vendors are making claims that tiered storage is dead, just like some vendors declared a couple of years ago that disk drives were going to be dead this year at the hands of SSD, neither has come to life so to speak pun intended. What this means for VPLEX is that it leverages underlying automated or manual tiering found in storage systems such as EMC FAST enabled or similar policy and manual functions in third party products.

What VPLEX brings to the table is the ability to transparently present a LUN or volume locally or over distance with shared access while maintaining cache and data coherency. This means that if a LUN or volume moves the applications or file system or volume managers expecting to access that storage will not be surprised, panic or encounter failover problems. Of course there will be plenty of details to be dug into and seen how it all actually works as is the case with any new technology.

Who is this for?

I see this as being for environments that need flexibility and agility across multiple storage systems, either from one or multiple vendors, on a local, metro or wide area basis. This is for those environments that need the ability to move workloads, applications and data between different storage systems and sites for maintenance, upgrades, technology refresh, BC/DR, load balancing or other IRM functions, similar to how they would use virtual server migration such as VMotion or Live migration among others.

Do VPLEX and Virtual Storage eliminate the need for Storage System functionality?

I see some storage virtualization solutions or appliances that have a focus of replacing underlying storage system functionality instead of coexisting with or complementing it. A way to test for this approach is to listen or read whether the vendor or provider says anything along the lines of eliminating vendor lock in or control of the underlying storage system. That can be a sign of the golden rule of virtualization: whoever controls the virtualization functionality (at the server hypervisor or storage) controls the gold! This is why on the server side of things we are starting to see tiered hypervisors, similar to tiered servers and storage, where mixed hypervisors are being used for different purposes. Will we see tiered storage hypervisors or virtual storage solutions? The answer could be perhaps, or it depends.

Was Invista a failure that never went into production, and is this a second attempt at virtualization?

There is a popular myth in the industry that Invista never saw the light of day outside of trade show expo or other demos; however, the reality is that there are actual customer deployments. Invista, unlike other storage virtualization products, had a different focus: enabling agility and flexibility for common IRM tasks, similar to the expanded focus of VPLEX. Consequently Invista has often been drawn into apples to oranges comparisons with other virtualization appliances that have pooling as a focus along with other functions, or in some cases serve as appliance based storage systems.

The focus around Invista, and its usage by those customers who have deployed it that I have talked with, is around enabling agility for maintenance, facilitating upgrades, moves or reconfiguration and other common IRM tasks vs. using it for pooling of storage for consolidation purposes. Thus I see VPLEX extending the vision of Invista in a role of complementing and leveraging underlying storage system functionality instead of trying to replace those capabilities with the storage virtualizer.

Is this a replacement for EMC Invista?

According to EMC the answer is no and that customers using Invista (Yes, there are customers that I have actually talked to) will continue to be supported. However I suspect that over time Invista will either become a low end entry for VPLEX, or, an entry level VPLEX solution will appear sometime in the future.

How does this stack up or compare with what others are doing?

If you are looking to compare this to cache centric platforms such as IBM's SVC, which adds extensive functionality and capabilities within the storage virtualization framework, it is an apples to oranges comparison. VPLEX provides cache pointers on a local and global basis, functioning as a complement to the underlying storage system model, whereas SVC caches on a specific cluster basis while enhancing the functionality of the underlying storage system. Rest assured there will be other apples to oranges comparisons made between these platforms.

How will this be priced?

When I asked EMC about pricing, they would not commit to a specific price prior to the announcement, other than indicating that there will be options for on demand or consumption based (e.g. cloud) pricing, pricing per engine capacity, and subscription models (pay as you go).

What is the overhead of VPLEX?

While EMC runs various workload simulations (including benchmarks) internally as well as some publicly (e.g. Microsoft ESRP among others), they have been opposed to some storage simulation benchmarks such as SPC. The EMC opposition to simulations such as SPC has been varied; however, this could be a good and interesting opportunity for them to silence the industry (including myself), who continue to ask them (along with a couple of other vendors including IBM and their XIV) when they will release public results.

The interesting opportunity for EMC, I think, is that they do not even have to benchmark one of their own storage systems such as a CLARiiON or VMAX; instead, simply show the performance of some third party product that is already tested on the SPC website and then make a submission with that product running attached to a VPLEX.

If the performance or low latency forecasts are as good as they have been described, EMC can accomplish a couple of things by:

  • Demonstrating the low latency and minimal to no overhead of VPLEX
  • Show VPLEX with a third party product comparing latency before and after
  • Provide a comparison to other virtualization platforms including IBM SVC

As for EMC submitting a VMAX or CLARiiON SPC test in general, I'm not going to hold my breath for that; instead, I will continue to look at the other public workload tests such as ESRP.

Additional related reading material and links:

Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier)
Chapter 3: Networking Your Storage
Chapter 4: Storage and IO Networking
Chapter 6: Metropolitan and Wide Area Storage Networking
Chapter 11: Storage Management
Chapter 16: Metropolitan and Wide Area Examples

The Green and Virtual Data Center (CRC)
Chapter 3: (see also here) What Defines a Next-Generation and Virtual Data Center
Chapter 4: IT Infrastructure Resource Management (IRM)
Chapter 5: Measurement, Metrics, and Management of IT Resources
Chapter 7: Server: Physical, Virtual, and Software
Chapter 9: Networking with your Servers and Storage

Also see these:

Virtual Storage and Social Media: What did EMC not Announce?
Server and Storage Virtualization – Life beyond Consolidation
Should Everything Be Virtualized?
Was today the proverbial day that he!! Froze over?
Moving Beyond the Benchmark Brouhaha

Closing comments (For now):
As with any new vision, initiative, architecture and initial product, there will be plenty of questions to ask, items to investigate, and early adopter customers or users to talk with to determine what is real, what is future, what is usable and practical along with what is nice to have. Likewise there will be plenty of mud ball throwing and slinging between competitors, fans and foes, which, for those who enjoy watching or reading such things, should keep you well entertained.

In general, the EMC vision and story builds on and presumably delivers on past industry hype, buzz and vision with solutions that can be put into environments as a productivity tool that works for the customer, instead of the customer working for the tool.

Remember the golden rule of virtualization which is in play here is that whoever controls the virtualization or associated management controls the gold. Likewise keep in mind that aggregation can cause aggravation. So do not be scared, however look before you leap meaning do your homework and due diligence with appropriate levels of expectations, aligning applicable technology to the task at hand.

Also, if you have seen or experienced something in the past, you are more likely to have DejaVu as opposed to seeing things as revolutionary. However it is also important to leverage lessons learned for future success. YottaYotta was a lot of NaddaNadda; let's see if EMC can leverage their past experiences to make this a LottaLotta.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Happy Earth Day 2010!

Here in the northern hemisphere it is late April and thus mid spring time.

That means the trees are sprouting their buds, leaves and flowers while other plants and things come to life.

In Minnesota where I live, there is not a cloud in the sky today, the sun is out and it's going to be another warm day in the 60s; a nice day to not be flying or traveling and thus enjoy the fine weather.

Among other things of note on this earth day 2010 include:

  • Minnesota Twins' new home Target Field was just named the most Green Major League Baseball (MLB) stadium as well as the greenest in the US with its LEED (or see here) certification.
  • Icelands Eyjafjallajokull volcano continues to spew water vapor steam, CO2 and ash at a slower rate than last week when it first erupted with some speculating that there could be impending activity from other Icelandic volcanos. Some estimates placed the initial eruption CO2 impact and subsequent flight cancellations to be neutral, essentially canceling each other out, however Im sure we will be hearing many different stories in the weeks to come.

  • Image of Iceland Eyjafjallajokull Volcano Eruption via Boston.com

  • Flights to/from and within Europe and the UK are returning to normal
  • Toyota continues to deal with recalls on some of their US-built automobiles including the energy efficient Prius, some of which may have been purchased during the recent US cash for clunkers (CFC) program (hmm, is that ironic or what?)
  • Greenpeace, in addition to using a Facebook page to protest Facebook data center practices, is now targeting cloud IT in general including just before the Apple iPad launch (Here's some comments from Microsoft).
  • Vendors in all industries are lining up for the second coming of Green marketing or perhaps Green Washing 2.0

The new Green IT, moving beyond Green wash and hype

Speaking of Green IT including Green Computing, Green Storage, Virtualization, Cloud, Federation and more, here is a link to a post that I did back in February discussing how the Green Gap continues to exist.

The green gap exists and centers around the confusion over what green means, along with the common disconnects between core IT issues or barriers to becoming more efficient, effective, flexible and optimized, from both an economic as well as environmental basis, and what is commonly messaged under the green umbrella (read more here).

Regardless of where you stand on Green, Green washing, Green hype, environmentalism, eco-tech and other related themes, for at least a moment, set aside the politics and science debates and think in terms of practicality and economics.

That is, look for simple, recurring things that can be done to stretch your dollar or spending ability in order to support demand (see figure below) in a more effective manner, along with reducing waste. For example, to meet growing demand requirements in the face of shrinking or stagnant budgets, the action is to stretch available resources to do more work when needed, or retain more where applicable, with the same or less footprint. What this means is that while common messaging is around reducing costs, look at the inverse, which is to do more with available budgets or resources. The result is green in terms of both economic and environmental benefits.

IT Resource demand
Increasing IT Resource Demand

Green IT wheel of opportunity
Green IT enablement techniques and technologies

Look at and understand the broader aspects of being green which has both economical and environmental benefits without compromising on productivity or functionality. There are many aspects or facets of being green beyond those commonly discussed or perceived to be so (See Green IT enablement techniques and technologies figure above).

Certainly recycling of paper, water, aluminum, plastics and other items including technology equipment are important to reduce waste and are things to consider. Another aspect of reducing waste particularly in IT is to avoid rework that can range from finding network bottlenecks or problems that result in continuous retransmission of data for failed backup, replication or data transfers that cause lost opportunity or resource consumption. Likewise programming errors (bugs) or miss configuration that results in rework or lost productivity also are forms of waste among others.

Another theme is that of shifting from energy avoidance to energy efficiency and effectiveness, which are often thought to be the same. However the expanded focus is also about getting more work done when needed with the same or less resources (see figure below), for example increasing activity (IOPS, transactions, emails or videos served, bandwidth or messages) per watt of energy consumed.

From energy avoidance to effectiveness
Shifting from energy avoidance to effectiveness

One of the many techniques and approaches for addressing energy, including stretching resources and being green, is intelligent power management (IPM). With IPM, the focus is not strictly centered on energy avoidance; instead it is about intelligently adapting to different workloads or activity, balancing performance and energy. Thus when there is work to be done, get the work done quickly with as little energy as possible (IOPs or activity per watt); when there is less work, provide lower performance and thus smaller energy requirements; and when there is no work to be done, go into additional energy saving modes. Thus power management does not have to be exclusively about turning off the lights or IT equipment in order to be green.
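
The IPM approach can be sketched as a simple policy that selects a power state based on pending work; the state names and threshold below are illustrative assumptions for this sketch, not any vendor's actual algorithm.

```python
def power_state(pending_ops, busy_threshold=1000):
    """Pick an illustrative power state from the amount of queued work."""
    if pending_ops == 0:
        return "deep-sleep"        # no work: additional energy saving modes
    elif pending_ops < busy_threshold:
        return "low-power"         # light work: lower performance, less energy
    else:
        return "full-performance"  # heavy work: finish quickly for best work per watt
```

The point of the sketch is the trade-off, not the thresholds: energy tracks activity rather than simply being switched off.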

The following two figures look at Green IT past, present and future with an expanding focus on optimization and effectiveness, meaning getting more work done, storing more data for longer periods of time, and meeting growth demands with what appears to be additional resources, however at a lower per unit cost and without compromising on performance, availability or economics.

Green IT wheel of opportunity
Green IT: Past, present and future shift from avoidance to efficiency and effectiveness

Green IT wheel of opportunity
The new Green IT: Boosting business effectiveness, maximize ROI while helping the environment

If you think about going green as simply doing or using things more effectively, reducing waste, working more intelligently or effectively the benefits are both economical and environmentally positive (See the two figures above).

Instead of finding ways to fund green initiatives, shift the focus to how you can enable enhanced productivity, stretching resources further and doing more in the same or a smaller footprint (floor space, power, cooling, energy, personnel, licensing, budgets) for business economic and environmental sustainability, with environmental enhancements as a result.

Also keep in mind that small percentage changes on a large or recurring basis have significant benefits. For example, a small change in cooling temperatures, while staying within vendor guideline recommendations, can result in big savings for large environments.
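
As a hypothetical illustration of that recurring effect (all of the numbers here are made up for this sketch):

```python
annual_kwh = 5_000_000   # hypothetical large environment energy use per year
cost_per_kwh = 0.10      # hypothetical utility rate in dollars
savings_pct = 0.04       # e.g. a modest cooling set point adjustment

# A small percentage applied to a large recurring base adds up
annual_savings = annual_kwh * cost_per_kwh * savings_pct
print(f"${annual_savings:,.0f} saved per year")  # $20,000, recurring every year
```

A four percent change would barely register in a small shop, yet here it recurs as real money every year.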

 

Bottom line

If you are a business discounting green as simply a fad, or perhaps as a public relations (PR) initiative or an activity tied to reducing carbon footprints and recycling, then you are missing out on economic (top and bottom line) enhancement opportunities.

Likewise if you think that going green is only about the environment, then there is a missed opportunity to boost economic opportunities to help fund those initiatives.

Going green means many different things to various people and is often more broad and common sense based than most realize.

That is all for now, Happy Earth Day 2010.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

Spring 2010 StorageIO Newsletter

Welcome to the spring 2010 edition of the Server and StorageIO (StorageIO) newsletter.

This edition follows the inaugural issue (Winter 2010) incorporating feedback and suggestions as well as building on the fantastic responses received from recipients.

A couple of enhancements included in this issue (marked as New!) are a Featured Related Site along with Some Interesting Industry Links. Another enhancement based on feedback is additional commentary, which in upcoming issues will expand to include a column article along with industry trends and perspectives.

StorageIO Newsletter Image
Spring 2010 Newsletter

You can access this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions. Click on the following links to view the spring 2010 newsletter as HTML or PDF, or to go to the newsletter page.

Follow via Google Feedburner here or via email subscription here.

You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com

Enjoy this edition of the StorageIO newsletter, let me know your comments and feedback.

Also, a very big thank you to everyone who has helped make StorageIO a success!

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


It's US Census time, What about IT Data Centers?

It is that once a decade activity time this year referred to as the US 2010 Census.

With the 2010 census underway, not to mention it also being time for completing and submitting your income tax returns, if you are in IT, what about measuring, assessing, taking inventory of or analyzing your data and data center resources?

US 2010 Census forms
Figure 1: US 2010 Census forms

Have you recently taken a census of your data, data storage, servers, networks, hardware, software tools, services providers, media, maintenance agreements and licenses not to mention facilities?

Likewise have you figured out what, if any, taxes in terms of overhead or burden exist in your IT environment, or where opportunities exist to become more optimized and efficient to get an IT resource refund of sorts?

If not, now is a good time to take a census of your IT data center and associated resources, in what might also be called an assessment, review, inventory or survey of what you have, how it is being used, where, by whom and when, along with associated configuration, performance, availability, security and compliance coverage, along with costs and energy impact among other items.

IT Data Center Resources
Figure 2: IT Data Center Metrics for Planning and Forecasts

How much storage capacity do you have, and how is it allocated and used?

What about storage performance, are you meeting response time and QoS objectives?

Let's not forget about availability, that is planned and unplanned downtime; how have your systems been behaving?

From an energy or power and cooling standpoint, what is the consumption along with metrics aligned to productivity and effectiveness? These include IOPS per watt, transactions per watt, videos or emails served along with web clicks or page views per watt, processor GHz per watt, along with data movement bandwidth per watt and capacity stored per watt in a given footprint.
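
Such effectiveness metrics are simple ratios of useful work to energy consumed; for example (the two systems and their numbers below are hypothetical):

```python
def activity_per_watt(activity, watts):
    """Effectiveness metric: useful work (IOPS, transactions, emails or
    videos served, etc.) per watt of energy consumed."""
    return activity / watts

# Hypothetical comparison: the newer system does more work per watt
old_system = activity_per_watt(20_000, 800)   # 25.0 IOPS per watt
new_system = activity_per_watt(50_000, 1000)  # 50.0 IOPS per watt
```

Note that the newer system draws more total watts yet is the more effective of the two, which is the point of measuring work per watt rather than watts alone.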

Other items to look into for data centers besides storage include servers, data and I/O networks, hardware, software, tools, services and other supplies, along with the physical facility with metrics such as PUE. Speaking of optimization, how is your environment doing? That is another advantage of doing a data center census.

For those who have completed and sent in your census material along with your 2009 tax returns, congratulations!

For others in the US who have not done so, now would be a good time to get going on those activities.

Likewise, regardless of what country or region you are in, it's always a good time to take a census or inventory of your IT resources instead of waiting every ten years to do so.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


March Metric Madness: Fun with Simple Math

It's March and besides being spring in North America, it also means tournament season including the NCAA basketball series among others known as March Madness.

Given the office pools and other forms of playing with numbers tied to the tournaments and real or virtual money, here is a quick timeout looking at some fun with math.

The fun is showing how simple math can be used to show relative growth for IT resources such as data storage. For example, say that you have 10TBytes of storage or data and that it is growing at only 10 percent per year; five years out (treating the current year as year one), simple math yields 14.6TBytes.

Now let's assume the growth rate is 50 percent per year; in the course of five years, instead of having 10TBytes, that now jumps to 50.6TBytes. If you have 100TBytes today, a 50 percent growth rate would yield 506.3TBytes, or about half a petabyte, in 5 years. If by chance you have say 1PByte or 1,000TBytes today, at 25% year over year growth you would have 2.44PBytes in 5 years.
Basic Storage Forecast
Figure 1: Fun with simple math and projected growth rates
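
The simple math behind these figures treats the current amount as year one, so five years out compounds the growth rate four times. A quick sketch (the function name is made up for illustration) reproduces the numbers above:

```python
def projected_capacity(base, annual_growth, years=5):
    """Simple compound growth: the current amount is year one, so
    `years` out applies the growth rate (years - 1) times."""
    return base * (1.0 + annual_growth) ** (years - 1)

print(projected_capacity(10, 0.10))    # ≈ 14.6 TBytes at 10% growth
print(projected_capacity(10, 0.50))    # ≈ 50.6 TBytes at 50% growth
print(projected_capacity(100, 0.50))   # ≈ 506.3 TBytes, about half a PByte
print(projected_capacity(1000, 0.25))  # ≈ 2441 TBytes, about 2.44 PBytes
```

Swap in your own base amount and growth rate to see where your environment lands in five years.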

Granted this is simple math showing basic examples; however the point is that depending on your growth rate and the amount of current data or storage, you might be surprised at the forecast or projected needs in only five years.

In a nutshell, these are examples of very basic, primitive capacity forecasts that would vary by other factors. For example, if the data is 10TBytes and your policy calls for 25 percent free space, that would require even more storage than the base amount. Go with a different RAID level, some extra space for replication, snapshots and disk to disk backups, not to mention test and development, and those numbers go up even higher.
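
Those policy overheads compound on top of the base forecast. Here is a hypothetical gross-up; the 25 percent free space and the 1.25x protection multiplier (covering RAID, snapshots, replication and backup copies) are illustrative assumptions, not recommendations:

```python
def raw_capacity_needed(data_tb, free_space=0.25, protection_factor=1.25):
    """Gross up a data forecast: keep `free_space` of raw capacity unused,
    then multiply by a protection factor covering RAID, snapshots,
    replication, disk to disk backup copies and the like."""
    return data_tb / (1.0 - free_space) * protection_factor

print(raw_capacity_needed(10))  # ≈ 16.7 TBytes raw for 10 TBytes of data
```

In other words, 10TBytes of data under those example policies needs roughly two thirds more raw capacity before any growth is even applied.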

Sure those amounts can be offset with thin provisioning, dedupe, archiving, compression and other forms of data footprint reduction; however the point here is to realize how simple math can portray a very basic forecast and picture of growth.

Read more about performance and capacity in Chapter 10 – Performance and capacity planning for storage networks – Resilient Storage Networks (Elsevier) as well as at www.cmg.org (Computer Measurement Group).

And that is all I have to say about this for now, enjoy March madness and fun with numbers.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Virtual Storage and Social Media: What did EMC not Announce?

Synopsis: EMC made a vision statement in a recent multimedia briefing that has a social networking angle as well as storage virtualization, virtual storage, public and private clouds.

Basically, EMC provided a preview, in a social media networking friendly manner, of a vision initially being referred to as EMC Virtual Storage (aka twitter hash tag #emcvs), which of course sounds similar to a pharmacy chain.

The vision includes stirring up the industry with a new discussion around virtual storage compared to the decade old coverage of storage virtualization.

The underlying theme of this vision is similar to that of virtual servers vs. server virtualization: just as there is the ability to move servers around, so too should there be the ability to move data around more freely on a local or global basis and in real or near real time. In other words, breaking the decades long affinity that has existed between data storage and the data that exists on it (Figure 1). Buzzword bingo themes include federated storage, virtual storage, public and private cloud along with global cache coherency among others.


Figure 1: EMC Virtual Storage (EMCVS) Vision

The rest of the story

On Thursday March 11th 2010 Pat Gelsinger (EMC President and COO, Information Infrastructure Products) held an interactive briefing with the global analyst community pertaining to future EMC trajectory or visions. One of the interesting things about this session was that it was not unique to industry analysts nor was it under NDA.

For example, here is a link that if still active, should provide access to the briefing material.

The visions being talked about include those that EMC has discussed in the past, such as virtualized data centers, or, putting a spin on the phrase, data center virtualization, along with public and private clouds as well as infrastructure resource management virtualization (Figure 2):


Figure 2: Public and Private Clouds along with Virtual Data Centers

Figure 2 is a fairly common slide used in many EMC discussions positioning public and private clouds along with virtualized data centers.


Figure 3: Tenants of the EMC Virtual Storage (EMCVS) vision


Figure 4: Enabling mobile data, breaking data and storage affinity


Figure 5: Enabling teleporting and virtual storage

Thus setting up the story for the need and benefit of distributed cache coherency, similar to the distributed lock management (DLM) used in local and wide area clustered file systems for maintaining data integrity.


Figure 6: Leveraging distributed cache coherency

This discussion around distributed cache coherency should evoke deja vu of IBM GDPS (Geographically Dispersed Parallel Sysplex) for mainframe, OpenVMS distributed lock management for VAX and Alpha clusters, Oracle RAC, and other parallel and clustered file systems. Likewise for those familiar with technology from Yotta Yotta, this should also ring familiar.
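
The data integrity role of a lock manager can be illustrated with a toy sketch: a writer must hold the lock for a given resource before updating it, which is conceptually what a DLM enforces across nodes. This is a single-process stand-in for illustration only, not how GDPS, Oracle RAC or any EMC product is actually implemented:

```python
import threading

class ToyLockManager:
    """Single-process stand-in for a distributed lock manager (DLM):
    one lock per resource name, held for the duration of each update
    so writers cannot interleave and corrupt shared state."""
    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()

    def lock_for(self, resource):
        with self._guard:  # serialize creation of per-resource locks
            if resource not in self._locks:
                self._locks[resource] = threading.Lock()
            return self._locks[resource]

    def update(self, resource, fn):
        with self.lock_for(resource):  # one writer per resource at a time
            fn()

balance = {"blk0": 0}
dlm = ToyLockManager()

def increment():
    # classic read-modify-write that can lose updates without a lock
    balance["blk0"] = balance["blk0"] + 1

def worker():
    for _ in range(10_000):
        dlm.update("blk0", increment)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance["blk0"])  # 40000: no lost updates while the lock is held
```

A real DLM does this across machines and distances, which is exactly where latency and cache coherency become the hard part of the vision.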

However while many are jumping on the Yotta Yotta familiarity bandwagon given comments made by Pat Gelsinger, something that came to mind is: what about EMC GDDR? Do not worry if that is an acronym or product you are not up on as an EMC follower; it stands for EMC Geographically Dispersed Disaster Restart (GDDR), a solution that is an alternative to IBM's proprietary GDPS. Perhaps there is no connection, perhaps there is some; however what role, if any, including lessons learned, will come from EMC's experience with GDDR, not to mention other clustered file systems?


Figure 7: The EMC vision as presented

One of the interesting things about the vision announcement, and perhaps part of floating it out for discussion, was a comment made by Pat Gelsinger. That comment was about enabling the wild Wild West for IT, something that perhaps one generation might enjoy, however a notion another would soon forget. I'm sure the EMC marketing team, including their new chief marketing officer (CMO) Jeremy Burton, can fine tune it with time.
 

More on the social networking and non NDA angle

As is often the case with many other vendors, these types of customer, partner, analyst or media briefings (either online or in person) are under some form of NDA or embargo as they contain forward looking, yet to be announced products, solutions, technologies or other business initiatives. Note, these types of NDA discussions are not typically the same as those that portray or pretend to be NDA in order to sound more important a few days before an announcement that has already been leaked to get extra coverage or what are also known as media embargos.

After some amount of time, the information covered in advance briefings is usually formally made public, along with additional details. Sometimes material covered under NDA is provided in advance such that third parties can prepare reports, deep dive analysis or assessments and other content that is made available at announcement or shortly thereafter. The material is often prepared by partners, vars, media, analysts, consultants, customers or others outside of the announcing company via different venues ranging from print, online columns, blogs, tweets, videos and more.

Lately there has been some confusion in the broader IT industry, as well as other industries, as to where and how to classify bloggers, tweeters or other social media practitioners. After all, is a blogger an analyst, journalist, freelance writer, advisor, vendor, consultant, customer, var, investor, hobbyist or competitor, not to mention, how does information get fed to them?

Likewise, NDAs and embargos have joined the list of fodder topics that some do not like for various reasons yet like to complain about for others. There is a time and place for real NDAs that cover and address material, discussions and other information that should not be shared. However all too often NDAs get watered down, particularly in the press release games where a vendor or public relations (PR) firm will dangle an announcement briefing a couple of days or perhaps a week or two prior to an announcement under the guise that it not be disclosed prior to the formal announcement.

Where these NDAs get tricky is that often they are honored by some and ignored by others; thus, those who honor the agreement get left behind by those who break the story. Personally I do not mind real NDAs that are tied to truly confidential material, discussions or other information that needs to be kept under wraps for various reasons. However the value or issues of NDAs is a whole different discussion; for now, let's get back to what EMC did not announce in their recent non-NDA briefing.

Different organizations are addressing social media in various ways, some ignoring it, others embracing it regardless of what it is. EMC is an example of a vendor who has embraced social networking and social media along with traditional means of developing and maintaining relations with the media (media or press relations), customers, partners, vars, consultants, investors (e.g. investor relations) as well as analysts (analyst relations).

For example, EMC works with analysts in traditional ways as they do with the media and other groups, however they also recognize that while some analysts (or media or investors or partners or customers or vars etc) blog and tweet (among other social networking mediums), not all do (as is also the case with media, customers, vars and so forth). Likewise EMC from a social media and networking perspective does not appear to define audiences based on the medium or tool that they use, rather, in a matrix or multi dimensional approach.

That is, an analyst with a blog is a blogger; a var or independent consultant with a blog is a blogger; a media person, including freelance writers, journalists, reporters or publishers, with a blog is a blogger; and advisors, partners and competitors with blogs are likewise treated as bloggers.



Some of the 2009 EMC Bloggers Lounge Visitors

Thus at their EMCworld event, admission to the bloggers lounge is as simple and non-exclusive as having a blog, regardless of what your role or usage of a blog happens to be. On the other hand, information is communicated via different channels, such as for traditional press via public relations folks, investors through investor relations, analysts via analyst relations, partners and customers through their venues and so forth.

When you think about it, this makes sense: after all, EMC sells and attaches storage to mainframes, open systems Windows, UNIX, Linux as well as virtual servers that use different tools, protocols, languages and points of interest. Thus it should not be surprising that their approach to communicating with different audiences leverages various mediums for diverse messages at multiple points in time.

 

What does all of this social media discussion have to do with the March 11 EMC event?

In my opinion, this was an experiment of sorts by EMC to test the waters by floating a new vision to their traditional pre-briefing audience in advance of talking with media prior to an actual announcement.

That is, EMC did not announce a new product, technology, initiative, business alliance or customer event, rather a vision and trajectory or signaling what they may be doing in the future.

How this ties to social media and networking is that rather than being an event only for those media, bloggers, tweeters, customers, consultants, vars, free lancers, partners or others who agreed to do so under NDA, EMC used the venue as an advance sounding board of sorts.

That is, by sticking to broad vision vs. proprietary, confidential or sensitive topics, the discussion has been put out in the open in advance to stimulate discussion in traditional reports, articles, columns or related venues, not to mention in near real time via twitter as well as via blogs and beyond.

Does this mean EMC will be moving away from NDAs anytime soon? I do not think so, as there is still very much a need for advance (and not just a couple of weeks prior to announcement) discussions around sensitive information. For example, with the trajectory or visionary discussion last week by EMC, the short presentation and limited slides prompted more questions than they addressed.

Perhaps what we are seeing is a new approach or technique of how organizations can use and bring social networking mediums into the mainstream business process as opposed to being perceived as niche or experimental mediums.

The reason I think it was an experiment is that EMC practices both traditional analyst and media relations along with emerging social media networking relations that include practitioners who span both audiences. For some, the social media bloggers and tweeters are a different audience than traditional media, writers, consultants or analysts; that is, they are a separate and unique audience.

Thus, in my opinion, and like human knees, elbows, feet, hands and ears as well as, well, you get the picture, I think that there are many different views, not to mention interpretations, of social media, social networking, blogging, analysts, consultants, advisors, media or press, customers, partners and so on, with diverse roles, functions and needs.

Where this comes back to the topic of last week's discussion is that of storage virtualization vs. virtual storage. Rest assured, in the time since the EMC briefing and certainly in the weeks or months to come, there will be plenty of knees, elbows, hands and other body parts flying and signaling what a particular view or definition of storage virtualization vs. virtual storage is.

Of course, some of these will be more entertaining than others ranging from well rehearsed, in some cases over the past decade or more to new and perhaps even revolutionary ones of what is and what is not storage virtualization vs. virtual storage, let alone cloud vs. cluster vs. grid vs. federated and beyond.

 

Additional Comments and thoughts

In general, I like the trajectory vision EMC is rolling out even if it causes confusion between what is virtual storage vs. storage virtualization; after all, we have been hearing about storage virtualization for over a decade now if not longer. Likewise, there has been plenty of talk about public clouds, so it is refreshing to see more discussion, and less cloud ware or cloud marketecture, of how to actually leverage what you have to adopt private cloud practices.

I suspect that as the EMC competition starts to hear or piece together what they think this vision is or is not, we should also start to hear some interesting stories, spins, counter pitches, debates, twitter fights, blog slams and YouTube videos, all of which also happen to consume more storage.

I also like what EMC is doing with social media and networking as a means or medium for building and maintaining relationships as well as for information exchange, complementing traditional means and mediums.

In other words, EMC is succeeding with social networking by not using it just as another megaphone to talk at or over people, rather as a means to engage, to get to know, to challenge and to exchange, regardless of whether you are a so called independent blogger, tweeter, analyst, media, consultant, customer, var, investor or partner among others.

If you are not already doing so, here are some EMC folks who actively participate in two way dialogues across different areas with @lendevanna helping to facilitate and leverage the masses of various people and subject matter experts including @chuckhollis @c_weil @cxi @davegraham @gminks @mike_fishman @stevetodd @storageanarchy @storagezilla @Stu and @vcto among many others.

Note that for you non twitter types, the previous are twitter handles (names or addresses) that can be accessed by replacing the @ sign with https://twitter.com/. For example @storageio = https://twitter.com/storageio

 

Twitter comments

Here are some comments that I posted via twitter during last week's briefing event with the hash tag #emcvs:

Is what was presented on the #emcvs #it #storage #virtualization call NDA material = Negative
Is what was presented on the #emcvs #it #storage #virtualization call a product announcement = NOpe
Is what was presented on the #emcvs #it #storage #virtualization call a statement of direction = Kind of
Is what was presented on the #emcvs #it #storage #virtualization call a hint of future functionality = probably
Is what was presented on the #emcvs #it #storage #virtualization call going to be shared with general public = R U reading this?
Is what was presented on the #emcvs #it #storage #virtualization call going to be discussed further = Yup
Is what was presented on the #emcvs #it #storage #virtualization call going to confuse the industry = Maybe
Is what was presented on the #emcvs #it #storage #virtualization call going to confuse customers = Depends on story teller
Is what was presented on the #emcvs #it #storage #virtualization call going to confuse competition = probably
Is what was presented on the #emcvs #it #storage #virtualization call going to provide fodder/fuel for bloggers = Yup
Anything else to add about #emcvs #it #storage #virtualization call today = Stay tuned, watch and listen for more!

Some additional questions and my perspectives on those include:

  • What did EMC announce? Nothing; it was not an announcement, it was a statement of vision.
  • Why did EMC hold a briefing without an NDA when nothing was announced? It is my opinion that EMC has a vision or direction they want to float, thus sharing it to get discussions going without actually announcing a specific product or technology.
  • Is this going to be a repackaged version of the Invista storage virtualization platform? I do not believe so.
  • Is this going to be a repackaged version of the intellectual property (IP) assets that EMC picked up from the defunct startup called Yotta Yotta? Given some of the references, along with what some of the themes and discussions center around, it is my guess that there is some Yotta Yotta IP along with other technologies that may be part of any future solution.
  • Who or what is YottaYotta? They were a late dot com era startup founded in 2000 that went through various incarnations and value propositions, with some solutions that shipped. Some of the late era IP included distributed cache coherency and distance enablement of large scale federated storage on a global basis.
  • Can the Yotta Yotta (or here) technology really scale? That remains to be seen; Yotta Yotta had some interesting demos, proofs of concept, early adopters and big plans, however they also amounted to Nada Nada. Perhaps EMC can make a Lotta Lotta out of it!

 

Other questions are still waiting for answers including among others:

  • Will EMC Virtual Storage (aka emcvs) become a common cure for typical IT infrastructure ailments?
  • Will this restart the debate around the golden rule of virtualization being whoever controls the virtualization controls the gold and thus vendors lock in?
  • Will this be a members only vision where only certain partners can participate?
  • What will other competitors respond with: technology, marketecture, FUD or something else?
  • What are the specific details of when, where and how the vision is implemented?
  • What will all of this cost, will it work with existing products or is a forklift upgrade needed?
  • Has EMC bitten off more than they can chew or deliver on or is Pat Gelsinger and his crew racing down a mountain and out in front of their skis, or, is this brilliance beyond what we mere mortals can yet comprehend?
  • Can global data cache coherency really be deployed with data integrity on a global and large scale without negatively impacting performance?
  • Can EMC make Lotta Lotta with this vision?

 

Here is what some of the EMC bloggers have had to say so far:

Chuck Hollis aka @chuckhollis had this to say

Stuart Miniman aka @stu had this to say

 

Summing it up for now

Let's see how the rest of the industry responds to this as the vision rolls out and perhaps sooner vs. later becomes technology that gets deployed and used.

I'm skeptical until more details are understood; however I also like it and am intrigued by it, if it can actually jump from Yotta Yotta slide ware to Lotta Lotta deployments.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio
