A Storage I/O Momentus Moment

I recently asked for and received from Seagate (see my recent post about them moving their paper headquarters to Ireland here) a Momentus XT 500GB 7200 RPM 2.5" Hybrid Hard Disk Drive (HHDD) to use in an upcoming project. That project is not to test a bunch of different Hard Disk Drives (HDDs), HHDDs, Removable HDDs (RHDDs) or Solid State Devices (read more about SSDs here and here, or storage optimization here) in order to produce results for someone for a fee or some other consideration.

Do not worry, I am not jumping on the bandwagon of calling my office collection of computers, storage, networks and software the StorageIO Independent hands-on test lab. Instead, my objective is to actually use the Momentus XT alongside other storage I/O devices ranging from notebook or laptop, desktop or server, NAS and cloud based storage in conjunction with regular projects that I'm working on, both in the office as well as while traveling for various out and about activities.

More often than not these days, the common thinking or perception is that if anybody is talking about a product or technology it must be a paid-for activity, as why would anyone write or talk about something without getting or expecting something in exchange (granted there are some exceptions). Given this era of transparency talk, let's walk the talk; here is my disclosure, which for those who have read my content before will hopefully show that disclosures should be simple, straightforward, easy, fun and common sense based instead of having to dance around or hide what may be being done.

Disclosure moment:
This is not a paid-for or sponsored blog (read my disclosure statement here) and in fact is in no way connected with, endorsed, sanctioned or approved by Seagate; nor have they been, or are they currently, a client. I did however ask them for, and they offered to send me, a single 500GB Momentus XT Hybrid Hard Disk Drive (HHDD) with no enclosure, accessories, adapter, cables, software or other packaging, to be used for a project I am working on. However I did buy from Amazon.com a Seagate GoFlex USB 3.0 to SATA 3 connection cable kit that I had been eyeing for some other projects. Nuff said about that.

What am I doing with a Seagate Momentus XT?
As to the project I am working on, it has nothing to do with Seagate or any other vendors or clients for that matter, as it is a new book that I will tell you more about in future posts. What I can share with you for now is that it is a follow on to my two most recent books (The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)). The new book will also be published by CRC Taylor and Francis.

Now for those who are interested in why I would request a Momentus XT Hybrid Hard Disk Drive (HHDD) from Seagate while turning down other offers of free hardware, software, services, trips and the like, the reasons are many fold. First, I already own some Momentus HDDs (as perhaps you do and may not realize it), thus I thought it would be fun and relatively straightforward to make some general comparisons. Second, I needed some additional storage and I/O improvements to complement and coexist with what I already have.

Does this mean that the book is going to be about flash Solid State Devices (SSD) since I am using a Momentus XT HHDD? The short answer is no, it will be much more broadly focused; however, various types of storage I/O control, public and private clouds, management, gaining control, networking, virtualization as well as other hardware, software and services techniques and technologies will certainly be discussed, building on my two previous books.

In addition, I want to see how compatible and useful the HHDDs are in everyday activities, as opposed to running a couple of standard Iometer or other so-called lab bench tests. After all, when you buy storage or any IT solution, do you buy it to be used in your lab to run tests, or do you buy it to do actual day to day tasks?

I have also been a fan of the HHDD as well as flash and DRAM based SSDs for many years (make that decades for SSDs) and see the opportunity to increase how I actually use HDDs, HHDDs and SSDs as well as Removable Hard Disk Drives (RHDDs), in conjunction with NAS, DAS and other storage that I have bought in the past, to support my book writing as well as other projects.

What is the Seagate Momentus XT?
The Seagate Momentus series of HDDs is positioned as desktop, notebook and laptop devices that vary in rotational speed (RPM), physical form factor, storage capacity as well as price. The XT is a Hybrid Hard Disk Drive (HHDD) that is essentially a best of breed (hence hybrid) type of device, incorporating the high capacity and low cost of a traditional 2.5" 7200 RPM HDD with the performance boost of flash SSD memory. For example, some initial testing working with very large files has found that the XT can in some instances be as fast as an SSD while holding 10x the capacity at a favorable price.

In other words, an effective balance of cost per GByte of capacity, cost per IOP and energy efficiency per IOP. This does not mean however that an XT should be used everywhere or as a replacement for DRAM or flash SSD; quite to the contrary, as those devices are good tools for specific needs or applications. Instead, the XT provides a good balance of performance and capacity to bridge the gap between the price per capacity of traditional spinning HDDs and the performance per cost of SSD. (For those interested, here is a link to what Seagate is doing with SSD, e.g. Pulsar, in addition to HHDD and HDD.)

Value proposition and business (or consumer) benefits moment
What is the benefit, why not just go all flash?

Simple: price, unless your specific needs fit into the capacity space of an SSD and you need both the higher performance and lower energy draw (with subsequent heat generation). Note that I did not say heat elimination, as during a recent quick test of copying 6GB of data to a flash based SSD, it was warm just as the XT device was, although a bit cooler than a comparable 7200 RPM 2.5" drive. If you can afford the full SSD flash or DRAM based device and it fits your needs and compatibility, go for it. However, also make sure that you will see the full expected benefit of adding an SSD to your specific solution, as not all implementations are the same (e.g. do your homework).

Why not just go all HDD?

Simple: economics and performance, which is why I said back in 2005 that HHDDs had a very bright future and will IMHO drive a wedge between the traditional HDD and emerging flash based SSD markets, at least for non-consumer devices on a near term basis, given their compatibility capabilities.

In other words, you could think of it as a compromise, or as a best of breed. For example, I can see where for compatibility, not to mention cost and customer comfort with a known entity, HHDDs will gain some popularity in desktops, laptops, notebooks as well as other devices where a performance boost is needed, however not at the expense of throwing out capacity or tight economic budgets.

I can also see some interesting scenarios for hosting virtual machines (VMs) to support server virtualization with VMware, Hyper-V or Xen based solutions among others. Another scenario is bulk storage or archive and backup solutions, where the HHDD with its extended cache in the form of flash can help boost performance of read or write operations on VTLs and dedupe devices, archive platforms, backup or other similar functions. Sure, the Momentus XT is positioned as a desktop, notebook type device, however has that ever stopped vendors or solution providers from using those types of devices in roles other than what they were designed for? I am just sayin.

Speeds, feeds and buzzword bingo moment
Seagate has many different types of disk drives that can be found here. In general, the Momentus XT is a 2.5" small form factor (SFF) Hybrid Hard Disk Drive (HHDD) available in 500GB, 320GB and 250GB capacities (I have the 500GB model ST95005620AS) with 4GB of SLC NAND (flash) SSD memory, 32MB of drive level cache, an underlying 7200 RPM disk drive and a SATA 3Gb/s interface, as well as Native Command Queuing (NCQ). Now if you want to say that the XT implements tiered storage in a single device (DRAM, flash and HDD), go ahead. Following are a couple of links where you can learn more.

Seagate SeaTools disk drive diagnostic software (free here)

Seagate FreeAgent GoFlex Upgrade Cable (USB 3.0 to SATA 3 STAE104) (Seagate site and Amazon)

Seagate Momentus XT site with general information, product overview and data sheets as well as on Amazon

What does a Momentus XT have to do with writing a book?
If you have ever written a book, or for that matter done a large development project of any type, then things should be a bit familiar. These types of projects include the need to keep organized as well as protected multiple copies of documents (a deduper's dream) including text, graphics or figures, spreadsheets, not to mention project tracking material among others. Likewise, as is the case with other authors who work for a living, much of these books are written, edited, proofed or thought about while traveling to different events, client sites, conferences, meetings or on vacation for that matter. Hence the need to have multiple copies of data on different devices to help guard against when something happens (note that I did not say if).
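For those curious what that looks like in practice, here is a minimal sketch in Python (my own illustration with placeholder paths and drive letters, not any particular product or my exact workflow) of keeping a dated copy of a book project folder on an external drive such as the XT.

```python
# Hypothetical sketch: keep a dated copy of a working folder on an external drive.
# SOURCE and TARGET are placeholder paths, not real ones from this post.
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path(r"C:\Projects\Book")   # assumed working folder
TARGET = Path(r"E:\Backups")         # assumed drive letter for the external HHDD

def snapshot(source: Path, target_root: Path) -> Path:
    """Copy the working folder to a dated subfolder on the external drive."""
    dest = target_root / f"{source.name}-{date.today().isoformat()}"
    shutil.copytree(source, dest)    # fails if today's dated copy already exists
    return dest

if __name__ == "__main__":
    print("Copied to", snapshot(SOURCE, TARGET))
```

Nothing fancy, however a dated copy per day on a separate device is exactly the kind of multiple-copies-in-multiple-places habit described above.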

This is nothing new, as it was the case with each of my last two solo book projects as well as when I was a coauthor contributing content to other books including The Resilient Enterprise (Veritas/Symantec). Much of the content was created while traveling, relying on portable storage and backup while on the road. Something someone pointed out to me recently is that this is an example of eating your own dog food, or eliminating the shoemaker's children syndrome (where the shoemaker creates shoes for others however not for his own children).

Initial moments and general observations
From time to time I will post some notes and observations about how the Momentus XT is performing or behaving. If all goes as planned, and so far it has, it should be very transparent, coexisting with some of my Removable Hard Disk Drives (RHDDs) such as the Imation Odyssey, which I bought several years ago for offsite bulk removable storage of data that goes to a secure vault somewhere.

Initial deployment, other than a stupid mistake on my part, has been smooth. What was the stupid mistake you ask? Simple: when I attached the drive via a USB 3.0 cable to SATA 3 connector to one of my XP SP3 systems, Windows saw the device, however it did not show up in the list of available devices. Ok, I know, I know, it was late in the evening, however that is no excuse for not realizing that the disk had not yet been initialized let alone formatted. A quick check using SeaTools (free here) showed all was well. I then launched Windows Disk Manager, did the initialize, followed by a format, and all was good from that point on. Wow, wonder how much credibility I will lose over that gaffe with the techno elite (that is a joke and a bit of humor btw).
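For anyone who would rather script it than click through the Disk Management GUI, below is a rough equivalent driving the Windows diskpart utility from Python. This is a hedged sketch only: the disk number is a placeholder you must verify with "list disk" first, since clean and format are destructive, and it needs an elevated prompt.

```python
# Rough scripted equivalent of the manual initialize-and-format steps.
# ASSUMPTION: the new drive is disk 1 -- verify with "list disk" before running,
# because clean/format wipe whatever disk is selected.
import subprocess
import tempfile

DISKPART_SCRIPT = """\
select disk 1
clean
create partition primary
format fs=ntfs quick
assign
"""

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as script:
    script.write(DISKPART_SCRIPT)
    path = script.name

# diskpart /s runs the commands from the script file (requires an elevated prompt)
subprocess.run(["diskpart", "/s", path], check=True)
```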

I have already done some initial familiarization and compatibility testing with some of my other drives, including a 2.5" 64GB SATA flash SSD as well as a 2.5" 7200 RPM HDD, both of which I use for bulk data movement activities. At some point I also plan on attaching the XT to my Iomega IX4 NAS to try various things, as I have done with other external devices in the past.

Granted these were not ideal conditions, as I was in a hurry and wanted to get some quick information. Given the probably less than ideal configuration, the format after the HDD was first initialized took about an hour using a FAT32 plug and play configuration. With NTFS and other optimizations I assume it can be better, however this was again just to get an initial glimpse of the device in use.

Given that it is an HHDD that uses flash as a big buffer, with a 500GB HDD plus 32MB of cache as a backing store, it was interesting attaching it to the computer, waiting a few minutes, then launching a file copy. Where a normal HDD would start slightly vibrating due to rotation, it was a few moments before any vibration or noise was detected on the Momentus XT, which should be no surprise as the flash was doing its job acting as a buffer until the HDD spun up for work.

I did some initial file copying back and forth between different computers while the LAN and NAS were busy doing other things, including backups to the Mozy cloud. No discrete time or performance benchmarks to talk about yet, however overall the XT, not surprisingly, does seem to be a bit faster than another external 7200 RPM 2.5" drive I use for bulk data moves, on both reads and writes. Likewise, given that it is a hybrid HDD leveraging flash as an extended cache with an underlying HDD plus 32MB of cache, it may not always be as fast as my external 2.5" 64GB flash SSD, however that is also a common apples to oranges comparison mistake (more on that in a future post).

For example, copying over 6GBytes of data (5 large files of various sizes) from a 7200 RPM 2.5" 160GB Momentus drive in a laptop to the HHDD XT and to a flash SSD both took about 8 to 9 minutes, whereas the normal copy to a 2.5" 5400 RPM HDD takes at least 14 to 15 minutes if not longer. Note that these are very rough and far from accurate or reflective comparisons, rather a quick gauge of benefits (e.g. getting data moved faster). When I get around to it, I will do some more accurate comparisons and put them into a follow up post. However, I can already see where the XT has performance similar to the SSD yet with almost 10x the capacity, which means it could possibly have an interesting role in supporting disk to disk (D2D) backups, which I will give a try.
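When I do get to those follow up comparisons, the measurement itself does not need to be fancy. Below is a minimal Python sketch of the kind of informal timing described above; the paths, file pattern and drive letters are placeholders, and this is a rough gauge rather than a calibrated benchmark.

```python
# Informal copy timing sketch: copy a handful of large files to a target drive
# and report elapsed time plus effective MB/s. Paths are placeholder assumptions.
import shutil
import time
from pathlib import Path

SOURCE_FILES = list(Path(r"C:\TestData").glob("*.bin"))   # assumed ~6GB of large files
TARGET_DIR = Path(r"E:\CopyTest")                          # assumed external XT, SSD or HDD

TARGET_DIR.mkdir(parents=True, exist_ok=True)
total_bytes = sum(f.stat().st_size for f in SOURCE_FILES)

start = time.time()
for f in SOURCE_FILES:
    shutil.copy2(f, TARGET_DIR / f.name)
elapsed = time.time() - start

print(f"Copied {total_bytes / 1e9:.1f} GB in {elapsed / 60:.1f} minutes "
      f"({total_bytes / 1e6 / elapsed:.0f} MB/s)")
```

Run it once per target drive with the same source files and the relative numbers tell the story, even if the absolute figures vary with cache state and whatever else the system is doing.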

Eventually I will be removing the USB connector kit and actually installing the Momentus into a computer or two (not at the same time), however I am currently walking before running. I'm still up in the air as to whether I should install the XT into a computer with Windows XP SP3, or simply do a new install of Windows 7 on it, so I'm open to thoughts, comments, feedback or applicable suggestions (besides switching to a MacBook or iPad).

Wrap up and fun moment

In the above photo, there is the Seagate Momentus XT (ST95005620AS), a GoFlex USB 3.0 to SATA conversion attachment cable (docking device), a fortune cookie, a couple of US quarters and Canadian two dollar coins (see out and about update), paper clips and a fishing bobber on a note pad. Why the coins? To show relative size and diversity across different geographies, as this device will be traveling (it missed out on a recent European trip to Holland).

Why the paper clips? Simple, why not, you never know when you will need one for something such as a MacGyver moment, or for pushing the tiny reset button on a device among other activities.

How about the fortune cookie? For good luck, and I might need a quick snack while having a cup of coffee; not to mention Chinese, as well as Asian in general, is one of my favorite cuisines to prepare or cook, not to mention eat.

Oh, what about the fishing bobber? Why not, it was just lying around, and you could also say that I'm fishing for information to see how the device fits into normal use, or that it is there for fun or to add color to the photo.

Oh, and the note pad? Hmm, well, if you cannot figure that one out, besides being a backdrop, let's just say that the Momentus line in general, as well as the XT specifically, is targeted for notebook, desktop, laptop or other deployment scenarios. If you still don't see the connection, ok fine, feel free to post a comment and I will happily clarify it for you.

That is all for the moment, however I will be following up with more soon.

In the meantime, enjoy your summer if in the northern hemisphere (or winter if in the south).

Take lots of photos, videos and audio recordings to fill up those USB flash thumb drives (consumer SSD), SD memory cards, computer hard drives, cloud and online web hosting sites so that you have something to remember your special out and about moments by.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Supreme Court Rules Sarbox intact, Oversight Board Changes


Today the US Supreme Court ruled on a Nevada case involving the constitutionality of the 2002 Sarbanes-Oxley (Sarbox) accounting regulations pertaining to appointments to the independent public company accounting oversight board.

The Supreme Court ruled that the Sarbox regulations or law remains intact, however the process or controls around the oversight board must change.

My interpretation and perspective from reading a few different reports is that Sarbox as you know and love (or hate) it is essentially still intact. However, what has changed, or will, is that individual board members can now be removed, or at least more easily. Instead of granting the request to strike down the Sarbox regulations, the Supreme Court appears to have left the regulations intact while ruling that board members can be changed or removed.

What does this all mean?

Perhaps not much other than firms who have been making money on Sarbox now having something else to talk or consult about (Hmmm, a Sarbox stimulus?).

On the other hand, with the ability to have Sarbox board members more easily removed, perhaps we will see a new board installed that could influence the thinking and thus applicability of Sarbox activity.

Near term, I can see this as being non news for some, and for others, confusion; and let's not forget that in chaos or confusion there is opportunity.

Here are some links to read more

  • US Supreme Court website and other news
  • Supreme Court to Hear Challenge to Accounting Board
  • Court Strikes Down Part of Sarbanes-Oxley
Nuff said about this for now, what's your take?

    Cheers gs

    Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    June 2010 StorageIO Newsletter


    Welcome to the June 2010 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the Spring 2010 edition building on the great feedback received from recipients.
    Items that are new in this expanded edition include:

    • Out and About Update
    • Industry Trends and Perspectives (ITP)
    • Featured Article

You can access this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions. Click on the following links to view the June 2010 edition as HTML or PDF, or to go to the newsletter page to view previous editions.

Follow via Google Feedburner here or via email subscription here.

You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com

    Enjoy this edition of the StorageIO newsletter, let me know your comments and feedback.

    Cheers gs

    Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    Industry Trends and Perspectives: Tiered Storage, Systems and Mediums

This is part of an ongoing series of short industry trends and perspectives blog post briefs.

These short posts complement other longer posts, along with traditional industry trends and perspective white papers, research reports and solution brief content found at www.storageioblog.com/reports.

Two years ago we read about how the magnetic disk drive would be dead in a couple of years at the hands of flash SSD. Guess what, it is a couple of years later and the magnetic disk drive is far from dead. Granted, high performance Fibre Channel disks will continue to be replaced by high performance, small form factor 2.5" SAS drives, along with continued adoption of high capacity SAS and SATA devices.

Likewise, SSD or flash drives continue to be deployed, however outside of the iPhone, iPod and other consumer or low end devices, nowhere near the projected or perhaps hoped for level. Rest assured, the trend I'm seeing and hearing from IT customers is that while some will continue to look for places to strategically deploy SSD where possible, practical and affordable, there will continue to be a role for disk and even tape devices on a go forward basis.

Also watch for more coverage and discussion around the emergence of the Hybrid Hard Disk Drive (HHDD) that was discussed about four to five years ago. The HHDD made an appearance and then quietly went away for some time, perhaps for more R&D time in the labs while flash SSD garnered the spotlight.

There could be a good opportunity for HHDD technology leveraging the best of both worlds: continued price decreases for disk with larger capacity, combined with smaller yet more affordable amounts of flash, in a solution that is transparent to the server or storage controller, making for easier integration.

    Related and companion material:
Blog: ILM = Has It Lost its Meaning
    Blog: SSD and Storage System Performance
    Blog: Has SSD put Hard Disk Drives (HDDs) On Endangered Species List
    Blog: Optimize Data Storage for Performance and Capacity Efficiency

    That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Industry Trends and Perspectives: RAID Rebuild Rates

This is part of an ongoing series of short industry trends and perspectives blog post briefs.

These short posts complement other longer posts, along with traditional industry trends and perspective white papers, research reports and solution brief content found at www.storageioblog.com/reports.

There is continued concern about how long large capacity disk drives take to be rebuilt in RAID sets, particularly as the continued shift from 1TB to 2TB occurs. It should not be a surprise that a disk with more capacity will take longer to rebuild or copy; likewise, with more drives, the likelihood of one failing statistically increases.
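To put rough numbers on why capacity matters, here is a back-of-envelope sketch in Python; the rebuild rates are my own assumed illustrative figures, not any vendor's specification. Best case rebuild time is roughly capacity divided by effective rebuild rate, and real systems servicing host I/O at the same time are often slower.

```python
# Back-of-envelope rebuild time: capacity divided by an assumed effective rebuild
# rate. Rates below are illustrative assumptions, not measured or vendor numbers.
def rebuild_hours(capacity_gb: float, rebuild_mb_per_sec: float) -> float:
    return (capacity_gb * 1000) / rebuild_mb_per_sec / 3600

for capacity in (1000, 2000):            # 1TB and 2TB drives
    for rate in (25, 50, 100):           # assumed effective MB/s rebuild rates
        print(f"{capacity / 1000:.0f}TB at {rate} MB/s ~ "
              f"{rebuild_hours(capacity, rate):.1f} hours")
```

Even this simple arithmetic shows why doubling capacity without improving effective rebuild rate roughly doubles the exposure window.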

Not to diminish the issue, however also to avoid saying the sky is falling: we have been here before! In the late 90s and early 2000s there was a similar concern with the then large 9GB and 18GB, let alone emerging 36GB and 72GB, drives. There have been improvements in RAID as well as rebuild algorithms, along with other storage system software or firmware enhancements, not to mention boosts in processor and I/O bus performance.

However, not all storage systems are equal even if they use the same underlying processors, I/O busses, adapters or disk drives. Some vendors have made significant improvements in their rebuild times, where each generation of software or firmware can reconstruct a failed drive faster. Yet for others, each subsequent iteration of larger capacity disk drives brings increased rebuild times.

    If disk drive rebuild times are a concern, ask your vendor or solution provider what they are doing as well as have done over the past several years to boost their performance. Look for signs of continued improvement in rebuild and reconstruction performance as well as decrease in error rates or false drive rebuilds.

    Related and companion material:
    Blog: RAID data protection remains relevant
    Blog: Optimize Data Storage for Performance and Capacity Efficiency

    That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Industry Trends and Perspectives: Tape, Disk and Dedupe Coexistence

This is part of an ongoing series of short industry trends and perspectives blog post briefs.

These short posts complement other longer posts, along with traditional industry trends and perspective white papers, research reports and solution brief content found at www.storageioblog.com/reports.

    The topic of this post is a trend that I am seeing and hearing about during discussions with IT professionals pertaining to how tape is still alive despite common industry FUD.

Not only is tape still very much alive, with recent enhancements including LTO5 and an extended roadmap, it is also finding new roles. In addition to being deployed in new roles, tape is coexisting with and complementing dedupe or other disk based backup and data protection approaches, and vice versa.

Hearing that tape is alive in the same sentence as dedupe deployments continuing may sound counterintuitive if you only listen to some vendor pitches.

However, if you talk with IT customers, particularly those in larger environments, or with VARs that provide a complete solution offering focus, you will hear a different tune than tape is dead and dedupe rules. Tape is still alive, however its role is changing. Watch for more on this and related topics.

    That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Industry Trends and Perspectives: Tiered Hypervisors and Microsoft Hyper-V

This is part of an ongoing series of short industry trends and perspectives blog post briefs.

These short posts complement other longer posts, along with traditional industry trends and perspective white papers, research reports and solution brief content found at www.storageioblog.com/reports.

Multiple Tiered Hypervisors for Server Virtualization

The topic of this post is a trend that I am seeing and hearing about during discussions with IT professionals: the use of two or more server virtualization hypervisors, or what are known as tiered hypervisors.

    Server Virtualization Hypervisor Trends

A trend tied to server virtualization that I am seeing more of is that IT organizations are increasingly deploying or using two or more different hypervisors (e.g. Citrix/Xen, Microsoft Hyper-V, VMware vSphere) in their environment (on separate physical servers or blades).

    Tiered hypervisors is a concept similar to what many IT organizations already have in terms of different types of servers for various use cases, multiple operating systems as well as several kinds of storage mediums or devices.

What I'm seeing is that IT pros are using different hypervisors to meet various cost, management and vendor control goals, aligning the applicable technology to the business or application service category.

    Tiered Virtualization Hypervisor Management

Of course this brings up the discussion of how to manage multiple hypervisors, and thus the real battle is, or will be, not about hypervisors but rather about End to End (E2E) management.

A question that I often ask VARs and IT customers is whether they see Microsoft on the offensive or defensive with Hyper-V vs. VMware, and vice versa, that is, whether VMware is on the defense or offense against Microsoft.

    Not surprisingly the VMware and Microsoft faithful will say that the other is clearly on the defensive.

Meanwhile from other people, the feelings are rather mixed, with many feeling that Microsoft is increasingly on the offensive while VMware is seen by some as playing a strong defense with a ferocious offense.

    Learn more

    Related and companion material:
    Video: Beyond Virtualization Basics (Free: May require registration)
    Blog: Server and Storage Virtualization: Life beyond Consolidation
    Blog: Should Everything Be Virtualized?

    That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Industry Trends and Perspectives: 6Gb SAS and DAS are not Dumb A$$ Storage


This is part of an ongoing series of short industry trends and perspectives blog post briefs.

These short posts complement other longer posts, along with traditional industry trends and perspective white papers, research reports and solution brief content found at www.storageioblog.com/reports.

With 6Gb/s SAS increasing performance as well as connectivity flexibility, more servers are supporting SAS natively, while storage systems continue to add support for 3.5" and 2.5" small form factor high performance and large capacity SAS drives. Shared SAS DAS storage systems are being deployed for consolidation, attached to two or more servers, as well as for clustered solutions.
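As a rough feel for what 6Gb/s means in practice, here is a simple arithmetic sketch (a simplification on my part, ignoring protocol overhead): SAS uses 8b/10b line encoding, so a 6Gb/s lane carries roughly 600 MB/s of payload and a four-lane wide port roughly 2.4 GB/s.

```python
# Rough 6Gb/s SAS throughput arithmetic (simplified; ignores protocol overhead).
LINK_GBPS = 6.0
ENCODING_EFFICIENCY = 8 / 10          # 8b/10b line encoding
LANES_PER_WIDE_PORT = 4

per_lane_mb = LINK_GBPS * 1000 / 8 * ENCODING_EFFICIENCY   # ~600 MB/s per lane
print(f"Per lane: {per_lane_mb:.0f} MB/s")
print(f"x4 wide:  {per_lane_mb * LANES_PER_WIDE_PORT / 1000:.1f} GB/s")
```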

    Another area where shared SAS DAS storage is being deployed is in cloud, scale out NAS and bulk storage environments as a price performance alternative to iSCSI or Fibre Channel solutions.

    Keep an eye on these and other trends including converged systems, server, storage and networking management along with associated tools.

    Related and companion material:
    Article: Green and SASy = Energy and Economic, Effective Storage
    Article: The Many Faces of SAS – Beyond the DAS Factor

    That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Upcoming Event: Industry Trends and Perspective European Seminar

    Event Seminar Announcement:

    IT Data Center, Storage and Virtualization Industry Trends and Perspective
    June 16, 2010 Nijkerk, GELDERLAND Netherlands

    Event Type: Training/Seminar
    Event: Seminar Training with Greg Schulz of US based Server and StorageIO
    Sponsor: Brouwer Storage Consultancy
    Target Audience: Storage Architects, Consultants, Pre-Sales, Customer (technical) decision makers
    Keywords: Cloud, Grid, Data Protection, Disaster Recovery, Storage, Green IT, VTL, Encryption, Dedupe, SAN, NAS, Backup, BC, DR, Performance, Virtualization, FCoE
    Location and Venue: Ampt van Nijkerk, Berencamperweg, Nijkerk, GELDERLAND NL
    When: Wed. June 16, 2010, 9AM-5PM local
    Price: € 450,=
    Event URL (LinkedIn): https://storageioblog.com/book4.html
    Contact: Gert Brouwer
    Olevoortseweg 43
    3861 MH Nijkerk
    The Netherlands
    Phone: +31-33-246-6825
    Fax: +31-33-245-8956
    Cell Phone: +31-652-601-309

    info@brouwerconsultancy.com

    Abstract: General items that will be covered include: current and emerging macro trends, issues, challenges and opportunities; common IT customer and IT trends, issues and challenges; opportunities for leveraging various current, new and emerging technologies and techniques; and what some new and improved technologies and techniques are. The seminar will provide insight on how to address various IT and data storage management challenges, and where and how new and emerging technologies can co-exist with as well as complement installed resources for maximum investment protection and business agility. Additional themes include cost and storage resource management, optimization and efficiency approaches, along with where and how cloud, virtualization and other topics fit into existing environments.

    Buzzwords and topics to be discussed include among others: FC and FCoE; SAS, SATA, iSCSI and NAS; I/O Virtualization (IOV) and convergence; SSD (flash and RAM); RAID, second generation MAID and IPM; tape; performance and capacity planning; performance and capacity optimization; metrics; IRM tools including DPM, E2E, SRA and SRM, as well as federated management; data movement and migration, including automation or policy enabled; HA and data protection including backup/restore, BC/DR, security/encryption, VTL, CDP, snapshots and replication for virtual and non virtual environments; dynamic IT and optimization, the new Green IT (efficiency and productivity); distributed data protection (DDP) and distributed data caching (DDC); server and storage virtualization along with discussion about life beyond consolidation; SAN, NAS, clusters, grids, clouds (public and private), bulk and object based storage; unified and vendor prepackaged stacked solutions (e.g. EMC VCE among others); and data footprint reduction (servers, storage, networks, data protection and hypervisors among others).

    Learn about other events involving Greg Schulz and StorageIO at www.storageio.com/events

    EMC VPLEX: Virtual Storage Redefined or Respun?

    In a flurry of announcements coinciding with EMC World, occurring in Boston this week of May 10, 2010, EMC officially unveiled its Virtual Storage vision initiative (aka twitter hash tag #emcvs) and the initial VPLEX product. The Virtual Storage initiative was virtually previewed back in March (see my previous post here along with one from Stu Miniman (twitter @stu) of EMC here or here), and according to EMC the VPLEX product was made generally available (GA) back in April.

    The Virtual Storage vision and associated announcements consisted of:

    • Virtual Storage vision – Big picture initiative view of what and how to enable private clouds
    • VPLEX architecture – Big picture view of federated data storage management and access
    • First VPLEX based product – Local and campus (Metro to about 100km) solutions
    • Glimpses of how the architecture will evolve with future products and enhancements


    Figure 1: EMC Virtual Storage and Virtual Server Vision and Big Pictures

    The Big Picture
    The EMC Virtual Storage vision (Figure 1) is the foundation of a private IT cloud, which should enable characteristics including transparency, agility, flexibility, efficiency, always-on availability, resiliency, security, on-demand access and scalability. Think of it this way: EMC wants to enable and facilitate for storage what is being done by server virtualization hypervisor vendors including VMware (which happens to be owned by EMC), Microsoft Hyper-V and Citrix/Xen among others. That is, break down the physical barriers or constraints around storage, similar to how virtual servers release applications and their operating systems from being tied to a physical server.

    While the current focus of desktop, server and storage virtualization has been on consolidation and cost avoidance, the next big wave or phase is life beyond consolidation, where the emphasis expands to agility, flexibility, ease of use, transparency and portability (Figure 2). In this next phase, which puts an emphasis on enablement and doing more with what you have while enhancing business agility, the focus extends from how much can be consolidated or the number of virtual machines per physical machine to that of using virtualization for flexibility and transparency (read more here and here or watch here).


    Figure 2: Virtual Storage Big Picture

    That same trend will be happening with storage where the emphasis also expands from how much data can be squeezed or consolidated onto a given device to that of enabling flexibility and agility for load balancing, BC/DR, technology upgrades, maintenance and other routine Infrastructure Resource Management (IRM) tasks.

    For EMC, achieving this vision (both directly for storage, and indirectly for servers via their VMware subsidiary) is via local and distributed (metro and wide area) federation management of physical resources to support virtual data center operations. EMC building blocks for delivering this vision include VPLEX, data and storage management federation across EMC and third party products, FAST (fully automated storage tiering), SSD, data footprint reduction and data protection management products among others.

    Buzzword bingo aside (e.g. LAN, SAN, MAN, WAN, Pots and Pans) along with Automation, DWDM, Asynchronous, BC, BE or Back End, Cache coherency, Cache consistency, Chargeback, Cluster, db loss, DCB, Director, Distributed, DLM or Distributed Lock Management, DR, FCoE or Fibre Channel over Ethernet, FE or Front End, Federated, FAST, Fibre Channel, Grid, Hyper-V, Hypervisor, IRM or Infrastructure Resource Management, I/O redirection, I/O shipping, Latency, Look aside, Metadata, Metrics, Public/Private Cloud, Read ahead, Replication, SAS, Shipping off to Boston, SRA, SRM, SSD, Stale Reads, Storage virtualization, Synchronization, Synchronous, Tiering, Virtual storage, VMware and Write through among many other possible candidates, the big picture here is about enabling flexibility, agility, ease of deployment and management, along with boosting resource usage effectiveness and presumably productivity on a local, metro and, in the future, global basis.


    Figure 3: EMC Storage Federation and Enabling Technology Big Picture

    The VPLEX Big Picture
    Some of the tenets of the VPLEX architecture (Figure 3) include a scale out cluster or grid design for local and distributed (metro and wide area) access, where you can start small and evolve as needed in a predictable and deterministic manner.


    Figure 4: Generic Virtual Storage (Local SAN and MAN/WAN) and where VPLEX fits

    The VPLEX architecture is targeted towards enabling next generation data centers, including private clouds, where ease and transparency of data movement, access and agility are essential. VPLEX sits atop existing EMC and third party storage as a virtualization layer between physical or virtual servers and, in theory, other storage systems that rely on underlying block storage. For example, in theory a NAS (NFS, CIFS and AFS) gateway, a CAS content archiving or object based storage system, or a purpose specific database machine could sit between actual application servers and VPLEX, enabling multiple layers of flexibility and agility for larger environments.

    At the heart of the architecture is an engine running a highly distributed data caching algorithm that uses an approach where a minimal amount of data is sent to other nodes or members in the VPLEX environment to reduce overhead and latency (in theory boosting performance). For data consistency and integrity, a distributed cache coherency model is employed to protect against stale reads and writes along with load balancing, resource sharing and failover for high availability. A VPLEX environment consists of a federated management view across multiple VPLEX clusters including the ability to create a stretch volume that is accessible across multiple VPLEX clusters (Figure 5).


    Figure 5: EMC VPLEX Big Picture


    Figure 6: EMC VPLEX Local with 1 to 4 Engines

    Each VPLEX local cluster (Figure 6) is made up of 1 to 4 engines (Figure 7) per rack, with each engine consisting of two directors, each having 64GByte of cache, localized Intel compute processors, and 16 Front End (FE) and 16 Back End (BE) Fibre Channel ports configured for high availability (HA). Communications between the directors and engines is Fibre Channel based. Metadata is moved between the directors and engines in 4K blocks to maintain consistency and coherency. Components are fully redundant and include phone home support.


    Figure 7: EMC VPLEX Engine with redundant directors

    Host servers initially supported by VPLEX include VMware, Cisco UCS, Windows, Solaris, IBM AIX, HPUX and Linux, along with EMC PowerPath and Windows multipath management drivers. Local server clusters supported include Symantec VCS, Microsoft MSCS and Oracle RAC, along with various volume managers. SAN fabric connectivity supported includes Brocade and Cisco as well as legacy McData based products.

    VPLEX also supports cache (Figure 8) write through to preserve underlying array based functionality and performance, with 8,000 total virtualized LUNs per system. Note that underlying LUNs can be aggregated or simply passed through the VPLEX. Storage that attaches to the BE Fibre Channel ports includes EMC Symmetrix VMAX and DMX along with CLARiiON CX and CX4. Third party storage supported includes HDS 9000 and USPV/VM along with IBM DS8000 and others to be added as they are certified. In theory, given that VPLEX presents block based storage to hosts, one would also expect NAS, CAS or other object based gateways and servers that rely on underlying block storage to be supported in the future.


    Figure 8: VPLEX Architecture and Distributed Cache Overview

    Functionality that can be performed between the cluster nodes and engines with VPLEX includes data migration and workload movement across different physical storage systems or sites, along with shared access with read caching on a local and distributed basis. LUNs can also be pooled across different vendors' underlying storage solutions, which also retain their native feature functionality via VPLEX write through caching.

    Reads from various servers can be resolved by any node or engine that checks its cache tables (Figure 8) to determine where to resolve the actual I/O operation from. Data integrity checks are also maintained to prevent stale reads or write operations from occurring. Actual metadata communication between nodes is very small, enabling statefulness while reducing overhead and maximizing performance. When a change to cached data occurs, meta information is sent to other nodes to maintain the distributed cache management index schema. Note that only pointers to where data and fresh cache entries reside are stored and communicated in the metadata via the distributed caching algorithm.
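    To make the pointer-based idea concrete, here is a conceptual Python sketch of a directory-style distributed cache. This is my own illustration only, not EMC's implementation or code: nodes exchange small ownership records instead of shipping cached data, writes pass through to the underlying storage, and reads fall back to the array when no node holds a fresh copy.

```python
# Conceptual directory-based cache sketch (illustration only, not EMC's code):
# nodes share a small map of block -> owner instead of shipping cached data.
class DirectoryNode:
    def __init__(self, name, directory, backend):
        self.name = name
        self.directory = directory   # shared map: block -> node holding a fresh copy
        self.backend = backend       # underlying array (a dict stands in for a LUN)
        self.local_cache = {}

    def read(self, block):
        owner = self.directory.get(block)
        if owner is self:
            return self.local_cache[block]        # local cache hit
        if owner is not None:
            return owner.local_cache[block]       # remote hit via metadata lookup
        data = self.backend[block]                # miss: read through to the array
        self.local_cache[block] = data
        self.directory[block] = self              # publish a small ownership pointer
        return data

    def write(self, block, data):
        self.backend[block] = data                # write-through preserves the array
        self.local_cache[block] = data
        self.directory[block] = self              # re-point ownership, avoiding stale reads

backend = {0: "old"}
directory = {}
a, b = DirectoryNode("A", directory, backend), DirectoryNode("B", directory, backend)
a.write(0, "new")
print(b.read(0))    # "new" -- resolved via the directory, not a stale local copy
```

    The point of the sketch is simply that the traffic between nodes is tiny ownership metadata, which is the same general property the VPLEX description above emphasizes.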


    Figure 9: EMC VPLEX Metro Today

    For metro deployments, two clusters (Figure 9) are utilized, with distances supported up to about 100km or about 5ms of latency, operating in a synchronous manner utilizing long distance Fibre Channel optics and transceivers including Dense Wavelength Division Multiplexing (DWDM) technologies (see Chapter 6: Metropolitan and Wide Area Storage Networking in Resilient Storage Networks (Elsevier) for additional details on LAN, MAN and WAN topics).
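    As a sanity check on why synchronous distance tops out where it does, a little propagation arithmetic helps (rule-of-thumb numbers of my own, not EMC's figures): light in fiber covers roughly 5 microseconds per kilometer one way, so distance alone puts a floor under synchronous round-trip latency before any switch, protocol or array overhead is added.

```python
# Propagation-only latency floor (rule of thumb: ~5 microseconds per km one way in fiber).
US_PER_KM_ONE_WAY = 5.0

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km * US_PER_KM_ONE_WAY / 1000

for km in (10, 50, 100):
    print(f"{km:>4} km: ~{round_trip_ms(km):.1f} ms round trip (propagation only)")
```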

    Initially EMC is supporting local or metro (including campus) based VPLEX deployments requiring synchronous communications; however, asynchronous (WAN) geo and global based solutions are planned for the future (Figure 10).


    Figure 10: EMC VPLEX Future Wide Area and Global

    Online Workload Migration across Systems and Sites
    Online workload or data movement and migration across storage systems or sites is not new with solutions available from different vendors including Brocade, Cisco, Datacore, EMC, Fujitsu, HDS, HP, IBM, LSI and NetApp among others.

    For synchronization and data mobility operations such as a VMware vMotion or Microsoft Hyper-V Live Migration over distance, information is written to separate LUNs in different locations across what are known as stretch volumes to enable non disruptive workload relocation across different storage systems (arrays) from various vendors. Once synchronization is completed, the original source can be disconnected or taken offline for maintenance or other common IRM tasks. Note that at least two LUNs are required; or put another way, for every stretch volume, two LUNs are subtracted from the total number of available LUNs, similar to how RAID 1 mirroring requires at least two disk drives.

    Unlike other approaches that, for coherency and performance, rely on either no cached data or extensive amounts of cached data along with the subsequent overhead for maintaining statefulness (consistency and coherency), including avoiding stale reads or writes, VPLEX relies on a combination of distributed cache lookup tables along with pass through access to underlying storage when or where needed. Consequently, large amounts of data do not need to be cached as well as shipped between VPLEX devices to maintain data consistency, coherency or performance, which should also help keep costs affordable.

    The approach is not unique, it is the implementation
    Some storage virtualization solutions, whether software based running on an appliance or network switch, or hardware system based, have had a focus of emulating or providing capabilities that compete with those of mid to high end storage systems. The premise has been to use lower cost, less feature enabled storage systems aggregated behind the appliance, switch or hardware based system to provide advanced data and storage management capabilities found in traditional higher end storage products.

    VPLEX, while like any tool or technology it could be and probably will be made to do things other than what it is intended for, is really focused on flexibility, transparency and agility as opposed to being used as a means of replacing underlying storage system functionality. What this means is that while there are data movement and migration capabilities, including the ability to synchronize data across sites or locations, VPLEX by itself is not a replacement for the underlying functionality present in both EMC and third party (e.g. HDS, HP, IBM, NetApp, Oracle/Sun or others) storage systems.

    This will make for some interesting discussions, debates and apples to oranges comparisons, in particular with those vendors whose products are focused around replacing or providing functionality not found in underlying storage system products.

    In a nutshell summary, VPLEX and the Virtual Storage story (vision) are about enabling agility, resiliency, flexibility, and data and resource mobility to simplify IT Infrastructure Resource Management (IRM). One of the key themes of global storage federation is anywhere access on a local, metro, wide area and global basis across both EMC and heterogeneous third party vendor hardware.

    Let's Put it Together: When and Where to use a VPLEX
    While many storage virtualization solutions are focused around consolidation or pooling, similar to first wave server and desktop virtualization, the next general broad wave of virtualization is life beyond consolidation. That means expanding the focus of virtualization from consolidation, pooling or LUN aggregation to that of enabling transparency for agility, flexibility, data or system movement, technology refresh and other common time consuming IRM tasks.

    Future applications or usage scenarios should include, in addition to VMware vMotion, Microsoft Hyper-V and Microsoft Clustering, other host server clustering solutions.


    Figure 11: EMC VPLEX Usage Scenarios

    Thoughts and Industry Trends Perspectives:

    The following are various thoughts, comments, perspectives and questions pertaining to this and storage, virtualization and IT in general.

    Is this truly unique as is being claimed?

    Interestingly, the message I'm hearing out of EMC is not the claim that this is unique, revolutionary or the industry's first, as is so often the case with vendors, rather that it is their implementation and ability to deploy on a broad perspective basis that is unique. Now granted, you will probably hear, as is often the case with any vendor, fan boy/fan girl spins of it being unique, and I'm sure this will also serve up plenty of fodder for mudslinging in the blogsphere, YouTube galleries, twitter land and beyond.

    What is the DejaVu factor here?

    For some it will be nonexistent, yet for others there is certainly DejaVu depending on your experience or what you have seen and heard in the past. In some ways this is the manifestation of many visions and initiatives from the late 90s and early 2000s, when storage virtualization or virtual storage in an open context jumped into the limelight coinciding with SAN activity. There have been products rolled out along with proof of concept technology demonstrators, some of which are still in the market, while others, including companies, have fallen by the wayside for a variety of reasons.

    Consequently, if you were part of, or read or listened to, any of the discussions and initiatives from Brocade (Rhapsody), Cisco (SVC, VxVM and others), INRANGE (Tempest) or its successor CNT UMD, not to mention IBM SVC, StoreAge (now LSI), Incipient (now part of Texas Memory) or Troika among others, you should have some DejaVu.

    I guess that also begs the question of what VPLEX is: in band, out of band, or a hybrid fast path/control path? From what I have seen it appears to be a fast path approach combined with distributed caching, as opposed to cache centric in band approaches such as IBM SVC (either on a server or as was tried on the Cisco special services blade) among others.

    Likewise, if you are familiar with IBM Mainframe GDPS or even EMC GDDR, as well as OpenVMS local and metro clusters with distributed lock management, you should also have DejaVu. Similarly, if you had looked at or are familiar with any of the YottaYotta products or presentations, this should also be familiar, as EMC acquired the assets of that now defunct company.

    Is this a way for EMC to sell more hardware along with software products?

    By removing barriers and enabling IT staffs to support more data on more storage in a denser and more agile footprint, the answer should be yes; something that we may see other vendors emulate, or make noise about what they can do or have been doing already.

    How is this virtual storage spin different from the storage virtualization story?

    That all depends on your view or definition as well as belief systems and preferences for what is or is not virtual storage vs. storage virtualization. For some who believe that storage virtualization is only virtualization if and only if it involves software running on some hardware appliance or a vendor's storage system for aggregation and common functionality, then you probably won't see this as virtual storage let alone storage virtualization. However for others, it will be confusing, hence EMC introducing terms such as federation and avoiding terms including grid to minimize confusion yet play off of the cloud crowd commotion.

    Is VPLEX a replacement for storage system based tiering and replication?

    I do not believe so, and even though some vendors are making claims that tiered storage is dead, just as some vendors declared a couple of years ago that disk drives would be dead this year at the hands of SSD, neither has come to pass, so to speak (pun intended). What this means for VPLEX is that it leverages underlying automated or manual tiering found in storage systems, such as EMC FAST enabled functions or similar policy and manual functions in third party products.

    What VPLEX brings to the table is the ability to transparently present a LUN or volume locally or over distance with shared access while maintaining cache and data coherency. This means that if a LUN or volume moves, the applications, file systems or volume managers expecting to access that storage will not be surprised, panic or encounter failover problems. Of course there will be plenty of details to dig into and see how it all actually works, as is the case with any new technology.

    Who is this for?

    I see this as being for environments that need flexibility and agility across multiple storage systems, either from one or multiple vendors, on a local, metro or wide area basis. This is for those environments that need the ability to move workloads, applications and data between different storage systems and sites for maintenance, upgrades, technology refresh, BC/DR, load balancing or other IRM functions, similar to how they would use virtual server migration such as vMotion or Live Migration among others.

    Do VPLEX and Virtual Storage eliminate need for Storage System functionality?

    I see some storage virtualization solutions or appliances that have a focus of replacing underlying storage system functionality instead of coexisting with or complementing it. A way to test for this approach is to listen or read whether the vendor or provider says anything along the lines of eliminating vendor lock in or control of the underlying storage system. That can be a sign of the golden rule of virtualization: whoever controls the virtualization functionality (at the server hypervisor or storage) controls the gold! This is why on the server side of things we are starting to see tiered hypervisors, similar to tiered servers and storage, where mixed hypervisors are being used for different purposes. Will we see tiered storage hypervisors or virtual storage solutions? The answer could be perhaps, or it depends.

    Was Invista a failure that never went into production, and is this a second attempt at virtualization?

    There is a popular myth in the industry that Invista never saw the light of day outside of trade show expo or other demos; however, the reality is that there are actual customer deployments. Invista, unlike other storage virtualization products, had a different focus, which was around enabling agility and flexibility for common IRM tasks, similar to the expanded focus of VPLEX. Consequently, Invista has often been put in apples to oranges comparisons with other virtualization appliances that have pooling as a focus along with other functions, or in some cases serve as an appliance based storage system.

    The focus around Invista, and its usage by those customers who have deployed it that I have talked with, is around enabling agility for maintenance, facilitating upgrades, moves or reconfiguration and other common IRM tasks vs. using it for pooling of storage for consolidation purposes. Thus I see VPLEX extending the vision of Invista in a role of complementing and leveraging underlying storage system functionality instead of trying to replace those capabilities with those of the storage virtualizer.

    Is this a replacement for EMC Invista?

    According to EMC the answer is no, and customers using Invista (yes, there are customers that I have actually talked to) will continue to be supported. However, I suspect that over time Invista will either become a low end entry for VPLEX, or an entry level VPLEX solution will appear sometime in the future.

    How does this stack up or compare with what others are doing?

    If you are looking to compare to cache centric platforms such as IBM's SVC, which adds extensive functionality and capabilities within the storage virtualization framework, this is an apples to oranges comparison. VPLEX provides cache pointers on a local and global basis, functioning in a model that complements the underlying storage system, whereas SVC caches on a specific cluster basis and enhances the functionality of the underlying storage system. Rest assured there will be other apples to oranges comparisons made between these platforms.

    How will this be priced?

    When I asked EMC about pricing, they would not commit to a specific price prior to the announcement, other than indicating that there will be options for on demand or consumption (e.g. cloud pricing), pricing per engine capacity, as well as subscription models (pay as you go).

    What is the overhead of VPLEX?

    While EMC runs various workload simulations (including benchmarks) internally as well as some publicly (e.g. Microsoft ESRP among others), they have been opposed to some storage simulation benchmarks such as SPC. The EMC opposition to simulations such as SPC has been varied; however, this could be a good and interesting opportunity for them to silence the industry (including myself) who continue to ask them (along with a couple of other vendors including IBM and their XIV) when they will release public results.

    The interesting opportunity I see for EMC is that they do not even have to benchmark one of their own storage systems such as a CLARiiON or VMAX; instead, they could simply show the performance of some third party product that is already tested on the SPC website, and then make a submission with that same product running attached to a VPLEX.

    If the performance or low latency forecasts are as good as they have been described, EMC can accomplish a couple of things by:

    • Demonstrating the low latency and minimal to no overhead of VPLEX
    • Show VPLEX with a third party product comparing latency before and after
    • Provide a comparison to other virtualization platforms including IBM SVC

    As for EMC submitting a VMAX or CLARiiON SPC test in general, I'm not going to hold my breath for that; instead, I will continue to look at the other public workload tests such as ESRP.

    Additional related reading material and links:

    Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier)
    Chapter 3: Networking Your Storage
    Chapter 4: Storage and IO Networking
    Chapter 6: Metropolitan and Wide Area Storage Networking
    Chapter 11: Storage Management
    Chapter 16: Metropolitan and Wide Area Examples

    The Green and Virtual Data Center (CRC)
    Chapter 3: (see also here) What Defines a Next-Generation and Virtual Data Center
    Chapter 4: IT Infrastructure Resource Management (IRM)
    Chapter 5: Measurement, Metrics, and Management of IT Resources
    Chapter 7: Server: Physical, Virtual, and Software
    Chapter 9: Networking with your Servers and Storage

    Also see these:

    Virtual Storage and Social Media: What did EMC not Announce?
    Server and Storage Virtualization – Life beyond Consolidation
    Should Everything Be Virtualized?
    Was today the proverbial day that he!! Froze over?
    Moving Beyond the Benchmark Brouhaha

    Closing comments (For now):
    As with any new vision, initiative, architecture and initial product, there will be plenty of questions to ask, items to investigate, and early adopter customers or users to talk with in order to determine what is real, what is future, what is usable and practical, along with what is nice to have. Likewise there will be plenty of mud ball throwing and slinging between competitors, fans and foes; for those who enjoy watching or reading that sort of thing, you should be well entertained.

    In general, the EMC vision and story builds on, and presumably delivers on, past industry hype, buzz and vision with solutions that can be put into environments as a productivity tool that works for the customer, instead of the customer working for the tool.

    Remember the golden rule of virtualization, which is in play here: whoever controls the virtualization or its associated management controls the gold. Likewise keep in mind that aggregation can cause aggravation. So do not be scared; however, look before you leap, meaning do your homework and due diligence with appropriate levels of expectations, aligning applicable technology to the task at hand.

    Also, if you have seen or experienced something in the past, you are more likely to have deja vu as opposed to seeing things as revolutionary. However it is also important to leverage lessons learned for future success. YottaYotta was a lot of NaddaNadda; let's see if EMC can leverage their past experiences to make this a LottaLotta.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Happy Earth Day 2010!

    Here in the northern hemisphere it is late April and thus mid spring time.

    That means the trees are sprouting their buds and leaves and flowering, while other plants and things come to life.

    In Minnesota where I live, there is not a cloud in the sky today, the sun is out and it's going to be another warm day in the 60s, a nice day to not be flying or traveling and thus enjoy the fine weather.

    Other things of note on this Earth Day 2010 include:

    • The Minnesota Twins' new home, Target Field, was just named the most green Major League Baseball (MLB) stadium as well as the greenest in the US with its LEED (or see here) certification.
    • Iceland's Eyjafjallajokull volcano continues to spew water vapor (steam), CO2 and ash at a slower rate than last week when it first erupted, with some speculating that there could be impending activity from other Icelandic volcanoes. Some estimates placed the CO2 impact of the initial eruption and of the subsequent flight cancellations as being neutral, essentially canceling each other out; however, I'm sure we will be hearing many different stories in the weeks to come.

    Image: Iceland Eyjafjallajokull volcano eruption via Boston.com

    • Flights to/from and within Europe and the UK are returning to normal
    • Toyota continues to deal with recalls on some of their US built automobiles including the energy efficient Prius, some of which may have been purchased during the recent US cash for clunkers (CFC) program (hmm, is that ironic or what?)
    • Greenpeace, in addition to using a Facebook page to protest Facebook data center practices, is now targeting cloud IT in general, including just before the Apple iPad launch (here are some comments from Microsoft).
    • Vendors in all industries are lining up for the second coming of Green marketing or perhaps Green Washing 2.0

    The new Green IT, moving beyond Green wash and hype

    Speaking of Green IT including Green Computing, Green Storage, Virtualization, Cloud, Federation and more, here is a link to a post that I did back in February discussing how the Green Gap continues to exist.

    The green gap exists and centers on confusion over what green means, along with the common disconnect between core IT issues or barriers to becoming more efficient, effective, flexible and optimized, on both an economic and an environmental basis, and the themes commonly messaged under the green umbrella (read more here).

    Regardless of where you stand on Green, Green washing, Green hype, environmentalism, eco-tech and other related themes, for at least a moment, set aside the politics and science debates and think in terms of practicality and economics.

    That is, look for simple, recurring things that can be done to stretch your dollar or spending ability in order to support demand (see figure below) in a more effective manner, along with reducing waste. For example, to meet growing demand in the face of shrinking or stagnant budgets, the action is to stretch available resources to do more work when needed, or to retain more data where applicable, with the same or a smaller footprint. What this means is that while the common messaging is around reducing costs, look at the inverse, which is to do more with the available budgets or resources. The result is green in terms of both economic and environmental benefits.

    Figure: Increasing IT resource demand

    Figure: Green IT enablement techniques and technologies (the Green IT wheel of opportunity)

    Look at and understand the broader aspects of being green, which have both economic and environmental benefits without compromising on productivity or functionality. There are many aspects or facets of being green beyond those commonly discussed or perceived to be so (see the Green IT enablement techniques and technologies figure above).

    Certainly recycling of paper, water, aluminum, plastics and other items, including technology equipment, is important to reduce waste and is something to consider. Another aspect of reducing waste, particularly in IT, is avoiding rework; this can range from finding network bottlenecks or problems that result in continuous retransmission of data from failed backups, replication or data transfers, which cause lost opportunity or extra resource consumption. Likewise, programming errors (bugs) or misconfiguration that result in rework or lost productivity are also forms of waste, among others.

    Another theme is that of shifting from energy avoidance to energy efficiency and effectiveness, which are often thought to be the same. However, the expanded focus is also about getting more work done when needed with the same or fewer resources (see figure below), for example increasing activity (IOPS, transactions, emails or videos served, bandwidth or messages) per watt of energy consumed.

    Figure: Shifting from energy avoidance to effectiveness
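
    To make the activity per watt idea concrete, here is a minimal Python sketch of the calculation; the workload figures and the before/after labels are purely illustrative assumptions, not measurements of any particular product.

        # Minimal sketch: activity per watt as a simple efficiency indicator.
        # The IOPS and wattage numbers below are illustrative assumptions only.

        def activity_per_watt(activity, watts):
            """Units of work delivered per watt of power drawn."""
            return activity / watts

        # Hypothetical before and after comparison for the same workload
        baseline = activity_per_watt(activity=12_000, watts=800)   # IOPS per watt
        upgraded = activity_per_watt(activity=18_000, watts=750)   # IOPS per watt

        print(f"Baseline: {baseline:.1f} IOPS per watt")
        print(f"Upgraded: {upgraded:.1f} IOPS per watt")

    The same ratio works for transactions, emails or videos served, bandwidth moved or capacity stored per watt; the point is to measure useful work per unit of energy rather than energy consumption alone.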

    One of the many techniques and approaches for addressing energy, stretching resources and being green is intelligent power management (IPM). With IPM, the focus is not strictly centered on energy avoidance; instead it is about intelligently adapting to different workloads or activity, balancing performance and energy. Thus when there is work to be done, get the work done quickly with as little energy as possible (IOPS or activity per watt); when there is less work, provide lower performance and thus smaller energy requirements; and when there is no work to be done, go into additional energy saving modes. Power management does not have to be exclusively about turning off the lights or the IT equipment in order to be green.
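
    As a rough illustration of the IPM idea, the following Python sketch shows a simple policy that picks a power state based on pending work; the thresholds and state names are assumptions for illustration, not how any particular product implements power management.

        # Hypothetical intelligent power management (IPM) policy sketch:
        # run at full speed when busy, throttle back under light load,
        # and enter a deeper energy saving state when idle.

        def select_power_state(pending_work_items: int) -> str:
            if pending_work_items > 100:
                return "full_performance"   # finish work quickly (best activity per watt)
            elif pending_work_items > 0:
                return "reduced_power"      # lower clock or spindle speed for light load
            else:
                return "deep_power_save"    # standby or spin down while idle

        for queue_depth in (250, 12, 0):
            print(queue_depth, "->", select_power_state(queue_depth))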

    The following two figures look at Green IT past, present and future, with an expanding focus on optimization and effectiveness, meaning getting more work done, storing more data for longer periods of time, and meeting growth demands with what appear to be additional resources, yet at a lower per unit cost, without compromising on performance, availability or economics.

    Figure: Green IT past, present and future shift from avoidance to efficiency and effectiveness

    Figure: The new Green IT - boosting business effectiveness and maximizing ROI while helping the environment

    If you think about going green as simply doing or using things more effectively, reducing waste, and working more intelligently or effectively, the benefits are both economically and environmentally positive (see the two figures above).

    Instead of finding ways to fund green initiatives, shift the focus to how you can enable enhanced productivity, stretching resources further and doing more in the same or a smaller footprint (floor space, power, cooling, energy, personnel, licensing, budgets) for business economic and environmental sustainability, with the result being environmental benefits as well.

    Also keep in mind that small percentage changes applied on a large or recurring basis have significant benefits. For example, a small change in cooling temperatures, while staying within vendor recommended guidelines, can result in big savings for large environments.
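
    As a back of the envelope example of how a small recurring change adds up, here is a quick Python calculation; the facility draw, energy price and savings rate are assumed numbers for illustration only, not guidance for any specific data center.

        # Illustrative math: a 2% energy reduction on a large, always-on facility.
        facility_kw = 500            # average facility draw in kW (assumption)
        hours_per_year = 24 * 365
        price_per_kwh = 0.10         # USD per kWh (assumption)
        savings_rate = 0.02          # 2% reduction from a modest cooling setpoint change

        annual_kwh = facility_kw * hours_per_year
        annual_savings = annual_kwh * price_per_kwh * savings_rate
        print(f"Approximate annual savings: ${annual_savings:,.0f}")  # about $8,760 per year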

     

    Bottom line

    If you are a business and are discounting green as simply a fad, or perhaps as a public relations (PR) initiative or activity tied to reducing carbon footprints and recycling, then you are missing out on economic (top and bottom line) enhancement opportunities.

    Likewise, if you think that going green is only about the environment, then there is a missed opportunity to boost the economics that could help fund those initiatives.

    Going green means many different things to various people and is often more broad and common sense based than most realize.

    That is all for now, Happy Earth Day 2010.

    Cheers gs

    Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    Spring 2010 StorageIO Newsletter

    Welcome to the spring 2010 edition of the Server and StorageIO (StorageIO) newsletter.

    This edition follows the inaugural issue (Winter 2010) incorporating feedback and suggestions as well as building on the fantastic responses received from recipients.

    A couple of enhancements included in this issue (marked as New!) are a Featured Related Site along with Some Interesting Industry Links. Another enhancement based on feedback is additional commentary, which in upcoming issues will expand to include a column article along with industry trends and perspectives.

    Image: Spring 2010 StorageIO newsletter

    You can access this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions. Click on the following links to view the spring 2010 newsletter as HTML or PDF, or to go to the newsletter page.

    Follow via Google Feedburner here or via email subscription here.

    You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com.

    Enjoy this edition of the StorageIO newsletter, and let me know your comments and feedback.

    Also, a very big thank you to everyone who has helped make StorageIO a success!

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    It's US Census time, what about IT Data Centers?

    This year it is time for that once a decade activity referred to as the US 2010 Census.

    With the 2010 census underway, not to mention it also being time to complete and submit your income tax returns, if you are in IT, what about measuring, assessing, taking inventory of or analyzing your data and data center resources?

    Figure 1: US 2010 Census forms

    Have you recently taken a census of your data, data storage, servers, networks, hardware, software tools, service providers, media, maintenance agreements and licenses, not to mention facilities?

    Likewise, have you figured out what, if any, taxes in terms of overhead or burden exist in your IT environment, or where there are opportunities to become more optimized and efficient and get an IT resource refund of sorts?

    If not, now is a good time to take a census of your IT data center and associated resources, in what might also be called an assessment, review, inventory or survey of what you have, how it is being used, where, by whom and when, along with the associated configuration, performance, availability, security and compliance coverage, as well as costs and energy impact, among other items.
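
    For those who like to start with a simple structure, here is a minimal Python sketch of what one record in such an IT census might capture; the field names and sample values are assumptions to adapt to your own environment and tools.

        # Hypothetical per-resource record for an IT data center census.
        from dataclasses import dataclass

        @dataclass
        class CensusRecord:
            name: str                   # e.g. an array, server, switch or application
            category: str               # storage, server, network, software, facility
            capacity: str               # usable capacity, cores, ports or licenses
            utilization_pct: float      # how much is actually being used
            used_by: str                # who is using it, where and for what
            maintenance_expires: str    # contract or license renewal date
            avg_watts: float            # energy draw for efficiency metrics

        inventory = [
            CensusRecord("array-01", "storage", "100 TB usable", 63.0,
                         "finance applications", "2026-06-30", 1800.0),
        ]
        print(len(inventory), "resources cataloged")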

    Figure 2: IT Data Center Metrics for Planning and Forecasts

    How much storage capacity do you have, and how is it allocated and being used?

    What about storage performance, are you meeting response time and QoS objectives?

    Let's not forget about availability, that is, planned and unplanned downtime; how have your systems been behaving?

    From an energy or power and cooling standpoint, what is the consumption, along with metrics aligned to productivity and effectiveness? These include IOPS per watt, transactions per watt, videos or emails served per watt, web clicks or page views per watt, processor GHz per watt, data movement bandwidth per watt and capacity stored per watt in a given footprint.

    Other items to look into for data centers besides storage include servers, data and I/O networks, hardware, software, tools, services and other supplies, along with the physical facility and metrics such as PUE. Speaking of optimization, how is your environment doing? Finding out is another advantage of doing a data center census.
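
    Since PUE gets mentioned often, here is the commonly cited calculation as a minimal Python sketch (PUE = total facility power divided by IT equipment power); the wattage figures are assumptions for illustration.

        # PUE: how much total facility power is consumed per unit of IT equipment power.
        def pue(total_facility_watts: float, it_equipment_watts: float) -> float:
            return total_facility_watts / it_equipment_watts

        print(f"PUE: {pue(total_facility_watts=900_000, it_equipment_watts=500_000):.2f}")  # 1.80

    A PUE closer to 1.0 means less energy is going to cooling, power distribution and other overhead relative to the IT gear doing the actual work.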

    For those who have completed and sent in your census material along with your 2009 tax returns, congratulations!

    For others in the US who have not done so, now would be a good time to get going on those activities.

    Likewise, regardless of what country or region you are in, it's always a good time to take a census or inventory of your IT resources instead of waiting ten years to do so.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Hard product vs. soft product

    In the IT industry space, and data storage or computers and servers particularly, mention hard product or soft product and what comes to mind?

    How about physical vs. virtual servers or storage, hardware vs. software solutions, products vs. services?

    By contrast, in the aviation and airline industry among others, mention hard vs. soft product and there is a slight variation, which comes down to the difference between one provider's service delivery experience and another's.

    For example, two or more different airlines or carriers may fly the same aircraft perhaps even with the same engines, instrumentation, navigation electronics and base features, all part of the hard product.

    However, their hard product could vary by type of seats, spacing or pitch along with width, overhead luggage room, Video on Demand (VoD) or In Flight Entertainment (IFE) as well as different cabin treatments (carpeting, wall coverings) and galley configurations. Even in scenarios where carriers have the same equipment and hard product, their soft product can differ.

    Image: Example of a soft product, that is, the service (or lack thereof) being delivered

    The soft product is the service delivery experience including by the cabin crew (flight attendants and pursers), food (or lack of), beverage, presentation and so forth. Also part of the soft product can be how seats are allocated or available for selection, boarding process and other items that contribute to the overall customer experience.

    This all got me thinking on a recent flight where the hard product (e.g. aircraft) of a particular carrier was identical; however, given transitions taking place, the soft product still differed as it was not fully integrated or merged yet. What the experience got me thinking about is that in IT, customers or solution providers can buy the same technology or hard product (hardware, software, services) from the same suppliers, yet present different soft products or service experiences to their customers.

    Image: Example IT hard product (hardware and software) being used for delivery of different soft products

    I'm sure that some of the cloud crowd cheerleaders might even jump up and down and claim that this is the benefit of using managed service providers or similar services to obtain a different soft product. And while that may be true in some instances, it is also true that traditional IT organizations are able to craft and deploy various types of soft products for their customers to meet different service requirements and cost or economic objectives, using the same technology used by others.

    A different example of hard vs. soft product is a site I have visited that has mainframes, Windows and open systems servers, and whose business requires a soft product that is highly available, reliable, flexible, fast and affordable. Needless to say, in that environment, some of the open systems, including Windows platforms, can have reliability close to if not equal to that of the mainframes.


    What is even more amazing is that no special or different hard products (e.g. servers, storage, networks or software) are being used to achieve those service objectives. Rather, it is the soft product that achieves the results, in terms of how the techniques are used and managed. Likewise, I have heard of other environments with mixed mainframe and open systems, using the same common hard products as other organizations, yet whose soft product is not as robust or reliable. If using the same hard product, that is, the same software, hardware, networks and services, how could the soft product be any less robust?

    The answer is that good and reliable technology is important, however the technology is only as good as how it is managed, configured, monitored and deployed centering on processes, procedures and best practices.

    Next time you are on an airplane, or using some other service that leverages common technologies (hardware, software or networks), take a moment to look around at the soft product and how the service experience built on a common hard product can vary. That is, using common technology, see how various best practices, policies and operating principles can differ to meet diverse service requirements as well as demand and economic requirements.

    What is your take and experience on different hard vs soft products in or around IT?

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved