VMware vSAN 6.6 hyper-converged (HCI) software defined data infrastructure

server storage I/O trends

VMware vSAN 6.6 hyper-converged (HCI) software defined data infrastructure

In case you missed it, VMware announced vSAN 6.6, its hyper-converged infrastructure (HCI) software-defined data infrastructure solution. This is the first of a five-part series about VMware vSAN V6.6. Part II (just the speeds and feeds please) is located here, part III (reducing cost and complexity) is located here, part IV (scaling ROBO and data centers today) is found here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

VMware vSAN 6.6
Image via VMware

For those who are not aware, vSAN is VMware's virtual Storage Area Network that is software-defined, part of a software-defined data infrastructure (SDDI) and software-defined data center (SDDC). Besides being software-defined, vSAN is an HCI solution combining compute (server), I/O networking, and storage (space and I/O) along with hypervisors, management, and other tools.

Software-defined data infrastructure

Excuse Me, What is vSAN and who is it for

Some might find it odd having to explain what vSAN is; on the other hand, not everybody is dialed into the VMware ecosystem, so let's give them some help. Everybody else, feel free to jump ahead.

For those not familiar, VMware vSAN is an HCI software-defined storage solution that converges compute (hypervisors and servers) with storage space capacity, I/O performance, and networking. Being HCI means that with vSAN, as you scale compute, storage space capacity and I/O performance also increase in an aggregated fashion. Likewise, as you increase storage space capacity and server I/O performance, you also get more compute capability (along with memory).
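To make that aggregate scaling point a bit more concrete, here is a minimal illustrative Python sketch. The node values are hypothetical placeholders rather than VMware sizing guidance; the point is simply that in an HCI cluster, each node added contributes compute, memory, storage capacity, and I/O capability together.

```python
# Illustrative only: hypothetical node specs, not VMware sizing guidance.
from dataclasses import dataclass

@dataclass
class HciNode:
    cpu_cores: int      # compute capability
    memory_gb: int      # memory for VMs plus vSAN metadata/caching
    capacity_tb: float  # contributed storage capacity
    iops: int           # approximate storage I/O capability

def cluster_totals(nodes):
    """Aggregate resources across all nodes in the cluster."""
    return {
        "cpu_cores": sum(n.cpu_cores for n in nodes),
        "memory_gb": sum(n.memory_gb for n in nodes),
        "capacity_tb": sum(n.capacity_tb for n in nodes),
        "iops": sum(n.iops for n in nodes),
    }

# Scaling out: each node added grows compute AND storage together.
cluster = [HciNode(24, 256, 8.0, 60_000) for _ in range(4)]
print(cluster_totals(cluster))            # 4-node cluster
cluster.append(HciNode(24, 256, 8.0, 60_000))
print(cluster_totals(cluster))            # 5-node cluster, all resources grow
```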

For VMware-centric environments looking to go CI or HCI, vSAN offers a compelling value proposition leveraging known VMware tools and staff skills (knowledge, experience, tradecraft). Another benefit of vSAN is the ability to select your hardware platform from different vendors, a trend that other CI/HCI vendors have started to offer as well.

CI and HCI data infrastructure

Keep in mind that fast applications need fast servers, I/O, and storage, and that server storage I/O needs CPU along with memory to generate I/O operations (IOPs) or move data. What this all means is that HCI solutions such as VMware vSAN combine or converge the server compute, hypervisors, storage file system, storage devices, I/O, and networking along with other functionality into an easy to deploy (and manage) turnkey solution.

Learn more about CI and HCI along with who some other vendors are as well as considerations at www.storageio.com/converge. Also, visit VMware sites to find out more about vSphere ESXi hypervisors, vSAN, NSX (Software Defined Networking), vCenter, vRealize along with other tools for enabling SDDC and SDDI.

Give Me the Quick Elevator Pitch Summary

VMware has enhanced vSAN with version 6.6 (V6.6), enabling new functionality and supporting new hardware platforms along with partners, while reducing costs and improving scalability and resiliency for SDDC and SDDI environments. This spans small medium business (SMB) to mid-market to small medium enterprise (SME), as well as workgroup, departmental, and remote office branch office (ROBO) environments.

Being an HCI solution, management functions of the server, storage, I/O, networking, hypervisor, hardware, and software are converged to improve management productivity. Also, vSAN integrates with VMware vSphere among other tools to enable a modern, robust data infrastructure that serves, protects, preserves, secures, and stores data along with its associated applications.

Where to Learn More

The following are additional resources to learn more about vSAN and related technologies.

What this all means

Overall, this is a good set of enhancements as vSAN continues its evolution from where it was just a few years ago to where it is today and will be in the future. If you have not looked at vSAN recently, take some time beyond reading this piece to learn some more.

Continue reading more about VMware vSAN 6.6 in part II (just the speeds and feeds please) located here, part III (reducing cost and complexity) located here, part IV (scaling ROBO and data centers today) located here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

VMware vSAN V6.6 Part II (just the speeds, feeds, and features please)

server storage I/O trends

VMware vSAN v6.6 Part II (just the speeds, feeds, and features please)

In case you missed it, VMware announced vSAN 6.6, its hyper-converged infrastructure (HCI) software-defined data infrastructure solution. This is the second of a five-part series about VMware vSAN V6.6. View Part I here, part III (reducing cost and complexity) located here, part IV (scaling ROBO and data centers today) found here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

VMware vSAN 6.6
Image via VMware

For those who are not aware, vSAN is VMware's virtual Storage Area Network that is software-defined, part of a software-defined data infrastructure (SDDI) and software-defined data center (SDDC). Besides being software-defined, vSAN is an HCI solution combining compute (server), I/O networking, and storage (space and I/O) along with hypervisors, management, and other tools.

Just the Speeds and Feeds Please

For those who just want to see the list of what’s new with vSAN V6.6, here you go:

  • Native encryption for data-at-rest
  • Compliance certifications
  • Resilient management independent of vCenter
  • Degraded Disk Handling v2.0 (DDHv2)
  • Smart repairs and enhanced rebalancing
  • Intelligent rebuilds using partial repairs
  • Certified file service & data protection solutions
  • Stretched clusters with local failure protection
  • Site affinity for stretched clusters
  • 1-click witness change for Stretched Cluster
  • vSAN Management Pack for vRealize
  • Enhanced vSAN SDK and PowerCLI
  • Simple networking with Unicast
  • vSAN Cloud Analytics with real-time support notification and recommendations
  • vSAN ConfigAssist with 1-click hardware lifecycle management
  • Extended vSAN Health Services
  • vSAN Easy Install with 1-click fixes
  • Up to 50% greater IOPS for all-flash with optimized checksum and dedupe
  • Support for new next-gen workloads
  • vSAN for Photon in Photon Platform 1.1
  • Day 0 support for latest flash technologies
  • Expanded caching tier choice
  • Docker Volume Driver 1.1

What’s New and Value Proposition of vSAN 6.6

Let's take a closer look beyond the bullet list of what's new with vSAN 6.6, along with perspectives on how those features address different needs. The VMware vSAN proposition is to evolve and enable modernizing data infrastructures with HCI powered by vSphere along with vSAN.

Three main themes or characteristics (and benefits) of vSAN 6.6 include addressing (or enabling):

  • Reducing risk while scaling
  • Reducing cost and complexity
  • Scaling for today and tomorrow

VMware vSAN 6.6 summary
Image via VMware

Reducing risk while scaling

Reducing (or removing) risk while evolving your data infrastructure with HCI includes the flexibility of choosing among five supported hardware vendors along with native security. This includes security, availability, and resiliency enhancements (including intelligent rebuilds) without sacrificing storage efficiency (capacity) or effectiveness (performance productivity), management, or choice.

VMware vSAN DaRE
Image via VMware

Data at Rest Encryption (DaRE) of all vSAN data objects is enabled at the cluster level. The new functionality supports hybrid as well as all-flash SSD configurations along with stretched clusters. The VMware vSAN DaRE implementation is an alternative to using self-encrypting drives (SEDs), reducing cost, complexity, and management activity. All vSAN features, including data footprint reduction (DFR) features such as compression and deduplication, are supported. For security, vSAN DaRE integrates with compliance key management technologies including those from SafeNet, HyTrust, Thales, and Vormetric among others.
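As a side note on why cluster-level encryption can coexist with compression and deduplication, the general principle is that data footprint reduction is applied before the data is encrypted, since ciphertext looks random and no longer compresses. The following toy Python sketch is not vSAN's actual data path, and the XOR keystream is only a stand-in for real encryption; it just illustrates the ordering effect.

```python
# Toy illustration of why order matters when combining data footprint reduction
# with encryption; this is NOT vSAN's actual data path or real cryptography.
import os
import zlib

def xor_keystream(data: bytes, key: bytes) -> bytes:
    """Stand-in for encryption: XOR with a random keystream so the output
    looks random, the way real ciphertext would."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"vSAN object data block " * 4096   # compressible sample data
key = os.urandom(4096)

reduce_then_encrypt = xor_keystream(zlib.compress(payload), key)
encrypt_then_reduce = zlib.compress(xor_keystream(payload, key))

print("original size:      ", len(payload))
print("reduce then encrypt:", len(reduce_then_encrypt))  # compression savings preserved
print("encrypt then reduce:", len(encrypt_then_reduce))  # little or no savings
```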

VMware vSAN management
Image via VMware

An ESXi HTML5-based host client, along with a CLI via ESXCLI, provides an alternative for administering vSAN clusters in case your vCenter server(s) are offline. Management capabilities include monitoring of critical health and status details along with making configuration changes.
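For those curious what that looks like in practice, below is a rough, hedged Python sketch that simply shells out to esxcli on an ESXi host (for example over an SSH session) to pull vSAN cluster and health information while vCenter is down. The esxcli vsan namespaces shown are assumptions based on vSAN 6.6 era releases; verify the exact sub-commands on your own hosts before relying on them.

```python
# Hedged sketch: run on (or against) an ESXi host when vCenter is offline.
# The "esxcli vsan ..." namespaces are assumed from vSAN 6.x; confirm with
# "esxcli vsan" on your host, as commands vary by ESXi/vSAN version.
import subprocess

def run_esxcli(args):
    """Run an esxcli command on this ESXi host and return its text output."""
    result = subprocess.run(
        ["esxcli"] + args, capture_output=True, text=True, check=True
    )
    return result.stdout

if __name__ == "__main__":
    # Cluster membership and state as seen by this host.
    print(run_esxcli(["vsan", "cluster", "get"]))
    # Summary of vSAN health checks (namespace assumed present in 6.6 and later).
    print(run_esxcli(["vsan", "health", "cluster", "list"]))
```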

VMware vSAN health management
Image via VMware

Health monitoring enhancements include intelligent handling of degraded vSAN devices, proactively detecting impending device failures. As part of this functionality, if a replica of the data on the failing (or possibly soon-to-fail) device exists, vSAN can take action to maintain data availability.

Where to Learn More

The following are additional resources to find out more about vSAN and related technologies.

What this all means

With each new release, vSAN increases its features, functionality, and resiliency, along with the extensiveness associated with traditional storage and non-CI or HCI solutions. Continue reading more about VMware vSAN 6.6 in Part I here, part III (reducing cost and complexity) located here, part IV (scaling ROBO and data centers today) found here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the Spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

VMware vSAN V6.6 Part III (reducing costs and complexity)

server storage I/O trends

VMware vSAN V6.6 Part III (Reducing costs and complexity)

In case you missed it, VMware announced vSAN 6.6, its hyper-converged infrastructure (HCI) software-defined data infrastructure solution. This is the third of a five-part series about VMware vSAN V6.6. View Part I here, Part II (just the speeds and feeds please) located here, part IV (scaling ROBO and data centers today) found here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

VMware vSAN 6.6
Image via VMware

For those who are not aware, vSAN is VMware's virtual Storage Area Network that is software-defined, part of a software-defined data infrastructure (SDDI) and software-defined data center (SDDC). Besides being software-defined, vSAN is an HCI solution combining compute (server), I/O networking, and storage (space and I/O) along with hypervisors, management, and other tools.

Reducing cost and complexity

Reducing your total cost of ownership (TCO) includes lowering capital expenditures (CapEx) and operating expenditures (OpEx). VMware is claiming a CapEx and OpEx driven TCO reduction of 50%. Keep in mind that solutions such as vSAN can also help drive return on investment (ROI) as well as return on innovation (the other ROI) via improved productivity and effectiveness, as well as efficiencies (savings). Another aspect of addressing TCO and ROI is the flexibility of leveraging stretched clusters to address high availability (HA), backup/recovery (BR), business continuity (BC), and disaster recovery (DR) needs cost-effectively. These enhancements include efficiency (and effectiveness, e.g. productivity) at scale, proactive cloud analytics, and intelligent operations.

VMware vSAN stretch cluster
Image via VMware

Stretched clusters across sites provide low-cost (or cost-effective) local and remote resiliency and data protection. Upon a site failure, vSAN maintains availability by leveraging surviving-site redundancy. For performance and productivity effectiveness, I/O traffic is kept local where possible and practical, reducing cross-site network workload. Bear in mind that the best I/O is the one you do not have to do; the second best is the one with the least impact.

This means that if you can address I/Os as close to the application as possible (e.g. locality of reference), that is a better I/O. On the other hand, when data is not local, the best I/O is the one involving a local or remote site with the least overhead impact to applications as well as to server storage I/O (including network) resources. Also keep in mind that with vSAN you can fine-tune availability, resiliency, and data protection to meet various needs by adjusting the fault tolerance method (FTM) and the number of failures to tolerate (FTT).
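To put rough numbers on how the failures to tolerate and fault tolerance method settings trade capacity for resiliency, here is a back-of-the-envelope Python sketch. The overhead factors reflect the commonly cited mirroring and RAID-5/6 erasure coding ratios; treat it as an illustration, not an official vSAN sizing tool.

```python
# Back-of-the-envelope sketch of storage policy capacity overhead; not a sizing tool.
def raw_capacity_needed(usable_tb, ftt, method="mirror"):
    """Estimate raw capacity consumed for a given usable capacity.

    mirror (RAID-1): FTT=n keeps n+1 full copies of the data.
    raid5 (erasure coding, FTT=1): 3 data + 1 parity, ~1.33x overhead.
    raid6 (erasure coding, FTT=2): 4 data + 2 parity, 1.5x overhead.
    """
    if method == "mirror":
        return usable_tb * (ftt + 1)
    if method == "raid5" and ftt == 1:
        return usable_tb * 4 / 3
    if method == "raid6" and ftt == 2:
        return usable_tb * 6 / 4
    raise ValueError("unsupported method/FTT combination")

for method, ftt in [("mirror", 1), ("raid5", 1), ("mirror", 2), ("raid6", 2)]:
    raw = raw_capacity_needed(100, ftt, method)
    print(f"{method:6s} FTT={ftt}: {raw:6.1f} TB raw for 100 TB usable")
```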

server storage I/O locality of reference

Network- and cloud-friendly unicast communication enhancements: to improve performance, availability, and capacity (reducing CPU demand), multicast communications are no longer used, making for easier, simplified single-site and stretched-cluster configurations. When vSAN clusters are upgraded to V6.6, unicast is enabled.

VMware vSAN unicast
Image via VMware

Gaining insight and awareness, and adding intelligence to avoid flying blind: introducing vSAN Cloud Analytics and Proactive Guidance. Part of the VMware customer experience improvement program, it leverages cloud-based health checks for easy online detection of known issues along with relevant knowledge base articles as well as other support notices. Whether you choose to refer to this feature as advanced analytics, artificial intelligence (AI), or proactive rules-enabled management for problem isolation and resolution, I will leave that up to you.

VMware vSAN cloud analytics
Image via VMware

As part of the new tool's analytics capabilities and prescriptive problem resolution (hmm, some might call that AI or advanced analytics, just saying), health check issues are identified and notifications are provided along with suggested remediation. Another feature is the ability to leverage continuous proactive updates for advance remediation vs. waiting for subsequent vSAN releases. The net result and benefit is reduced time and complexity when troubleshooting converged data infrastructure issues spanning servers, storage, I/O networking, hardware, software, cloud, and configuration. In other words, it enables you to spend more time being productive vs. finding and fixing problems, leveraging informed awareness for smart decision-making.

Where to Learn More

The following are additional resources to find out more about vSAN and related technologies.

What this all means

Continue reading more about VMware vSAN 6.6 in part I here, part II (just the speeds and feeds please) located here, part IV (scaling ROBO and data centers today) found here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

More modernizing data protection, virtualization and clouds with certainty

This is a follow-up to a recent post about modernizing data protection and doing more than simply swapping out media or mediums like flat tires on a car, as well as part of the Quantum protecting data with certainty event series.

I was asked to present a keynote on modernizing data protection for cloud, virtual, and legacy environments (see earlier and related posts here and here) as part of a recent 15-city event series sponsored by Quantum (that was a disclosure btw ;) ). The series was titled Virtualization, Cloud and the New Realities for Data Protection, with a theme of strategies and technologies that will help you adapt to a changing IT environment.

Quantum data protection with certainty

Since late June (taking July and most of August off) and wrapping up last week, the event series has traveled to Boston, Chicago, Palo Alto, Houston, New York City, Cleveland, Raleigh, Atlanta, Washington DC, San Diego, Los Angeles, Mohegan Sun CT, St. Louis, Portland Oregon and King of Prussia (Philadelphia area).

The following are a series of posts via IT Knowledge Exchange (ITKE) that covered these events including commentary and perspectives from myself and others.

Data protection in the cloud, summary of the events
Practical solutions for data protection challenges
Big data’s new and old realities
Can you afford to gamble on data protection
Conversations in and around modernizing data protection
Can you afford not to use cloud based data protection

In addition to the themes in the above links, here are some more images, thoughts and perspectives from while being out and about at these and other events.

Datalink does your data center suck sign
While I was traveling, I saw this advertisement sign from Datalink (a Quantum partner that participated in some of the events) in a few different airports; it is a variation of the Data Domain tape sucks attention-getter. For those not familiar, the creature on the right is an oversized mosquito, with the company logos on the lower left being Datalink, NetApp, Cisco and VMware.

goddess of data fertility
While in Atlanta for one of the events at the Morton's in the SunTrust plaza, I saw the above sculpture in the lobby. Its real title is the goddess of fertility; however, I'm going to refer to it as the goddess of data fertility. After all, there is no such thing as a data or information recession.

The world and storageio runs on dunkin donuts
Traveling while out and about is like a lot of things, particularly IT and data infrastructure related ones: hurry up and wait. Not only does America run on Dunkin, so too does StorageIO.

Use your imagination
When out and about, sometimes instead of looking up or around, take a moment and look down, see what is under your feet, and then let your imagination go for a moment about what it means. Ok, nuff of that; drink your coffee and let's get back to things, shall we?

Delta 757 and PW2037 or PW2040
Just like virtualization and clouds, airplanes need physical engines to power them, and those engines have to be energy-efficient and effective. This means being very reliable, delivering good performance, and being fuel-efficient (e.g. a full 757 on a 1,500-mile trip can be in the neighborhood of 65-plus miles per gallon per passenger) with low latency (e.g. a fast trip). In this case, a Pratt and Whitney PW2037 (it could be a PW2040 as Delta has a few of them) on a Delta 757 is seen powering this flight as it climbs out of LAX on a Friday morning after one of the event series sessions the evening before in LA.

Ambulance waiting at casino
Not sure what to make of this image; however, it was taken while walking into the Mohegan Sun casino where we did one of the dinner events at the Michael Jordan restaurant.

David Chapa of Quantum in bank vault
Here is an image from one of the events in this series at a restaurant in Cleveland where the vault is a dining room. No, that is not a banker, well perhaps a data protection banker; it is the one and only (@davidchapa) David Chapa, aka the Chief Technology Evangelist (CTE) of Quantum. Check out his blog here.

Just before landing in portland
Nice view just before landing in Portland, Oregon, where that evening's topic was, as you might have guessed, data protection modernization, clouds, and virtualization. Don't be scared; be ready, learn, and find concerns so you can overcome them and have certainty with data protection in cloud, virtual, and physical environments.
Teamwork
Cloud, virtualization, and data protection modernization are a shared responsibility requiring teamwork and cooperation between the service or solution provider and the user or consumer. If the customer or consumer of a service uses the right tools, technologies, and best practices, and has done their homework on applicable levels of service with SLAs and SLOs, then they and a service provider with good capabilities should be in harmony with each other. Of course, having the right technologies and tools for the task at hand is also important.
Underground hallway connecting LAX terminals, path to the clouds
Moving your data to the cloud or a virtualized environment should not feel like a walk down a long hallway; assuming you have done your homework and that the service is safe, secure, and well taken care of, there should be fewer concerns. Now if that is a dark, dirty, dingy, dilapidated, dungeon-like hallway, then you just might be on the highway to hell vs. the stairway to heaven or the clouds ;).

clouds along california coastline
There continue to be barriers to cloud adoption and deployment for data protection among other uses.

Unlike the mountain ranges inland from the LA area coastline that form a barrier to the marine-layer clouds rolling further inland, many IT-related barriers can be overcome. The key to overcoming cloud concerns and barriers is identifying and understanding what they are so that resolutions, solutions, best practices, tools, or workarounds can be developed or put into place.

The world and storageio runs on dunkin donuts
Hmm, the breakfast of champions and road warriors: Dunkin Donuts, aka DD, not to be confused with DDUP, the former ticker symbol of Data Domain.

Tiered coffee
In the spirit of not treating everything the same and having different technologies or tools to meet various needs or requirements, it only makes sense that there are various hot beverage options including hot water for tea along with regular and decaffeinated coffee. Hmm, tiered hot beverages?


On the lighter side, things including technology of all types will and do break, even with maintenance, so having a standby plan or a support service to call can come in handy. In this case, the vehicle on the right did not hit the garage door that came off its tracks due to wear and tear as I was preparing to leave for one of the data protection events. Note to self: consider going from bi-annual garage door preventive maintenance to an annual service check-up.

Some salesman talking on phone in a quiet zone

While not part of or pertaining to data protection, clouds, virtualization, storage, or data infrastructure topics, the above photo was taken in a quiet section of an airport lounge while waiting for a flight to one of the events. This falls into the a picture is worth a thousand words category, as the sign just to the left of the salesperson talking loudly on his cell phone about his big successful customer call says Quiet Zone, with a symbol indicating no cell phone conversations.

How do I know the guy was not talking about clouds, virtualization, data infrastructure, or storage related topics? Simple: his conversation was so loud that everybody else in the lounge and I could hear the details of the customer conversation as it was being relayed back to sales management.

Note to those involved in sales or customer-related topics: be careful of your conversations in public and pseudo-public places including airports, airport lounges, airplanes, trains, hotel lobbies, and other venues; you never know who you will be broadcasting to.

Here is a link to a summary of the events along with common questions, thoughts and perspectives.

Quantum data protection with certainty

Thanks to everyone who participated in the events including attendees, as well as Quantum and their partners for sponsoring this event series. I look forward to seeing you while out and about at some future event or venue.

Ok, nuff said.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Cloud and Virtual Data Storage Networking

For those who have read any of my previous posts, or seen some of my articles, newsletters, videos, podcasts, webcasts, or in-person appearances, you may have heard that I have a new book coming out this summer.

Here in the northern hemisphere it's summer (well, technically the solstice is just around the corner), and in Minnesota the ice (from the winter) is off the lakes and rivers. Granted, there is some ice floating around that fell out of coolers for keeping beverages cool. This means that it is also fishing (and catching) season on the scenic St. Croix River.

Karen of Arcola catches first fish of 2011 season, St. Croix river, striped bass
Greg showing his first catch of the 2011 season, St. Croix walleye aka Walter or Wanda

FTC disclosures (and for fun): Karenofarcola is wearing a StorageIO baseball cap and I'm wearing a cap from a vendor marketing person who sent several, as they too enjoy fishing and boating. Funny thing about the cap: all of the river rats and fishing people think it is from the people who make rod reels instead of solutions that go around tape and disk reels. Note, if you feel compelled to send me baseball caps, send at least a pair so there is a backup, standby, spare, or extra one for a guest. The Mustang Survival jacket that I'm wearing with the Sea-Doo logo is something I bought myself. I did get a discount, however, since there was a Sea-Doo logo on it and I used to have Sea-Doo jet boats. Btw, that was some disclosure fun and humor!

Ok, enough of the fun stuff, let's get back to the main theme of this post.

My new book is the third in a series of solo projects that includes Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures (Elsevier) and The Green and Virtual Data Center (CRC).

While the official launch and general availability will be later in the summer, following are some links and related content to give you advance information about the new book.

Cloud and Virtual Data Storage Networking

Click on the above image which will take you to the CRC Press page where you can learn more including what the book is about, view a table of contents, see reviews and more. Also check out the video below to learn more as well as visit my main web site where you can learn about Cloud and Virtual Data Storage Networking, my other books and view (or listen to) related content such as white papers, solution briefs, articles, tips, web cast, pod cast as well as view the recent and upcoming events schedule.

I also invite you to join the Cloud and Virtual Data Storage Networking group.

You can also view the short video at Dailymotion, Metacafe, blip.tv, Veoh, Flickr, and Photobucket among other venues.

If you are interested in being a reviewer, send a note to cvdsn@storageio.com with your name, blog or website, and contact information including shipping address (sorry, no PO boxes) plus telephone (or Skype) number. Also indicate if you are a blogger, press/media, freelance writer, analyst, consultant, VAR, vendor, investor, IT professional, or other.

Watch for more news and information as we get closer to the formal launch and release; in the meantime, you can pre-order your copy now at Amazon, CRC Press, and other venues around the world.

Ok, time to get back to work or go fishing, nuff said

Cheers Gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Spring 2011 Server and StorageIO News Letter

StorageIO News Letter Image
Spring 2011 Newsletter

Welcome to the Spring 2011 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the Winter 2011 edition.

You can access this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions.

 

Click on the following links to view the Spring 2011 edition as HTML or PDF, or go to the newsletter page to view previous editions.

Follow via Google FeedBurner here or via email subscription here.

You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com

Enjoy this edition of the StorageIO newsletter, let me know your comments and feedback.

Nuff said for now

Cheers
Gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Fall 2010 StorageIO News Letter

StorageIO News Letter Image
Fall 2010 Newsletter

Welcome to the Fall 2010 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the August 2010 edition building on the great feedback received from recipients.

You can access this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions. Click on the following links to view the Fall 2010 edition as HTML or PDF, or go to the newsletter page to view previous editions.

Follow via Google FeedBurner here or via email subscription here.

You can also subscribe to the newsletter by simply sending an email to newsletter@storageio.com

Enjoy this edition of the StorageIO newsletter, let me know your comments and feedback.

Nuff said for now

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC) and Resilient Storage Networks (Elsevier)
twitter @storageio

IBM's Storwize or wise Storage, the V7000 and DFR

A few months ago IBM bought a Data Footprint Reduction (DFR) technology company called Storwize (read more about DFR and Storwize Real time Compression here, here, here, here and here).

A couple of weeks ago IBM renamed the Storwize real-time compression technology to, surprise surprise, IBM Real-time Compression (wow, wonder how lively that marketing focus group research discussion was).

Subsequently IBM recycled the Storwize name in time to be used for the V7000 launch.

Now to be clear right up front: currently the V7000 does not include real-time compression capabilities; however, I would look for that and other forms of DFR techniques to appear on an increasing basis in IBM products in the future.

IBM has a diverse storage portfolio with good products, some with longer legs than others to compete in the market. By long legs, I mean both technology and marketability, enabling their direct sales teams as well as partners including distributors or VARs to effectively compete with other vendors' offerings.

The enablement capability of the V7000 will be to give IBM and their business partners a product that they will want to go tell and sell to customers, competing with Cisco, Dell, EMC, Fujitsu, HDS, HP, NEC, NetApp and Oracle among others.

What about XIV?

For those interested in XIV, regardless of whether you are a fan, naysayer, or simply an observer, here, here and here are some related posts to view if you like (as well as comment on).

Back to the V7000

A couple of common themes about the IBM V7000 are:

  • It appears to be a good product based on the SVC platform with many enhancements
  • Expanding the industry scope and focus awareness around Data Footprint Reduction (DFR)
  • Branding the Storwize acquisition as real-time compression as part of their DFR portfolio
  • Confusion about using the Storwize name for a storage virtualization solution
  • Lack of Data Footprint Reduction (DFR) particularly real-time compression (aka Storwize)
  • Yet another IBM storage product adding to confusion around product positioning

Common questions that I'm being asked about the IBM V7000 include among others:

  • Is the V7000 based on LSI, NetApp or other third party OEM technology?

    No, it is based on the IBM SVC code base along with an XIV like GUI and features from other IBM products.

  • Is the V7000 based on XIV?

    No, as mentioned above, the V7000 is based on the IBM SVC code base along with an XIV like GUI and features from other IBM products.

  • Does the V7000 have DFR such as dedupe or compression?

    No, not at this time other than what was previously available with the SVC.

  • Does this mean there will be a change or defocusing on or of other IBM storage products?

    IMHO I do not think so, other than perhaps around XIV. If anything, I would expect IBM to start pushing the V7000 as well as the entire storage product portfolio more aggressively. Now there could be some defocusing on XIV or, put a different way, all products could be put on the same equal footing, letting the customer determine what they want based on effective solution selling from IBM and their business partners.

  • What does this mean for XIV? Is that product no longer the featured or marquee product?

    IMHO XIV remains relevant for the time being. However, I also expect it to be put on equal footing with other IBM products or, if you prefer, other IBM products, particularly the V7000, to be unleashed to compete with external vendors' solutions such as those from Cisco, Dell, EMC, Fujitsu, HDS, HP, NEC, NetApp and Oracle among others. Read more here, here and here about XIV remaining relevant.

  • Why would I not just buy an SVC and add storage to it?

    That is an option and a strength of SVC, which can sit in front of different IBM storage products as well as those of third-party competitors. However, with the V7000, there is now a turnkey storage solution to sell (and buy) instead of a virtualization appliance.

  • Is this a reaction to EMC VPLEX, HDS VSP, HP SVSP or 3PAR, Oracle/Sun 7000?

    Perhaps it is, perhaps it is a reaction to XIV, and perhaps it is a realization that IBM has a lot of IP that could be combined into a solution to respond to a market need, among many other scenarios. However, IBM has had a virtualization platform with a decent installed base in the form of SVC, which happens to be at the heart of the V7000.

  • Does this mean IBM is jumping on the bandwagon of using off-the-shelf servers instead of purpose-built hardware for storage systems, like Oracle, HP and others are doing?

    If you are new to storage or IBM, it might appear that way; however, IBM has been shipping storage systems that are based on general-purpose servers for a couple of decades now. Granted, some of those products are based on IBM PowerPC (e.g. the Power platform) also used in their pSeries, formerly known as the RS/6000. For example, the DS8000 series, similar to its predecessors the ESS (aka Shark) and VSS before that, has been based on the Power platform. Likewise, SVC has been based on general-purpose processors since its inception.

    Likewise, while only generally deployed in two-node pairs, the DS8000 is architected to scale into many more nodes than what has been shipped, meaning that IBM has had clustered storage for some time; granted, some of their competitors will dispute that.

  • How does the V7000 stack up from a performance standpoint?

    Interestingly, IBM has traditionally been very good, if not out front, in running public benchmarks and workload simulations ranging from SPC to TPC to SPEC to Microsoft ESRP among others for all of their storage systems except one (e.g. XIV). However, true to traditional IBM systems and storage practices, just a couple of weeks after the V7000 launch, IBM released the first wave of performance comparisons, including SPC results for the V7000, which can be seen here to compare with others.

  • What do I think of the V7000?

    Like other products, both in the IBM storage portfolio and from other vendors, the V7000 has its place, and in that place, which needs to be further articulated by IBM, it has a bright future. I think that for many environments, particularly those that were looking at XIV, the V7000 will be a good IBM-based solution as well as a competitor to solutions from Dell, EMC, HDS, HP, NetApp, and Oracle, as well as some smaller startup providers.

Comments, thoughts and perspectives:

IBM is part of a growing industry trend of realizing that the data footprint reduction (DFR) focus should expand in scope beyond backup and dedupe to span an entire organization using many different tools, techniques, and best practices. These include archiving of databases, email, and file systems for both compliance and non-compliance purposes; backup/restore modernization or redesign; compression (real-time for online and post-processing); consolidation of storage capacity and performance (e.g. fast 15K SAS, caching or SSD); data management (including some data deletion where practical); data dedupe; space-saving snapshots such as copy-on-write or redirect-on-write; thin provisioning; as well as virtualization for both consolidation and enabling agility.

IBM has some great products; however, too often with such a diverse product portfolio, better navigation and messaging of what to use when, where, and why is needed, not to mention the confusion over the current product du jour.

As has been the case for the past couple of years, let's see how this all plays out a year or so from now. Meanwhile, cast your vote or see the results from others as to whether XIV remains relevant. Likewise, join in on the new poll below as to whether the V7000 is now relevant or not.

Note: As with the ongoing is XIV relevant polling (above), for the new is the V7000 relevant polling (below), you are free to vote early, vote often, and vote for those who cannot or care not to vote.

Here are some links to read more about this and related topics:

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Revisiting whether IBM XIV is still relevant with the V7000

Over the past couple of years I have routinely been asked what I think of XIV by fans as well as foes, in addition to many curious or neutral onlookers including XIV competitors, other analysts, media, bloggers, and consultants, as well as IBM customers, prospects, VARs, and business partners. Consequently I have done some blog posts about my thoughts and perspectives.

It's time again for what has turned out to be the third annual perspective on IBM XIV and whether it is still relevant, as a result of the recent IBM V7000 (excuse me, I meant to say IBM Storwize V7000) storage system launch.

For those wanting to take a step back in time, here is an initial thought perspective about IBM and XIV storage from 2008, as well as the 2009 revisiting of XIV relevance post and the latest V7000 companion post found here.

What is the IBM V7000?

Here is a link to a companion post pertaining to the IBM V7000 that you will want to have a look at.

In a nutshell, the V7000 is a new storage system with built-in storage virtualization (or virtual storage if you prefer) that leverages IBM-developed software from its SAN Volume Controller (SVC), DS8000 enterprise system, and others.

Unlike the SVC, which is a gateway or appliance head that virtualizes various IBM and third-party storage systems providing data movement, migration, copy, replication, snapshot, and other agility or abstraction capabilities, the V7000 is a turnkey integrated solution.

By being a turnkey solution, the V7000 combines the functionality of the SVC as a basis, adding other IBM technologies including a GUI management tool similar to that found on XIV along with dedicated attached storage (e.g. SAS disk drives including fast, high capacity, as well as SSD).

In other words, for those customers or prospects who liked XIV because of its management GUI interface, you may like the V7000.

For those who liked the functionality capabilities of the SVC however needed it to be a turnkey solution, you might like the V7000.

For those of you who did not like or competed with the SVC in the past, well, you know what to do.

BTW, for those who knew of Storwize as the Data Footprint Reduction (DFR) vendor with real-time compression that IBM recently acquired and renamed IBM Real-time Compression, the V7000 does not contain any real-time compression (yet).

What are my thoughts and perspectives?

In addition to the comments in the companion post found here, right now I'm of the mindset that XIV does not fade away quietly into the sunset or take a timeout at the IBM technology rest and recuperation resort located on the beautiful someday isle.

The reason I think XIV will remain somewhat relevant for some time (time to be determined, of course) is that IBM has expended significant resources over the past two and a half years to promote it. Those resources have included marketing time and messaging space, in some instances perhaps inadvertently at the expense of other IBM storage solutions. Similarly, a lot of time, money, and effort have gone into business partner outreach to establish and keep XIV relevant with those communities, who in turn have gone to their customers to tell and sell the XIV story, and some of those customers have bought it.

Consequently or as a result of all of that investment, I would be surprised if IBM were simply to walk away from XIV at least near term.

What I do see happening, including some early indicators, is that the V7000 (along with other IBM products) will now be getting equal billing, resources, and promotional support. Whether this means the XIV division finally being assimilated into the mainstream IBM fold on equal footing with other IBM products, or other IBM products being brought up to the elevated position of XIV, is subject to interpretation and your own perception.

I expect to continue to see IBM teams and subsequently their distributors, VARs, and other business partners get more excited talking about the V7000 along with other IBM solutions. For example, SONAS for bulk, clustered, and scale-out NAS, the DS8000 for the high end, the GMAS and Information Archive platforms, as well as the N series and DS3K/DS4K/DS5K, not to mention the TS/TL backup and archive target platforms along with associated Tivoli software. Also, let's not forget about SVC among other IBM solutions including, of course, XIV.

I would also not be surprised if some of the diehard XIV loyalists (e.g. sales and marketing reps who were faithful members of Moshe Yanai's army, who appears to be MIA at IBM) pack up their bags and leave the IBM storage SANdbox in virtual protest. That is, refusing to be assimilated into the general IBM storage pool and thus leaving for greener IT pastures elsewhere. Some will stick around, discovering the opportunities associated with selling a broader, more diverse product portfolio into their target accounts where they have spent time and resources to establish relationships or get their proverbial foot in the door.

Consequently, I think XIV remains somewhat relevant for now, given all of the resources that IBM poured into it and the relationships that their partner ecosystem spent time establishing with the installed customer base.

However, I do think that the V7000, despite some confusion (here and here) around its recycled Storwize name, has some legs, being built around the field-proven SVC and other IBM technology. Those legs are both from a technology standpoint and as a means to get the entire IBM systems and storage group energized to go out and compete with their primary nemeses (e.g. Dell, EMC, HP, HDS, NetApp and Oracle among others).

As has been the case for the past couple of years, let's see how this all plays out a year or so from now. Meanwhile, cast your vote or see the results from others as to whether XIV remains relevant. Likewise, join in on the new poll below as to whether the V7000 is now relevant or not.

Note: As with the ongoing is XIV relevant polling (above), for the new is the V7000 relevant polling (below), you are free to vote early, vote often, and vote for those who cannot or care not to vote.

Here are some links to read more about this and related topics:

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Have VTLs or VxLs become Zombies, Declared dead yet still alive?

Have you heard or read the reports and speculation that VTLs (Virtual Tape Libraries) are dead?

It seems that in IT the all too popular trend is to declare something dead so that your new product or technology can have a chance of making it in the market, or perhaps be seen in a better light.

Sometimes this approach works to temporarily freeze the market until common sense and clarity return, or until something else fun to talk about comes along; in other cases, the messages can fall on deaf ears.

The approach of declaring something dead tends to play well for those who like shiny new toys (SNT) or new shiny toys (NST) and being on the popular, cool, trendy bandwagon.

Not surprisingly, while some actual IT customers can fall into the SNT or NST syndrome, it's often the broader industry, including media, bloggers, analysts, consultants, and other self-proclaimed or anointed pundits as well as vendors, who latch on to the declare-it-dead movement. After all, who wants to talk about something that is old, boring, and already being sold to paying customers who are using it? Now this is not a bad thing, as we need a balance of up-and-coming challengers to keep the status quo challenged; likewise, we need a balance of the new to avoid death grips on the old and what is working.

Likewise, many IT customers, particularly larger ones, tend to be very risk-averse and conservative with their budgets, protecting their investments; thus they may only go leading (bleeding) edge if there is a dual redundant blood bank with a backup on hot standby (that's some HA humor BTW).

Another reason for declaring items dead in support of SNT and NST is that while many of the commonly declared dead items are on the proverbial plateau of productivity for IT customers, that also can mean that they are on the plateau of profitability for the vendors.

However, not all good things last, and at some point there is the need to transition from the old to the new. This is where things like virtualization, including virtual tape libraries, virtual disk libraries, virtual storage libraries, or whatever you want to call a VxL (more on what a VxL is in a moment), can come into play.

I realize that for some, particularly those who like to grasp onto SNT or NST and ride the dead pool bandwagons, this will probably appear snarky or cynical, which is fine. After all, for some of you, you should be laughing all the way to the bank, and if not, you may in fact be missing out on an opportunity for playing in the dead pool marketing game.

Now back to VxL.

In the case of VTLs, for some it is the T word that bothers them, you know, T as in Tape, which is not an SNT or NST in an age where SSD has supposedly killed the disk drive, which allegedly terminated tape (yeah right). Sure, tape is not being used as much for backup as it has been in the past, with its role shifting to that of longer-term retention, something that it is well suited for.

For tape fans (or cynics) you can read more here, here and here. However, there is still a large amount of backup/restore along with other data protection or preservation (e.g. archiving) processing (software tools, processes, procedures, skill sets, management tools) that still expects to see tape.

Hence this is where VTLs or VxLs come into play, leveraging virtualization in a Life Beyond Consolidation scenario (and here), providing abstraction, transparency, agility, and emulation, and IMHO they are still very much alive and evolving.

Ok, for those who do not like or believe in its continued existence and evolving role, substitute the T (tape) with X and you get a VxL. That is, plug in whatever X word makes you happy or marketable, or a shiny new TLA. For example: Virtual Disk Library, Virtual Storage Library, Virtual Backup Library, Virtual Compression Library, Virtual Dedupe Library, Virtual ILM Library, Virtual Archive Library, Virtual Cloud Library, and so forth. Granted, some VxLs only emulate tape and hence are VTLs, while others support NAS and other protocols (or personalities), not to mention functionality ranging from replication to DFR as well as automated policy management.

However, keep in mind that whether your preference is VTL, VxL, or whatever other buzzword bingo name you want to use or come up with, look at how virtualization in the form of abstraction, transparency, and emulation can bridge the gap between the new (disk-based data protection combined with DFR, Data Footprint Reduction) and the old (existing backup/restore, archive, or other management tools and processes).

Here are some additional links pertaining to VTLs (excuse me, VxLs):

  • Virtual tape libraries: Old backup technology holdover or gateway to the future?
  • Not to mention here, here, here, here or here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

More Data Footprint Reduction (DFR) Material

This is part of an ongoing series of short industry trends and perspectives (ITP) blog post briefs based on what I am seeing and hearing in my conversations with IT professionals on a global basis.

These short posts complement other longer posts along with traditional industry trends and perspective white papers, research reports, videos, podcasts, webcasts as well as solution brief content found at www.storageioblog.com/reports and www.storageio.com/articles.

If you recall from previous posts, including here, here or here among others, Data Footprint Reduction (DFR) is a collection of tools, technologies and best practices for addressing growing data storage management and cost impacts.

DFR encompasses many different tools, techniques and technologies across various applications ranging from active or primary storage to secondary and inactive along with backup and archive.

Some of these techniques and technologies include archiving, backup modernization, compression, data management, dedupe, space-saving snapshots and thin provisioning among others.

Following are some links to various articles and commentary pertaining to DFR:

  • Using DFR including dedupe and compression to defray storage and management costs
  • Deduplicate, compress and defray costs of data storage management
  • Virtual tape libraries: Old backup technology holdover or gateway to the future?
  • As well as here, here or here

In the spirit of DFR, that is doing more with less, nuff said (for now).

Of course let me know what your thoughts and perspectives are on this and other related topics.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Data footprint reduction (Part 2): Dell, IBM, Ocarina and Storwize

Dell

IBM

Over the past couple of weeks there has been a flurry of IT industry activity around data footprint impact reduction, with Dell buying Ocarina and IBM acquiring Storwize. For those who want the quick (compacted, reduced) synopsis of what Dell buying Ocarina as well as IBM acquiring Storwize means, read the first post in this two-part series as well as some of my comments here and here.

This piece and its companion in part I of this two-part series are about expanding the discussion to the much larger opportunity for vendors or VARs in overall data footprint impact reduction beyond where they are currently focused. Likewise, this is about IT customers realizing that there are more opportunities to address data and storage optimization across their entire organization using various techniques instead of just focusing on backup or VMware virtual servers.

Who are Ocarina and Storwize?
Ocarina is a data and storage management software startup focused on data footprint reduction using a variety of approaches, techniques, and algorithms. They differ from the traditional data dedupers (e.g. Asigra, BakBone, CommVault, EMC Avamar, Data Domain and NetWorker, ExaGrid, FalconStor, HP, IBM ProtecTIER and TSM, Quantum, Sepaton and Symantec among others) by looking at data footprint reduction beyond just backup.

This means looking at how to reduce the data footprint across different types of data including videos, images, as well as text-based documents among others. As a result, the market sweet spot for Ocarina is general data footprint reduction including static along with active data for entertainment, video surveillance or gaming, reference data, web 2.0, and other bulk storage application data needs (this should complement Dell's recent Exanet acquisition).

What this means is that Ocarina is very well suited to address the rapidly growing amount of unstructured data that may not otherwise be handled as efficiently with dedupe alone.

Storwize is a data and storage management startup focused on data footprint reduction using inline compression, with an emphasis on maintaining performance for reads as well as writes of unstructured as well as structured database data. Consequently the market sweet spot for Storwize is boosting the capacity of existing NAS storage systems from different vendors without negatively impacting performance. The trade-off of the Storwize approach is that you do not get the spectacular data reduction ratios associated with backup-centric or focused dedupe; however, you maintain the performance associated with online storage that some dedupers dream of.

Both Dell and IBM have existing dedupe solutions for general purpose as well as backup, along with other data footprint impact reduction tools (either owned or via partners). Now they are both expanding their focus and reach similar to what others such as EMC, HP, NetApp, Oracle and Symantec among others are doing. What this means is that someone at Dell and IBM sees that there is much more to data footprint impact reduction than just a focus on dedupe for backup.

Wait, what does all of this discussion (or read here for background issues, challenges and opportunities) about unstructured data and changing access lifecycles have to do with dedupe, Ocarina and Storwize?

Continue reading, as this is about the expanding opportunity for data footprint reduction across entire organizations. That is, more data is being kept online, and the expanding data footprint impact needs to be addressed to meet business objectives using various techniques balancing performance, availability, capacity, and energy or economics (PACE).


What does all of this have to do with IBM buying Storwize and Dell acquiring Ocarina?
If you have not pieced this together yet, let me net it out.

This is about the opportunity to address the organization-wide expanding data footprint impact across all applications, types of data as well as tiers of storage to support business growth (more data to store) while maintaining QoS yet reducing per-unit costs, including management.

This is about expanding the story to the broader data footprint impact reduction opportunity from the more narrowly focused backup and dedupe discussion, which is still in its infancy relative to its full market potential (read more here).

Now are you seeing where this is going and fits?

Does this mean IBM and Dell defocus on their existing Dedupe product lines or partners?
I do not believe so, at least as long as their respective revenue prevention departments are kept on the sidelines and off of the field of play. What I mean by this is that the challenge for IBM and Dell is similar to that of others such as EMC that have diverse portfolios or technology toolboxes. The challenge is messaging to the bigger issues, then aligning the right tool to the task at hand to address given issues and opportunities, instead of being singularly focused on a specific product and causing revenue prevention elsewhere.

As an example, for backup, I would expect Dell to continue to work with its existing dedupe backup-centric partners and technologies while finding new opportunities to leverage its Ocarina solution. Likewise, I would expect IBM to continue to show customers where Tivoli software-based dedupe or Protectier (aka the deduper formerly known as Diligent) or other target-based dedupe fits, and to expand into other data footprint impact areas with Storwize.

Does this change the playing field?
IMHO these moves, as well as some previous moves by the likes of EMC and NetApp among others, are examples of expanding the scope and dimension of the playing field. That is, the focus is much more than just dedupe for backup or of virtual machines (e.g. VMware vSphere or Microsoft Hyper-V).

This signals a growing awareness of the much larger and broader opportunity of organization-wide data footprint impact reduction. In the broader context, some applications or data get compressed either in application software such as databases, file systems, operating systems or even hypervisors, as well as in networks using protocol or bandwidth optimizers, using either inline compression or post-processing techniques, as has been the case with streaming tape devices for some time.

This also means that while the primary focus or marketing angle for dedupe up until recently has been reduction ratios, data transfer rates also become important to meet the needs of time- or performance-sensitive applications.

Hence the role of policy-based data footprint reduction, where the right tool or technique is applied to meet specific service requirements. For those vendors with a diverse data footprint impact reduction toolkit including archive, compression, dedupe and thin provisioning among other techniques, I would expect to hear expanded messaging around the theme of applying the right tool to the task at hand.
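To illustrate the idea (and only as a back of the napkin sketch, not any particular vendor's policy engine), here is a hypothetical Python example of picking a technique based on a few simple, assumed policy inputs such as data class, access age and performance sensitivity:

# Hypothetical sketch: policy-based selection of a data footprint reduction
# technique. The rules and thresholds are illustrative assumptions only.

def select_reduction_technique(data_class: str, days_since_access: int,
                               performance_sensitive: bool) -> str:
    """Pick a reduction technique based on simple, example policy rules."""
    if days_since_access > 365:
        return "archive"                    # dormant data: move it off primary storage
    if data_class == "backup":
        return "dedupe"                     # repetitive backup streams dedupe well
    if performance_sensitive:
        return "real-time compression"      # keep response time, modest reduction
    return "post-process compression or dedupe"

# Example: an online database volume accessed daily
print(select_reduction_technique("database", days_since_access=1,
                                 performance_sensitive=True))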

Does this mean Dell bought Ocarina to accessorize EqualLogic?
Perhaps, however that would then beg the question of why EqualLogic needs accessorizing. Granted, there are many EqualLogic along with other Dell-sold storage systems attached to Dell and other vendors' servers operating as NFS or Windows CIFS file servers that are candidates for Ocarina. However, there are also many environments that do not yet include Dell EqualLogic solutions where Ocarina is a means for Dell to extend its reach, enabling those organizations to do more with what they have while supporting growth.

In other words, Ocarina can be used to accessorize, or it can be used to generate and create pull-through for various Dell products. I also see a very strong affinity and opportunity for Dell to combine its recent Exanet NAS storage clustering software with Dell servers and storage to create bulk or scale-out solutions similar to what HP and other vendors have done. Of course, what Dell does with the Ocarina software over time, where it integrates it into its own products as well as OEMs it to others, should be interesting to watch or speculate upon.

Does this mean IBM bought Storwize to accessorize XIV?
Well, I guess if you put a gateway (or software on a server, which is the same thing) in front of XIV to transform it into a NAS system, sure, then Storwize could be used to increase the net usable capacity of the XIV installed base. However, that is a lot of work and cost for what is on a relative basis a small footprint, yet it is a viable option nevertheless.

IMHO IBM has much more of a play, perhaps a home run, by walking before they run by placing Storwize in front of their existing large installed base of NetApp N series (not to mention targeting NetApp's own installed base) as well as complementing their SONAS solutions. From there, as IBM gets their legs and mojo, they could go on the attack by going after other vendors' NAS solutions with an efficiency story, similar to how IBM server groups target other vendors' server business for takeout opportunities, except in a complementary manner.

Longer term, I would not be surprised to see IBM continue development of the block-based IP (as well as file) in the Storwize product for deployment in solutions ranging from SVC to their own or OEM-based products, along with articulating their comprehensive data footprint reduction solution portfolio. What will be important for IBM is articulating what solution to use when, where, why and how without confusing their customers, partners and the rest of the industry (something that Dell will also have to do).


Wrap up (for now)

Organizations of all shapes and sizes are encountering some form of growing data footprint impact that currently, or soon will, need to be addressed. Given that different applications and types of data along with associated storage mediums or tiers have various performance, availability, capacity, energy and economic characteristics, multiple data footprint impact reduction tools or techniques are needed. What this all means is that the focus of data footprint reduction is expanding beyond just dedupe for backup or other early deployment scenarios.

Note that what this means is that dedupe has an even brighter future beyond where it is currently focused, which is still only scratching the surface of potential market adoption, as was discussed in part 1 of this series.

However, this also means that dedupe is not the only solution for all data footprint reduction scenarios. Other techniques including archiving, compression, data management, thin provisioning, data deletion, tiered storage and consolidation will start to gain respect, coverage, discussion and debate.

Bottom line: use the most applicable technologies or combinations along with best practices for the task and activity at hand.

For some applications, reduction ratios are the important focus, so the tools or modes of operation that achieve those results take priority.

Likewise, for other applications where the focus is on performance with some data reduction benefit, tools are optimized for performance first and reduction second.

Thus I expect messaging from some vendors to adjust (expand) to the capabilities that they have in their toolbox (product portfolio) of offerings.

Consequently, IMHO some of the backup-centric dedupe solutions may find themselves in niche roles in the future unless they can diversify. Vendors with multiple data footprint reduction tools will also do better than those with only a single-function or narrowly focused tool.

However, for those who only have a single or perhaps a couple of tools, well, guess what the approach and messaging will be. After all, if all you have is a hammer everything looks like a nail; if all you have is a screwdriver, well, you get the picture.

On the other hand, if you are still not clear on what all this means, send me a note, give me a call, post a comment or a tweet and I will be happy to discuss it with you.

Oh, FWIW, if interested, disclosure: Storwize was a client a couple of years ago.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Data footprint reduction (Part 1): Life beyond dedupe and changing data lifecycles

Over the past couple of weeks there has been a flurry of IT industry activity around data footprint impact reduction with Dell buying Ocarina and IBM acquiring Storwize. For those who want the quick (compacted, reduced) synopsis of what Dell buying Ocarina as well as IBM acquiring Storwize means, read this post here along with some of my comments here and here.

Now, before any Drs or Divas of Dedupe get concerned and feel the need to debate dedupe's expanding role, success or applicability, relax, take a deep breath, then read on and take another breath before responding if so inclined.

The reason I mention this is that some may mistake this as a piece against, or not in favor of, dedupe since it talks about life beyond dedupe, which could be read as indicating a diminished role for dedupe; that is not the case (read ahead and see figure 5 for the bigger picture).

Likewise, since this piece talks about archiving for compliance and non-regulatory situations along with compression, data management and other forms of data footprint reduction, some might feel compelled to defend dedupe's honor and future role.

Again, relax, take a deep breath and read on, this is not about the death of dedupe.

Now for others, you might wonder why the dedupe tongue-in-cheek humor mentioned above (which is what it is), and the answer is quite simple. The industry in general is drunk on dedupe and in some cases has numbed its senses, not to mention blurred its vision of the even bigger opportunities for the business benefits of data footprint reduction beyond today's backup-centric or VMware server virtualization dedupe discussions.

Likewise, it is time for the industry to wake (or sober) up and, instead of trying to stuff everything under or into the narrowly focused dedupe bottle, realize that there is a broader umbrella called data footprint impact reduction which includes, among other techniques, dedupe, archive, compression, data management, data deletion and thin provisioning across all types of data and applications. What this means is a broader opportunity or market than what exists or is being discussed today, leveraging different techniques, technologies and best practices.

Consequently, this piece is about expanding the discussion to the larger opportunity for vendors or VARs to extend their focus to the bigger world of overall data footprint impact reduction beyond where they are currently focused. Likewise, this is about IT customers realizing that there are more opportunities to address data and storage optimization across your entire organization using various techniques instead of just focusing on backup.

In other words, there is a very bright future for dedupe as well as for other techniques and technologies that fall under the data footprint reduction umbrella, including data stored online, offline, near line, primary, secondary, tertiary, virtual and in a public or private cloud.

Before going further, however, let's take a step back and look at some business along with IT issues, challenges and opportunities.

What is the business and IT issue or challenge?
Given that there is no such thing as a data or information recession (shown in figure 1), IT organizations of all sizes are faced with the constant demand to store more data, including multiple copies of the same or similar data, for longer periods of time.


Figure 1: IT resource demand growth continues

The result is an expanding data footprint and increased IT expenses, both capital and operational, due to additional Infrastructure Resource Management (IRM) activities to sustain given levels of application Quality of Service (QoS) delivery, shown in figure 2.

Some common IT costs associated with supporting an increased data footprint include among others:

  • Data storage hardware and management software tools acquisition
  • Associated networking or IO connectivity hardware, software and services
  • Recurring maintenance and software renewal fees
  • Facilities fees for floor space, power and cooling along with IT staffing
  • Physical and logical security for data and IT resources
  • Data protection for HA, BC or DR including backup, replication and archiving


Figure 2: IT Resources and cost balancing conflicts and opportunities

Figure 2 shows that the result is IT organizations of all sizes being faced with having to do more with what they have, or with less, including maximizing available resources. In addition, IT organizations often have to overcome common footprint constraints (available power, cooling, floor space, server, storage and networking resources, management, budgets, and IT staffing) while supporting business growth.

Figure 2 also shows that to support demand, more resources are needed (real or virtual) in a denser footprint, while maintaining or enhancing QoS plus lowering per-unit resource cost. The trick is improving on available resources while maintaining QoS in a cost-effective manner. By comparison, traditionally if costs are reduced, one of the other curves (amount of resources or QoS) is often negatively impacted, and vice versa. Meanwhile, in other situations the result can be moving problems around that later resurface elsewhere. Instead, find, identify, diagnose and prescribe the applicable treatment or form of data footprint reduction or other IT IRM technology, technique or best practice to cure the ailment.

What is driving the expanding data footprint?
Granted, more data can be stored in the same or smaller physical footprint than in the past, thus requiring less power and cooling per GByte, TByte or PByte. However, data growth rates necessary to sustain business activity, enhance IT service delivery and enable new applications are placing continued demands to move, protect, preserve, store and serve data for longer periods of time.

The popularity of rich media and Internet-based applications has resulted in explosive growth of unstructured file data, requiring new and more scalable storage solutions. Unstructured data includes spreadsheets, PowerPoint slide decks, Adobe PDF and Word documents, web pages, as well as video and audio files such as JPEG, MP3 and MP4. This trend toward increasing data storage requirements does not appear to be slowing anytime soon for organizations of all sizes.

After all, there is no such thing as a data or information recession!

Changing data access lifecycles
Many strategies or marketing stories are built around the premise that shortly after data is created, it is seldom, if ever, accessed again. The traditional transactional model lends itself to what has become known as information lifecycle management (ILM), where data can and should be archived or moved to lower cost, lower performing, high density storage, or even deleted where possible.

Figure 3 shows, as an example on the left side of the diagram, the traditional transactional data lifecycle with data being created and then going dormant. The amount of dormant data will vary by the type and size of an organization along with its application mix.


Figure 3: Changing access and data lifecycle patterns

However, unlike the transactional data lifecycle models where data can be removed after a period of time, Web 2.0 and related data needs to remain online and readily accessible. Unlike traditional data lifecycles where data goes dormant after a period of time, on the right side of figure 3, data is created and then accessed on an intermittent basis with variable frequency. The frequency between periods of inactivity could be hours, days, weeks or months and, in some cases, there may be sustained periods of activity.

A common example is a video or some other content that gets created and posted to a web site or social networking site such as Facebook, LinkedIn, or YouTube among others. While the content itself may not change once posted, additional comment and collaborative data can be wrapped around it as additional viewers discover and discuss the content. Solution approaches for this new category and data lifecycle model include low cost, relatively good performing, high capacity storage such as clustered bulk storage, as well as leveraging different forms of data footprint reduction techniques.

Given that a large (and growing) percentage of new data is unstructured, NAS-based storage solutions including clustered, bulk, cloud and managed service offerings with file-based access are gaining in popularity. To reduce cost while supporting increased business demands (figure 2), a growing trend is to utilize clustered, scale-out and bulk NAS file systems that support NFS and CIFS for concurrent large and small IOs, as well as optionally pNFS for large parallel access of files. These solutions are also increasingly being deployed with either built-in or add-on data footprint reduction techniques including archive, policy management, dedupe and compression among others.

What is your data footprint impact?
Your data footprint impact is the total data storage needed to support your various business application and information needs. Your data footprint may be larger than the amount of actual data storage you have, as seen in figure 4. In figure 4, the example is an organization that has 20TBytes of storage space allocated and being used for databases, email, home directories, shared documents, engineering documents, financial and other data in different formats (structured and unstructured), not to mention varying access patterns.


Figure 4: Expanding data footprint due to data proliferation and copies being retained

Of the 20TBytes of data allocated and used, it is very likely that the consumed storage space is not 100 percent used. Database tables may be sparsely (empty or not fully) allocated and there is likely duplicate data in email and other shared documents or folders. Additionally, of the 20TBytes, 10TBytes are duplicated to three different areas on a regular basis for application testing, training and business analysis and reporting purposes.

The overall data footprint is the total amount of data including all copies plus the additional storage required for supporting that data such as extra disks for Redundant Array of Independent Disks (RAID) protection or remote mirroring.

In this overly simplified example, the data footprint and subsequent storage requirement are several times that of the 20TBytes of data. Consequently, the larger the data footprint the more data storage capacity and performance bandwidth needed, not to mention being managed, protected and housed (powered, cooled, situated in a rack or cabinet on a floor somewhere).
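For those who like to see the math, here is a quick back of the napkin sketch in Python of the above example; the RAID and remote mirroring overhead factors are illustrative assumptions, not measured values:

# Rough sketch of the data footprint arithmetic described above. The RAID and
# remote mirroring overhead factors are illustrative assumptions; actual
# overhead depends on RAID level, replication topology and retention.

primary_data_tb = 20          # allocated and used storage from the example
extra_copies_tb = 10 * 3      # 10TB duplicated to three areas (test, training, reporting)
raid_overhead = 1.25          # assume RAID parity adds roughly 25 percent
mirror_factor = 2.0           # assume primary data also has a full remote mirror

footprint_tb = ((primary_data_tb + extra_copies_tb) * raid_overhead
                + primary_data_tb * (mirror_factor - 1))

print(f"Overall data footprint: {footprint_tb:.1f} TB "
      f"vs {primary_data_tb} TB of primary data")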

Data footprint reduction techniques
While data storage capacity has become less expensive on a relative basis, as data footprints continue to expand to support business requirements, more IT resources will need to be made available in a cost-effective, yet QoS-satisfying manner (again, refer back to figure 2). What this means is that more IT resources including server, storage and networking capacity, management tools along with associated software licensing and IT staff time will be required to protect, preserve and serve information.

By more effectively managing the data footprint across different applications and tiers of storage, it is possible to enhance application service delivery and responsiveness as well as facilitate more timely data protection to meet compliance and business objectives. To realize the full benefits of data footprint reduction, look beyond backup and offline data improvements to include online and active data using various techniques such as those in table 1 among others.

There are several methods (shown in table 1) that can be used to address data footprint proliferation without compromising data protection or negatively impacting application and business service levels. These approaches include archiving of structured (database), semi-structured (email) and unstructured (general files and documents) data, data compression (real-time and offline) and data deduplication.

 

Archiving
  • When to use: Structured (database), email and unstructured data
  • Characteristic: Software to identify and remove unused data from active storage devices
  • Examples: Database, email and unstructured file solutions with archive storage
  • Caveats: Time and knowledge to know what and when to archive and delete; data and application aware

Compression
  • When to use: Online (database, email, file sharing), backup or archive
  • Characteristic: Reduce the amount of data to be moved (transmitted) or stored on disk or tape
  • Examples: Host software, disk or tape (network routers) and compression appliances or software, as well as appearing in some primary storage system solutions
  • Caveats: Software-based solutions require host CPU cycles, impacting application performance

Deduplication
  • When to use: Backup or archiving of recurring and similar data
  • Characteristic: Eliminate duplicate files or file content observed over a period of time to reduce data footprint
  • Examples: Backup and archive target devices and Virtual Tape Libraries (VTLs), specialized appliances
  • Caveats: Works well in background mode for backup data to avoid performance impact during data ingestion

Table 1: Data footprint reduction approaches and techniques

Archiving for compliance and general data retention
Data archiving is often perceived as a solution for compliance; however, archiving can be used for many other non-compliance purposes. These include general data footprint reduction, boosting performance, and enhancing routine data maintenance and data protection. Archiving can be applied to structured database data, semi-structured email data and attachments, and unstructured file data.

A key to deploying an archiving solution is having insight into what data exists along with applicable rules and policies to determine what can be archived, for how long, how many copies are needed and how data ultimately may be retired or deleted. Archiving requires a combination of hardware, software and people to implement business rules.

A challenge with archiving is having the time and tools available to identify what data should be archived and what data can be securely destroyed when no longer needed. Further complicating archiving is that knowledge of the data value is also needed; this may well include legal issues as to who is responsible for making decisions on what data to keep or discard.

If a business can invest in the time and software tools, as well as identify which data to archive to support an effective archive strategy, the returns can be very positive towards reducing the data footprint without limiting the amount of information available for use.

Data compression (real time and offline)
Data compression is a commonly used technique for reducing the size of data being stored or transmitted in order to improve network performance or reduce the amount of storage capacity needed. If you have used a traditional or TCP/IP-based telephone or cell phone, watched a DVD or HDTV, listened to an MP3, transferred data over the internet or used email, you have most likely relied on some form of compression technology that is transparent to you. Some forms of compression are time delayed, such as using PKZIP to zip files, while others are real time or on the fly, such as when using a network, a cell phone or listening to an MP3.

Two different approaches to data compression that vary in time delay or impact on application performance, along with the amount of compression and loss of data, are lossless (no data loss) and lossy (some data loss for a higher compression ratio). In addition to these approaches, there are also different implementations, including real time with no performance impact to applications and time delayed where there is a performance impact to applications.
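As a simple illustration of lossless compression (using Python's built-in zlib purely as a stand-in, not any particular storage product), the following sketch compresses and restores a block of repetitive, text-like sample data and reports the reduction ratio; already compressed media such as JPEG or MP3 would typically show little to no gain:

# Minimal sketch of lossless compression using Python's built-in zlib.
# Repetitive, text-like data compresses well; already-compressed media
# generally does not, which is why ratios vary by data type.
import zlib

original = b"account,balance,date\n" * 10_000   # repetitive, text-like sample data

compressed = zlib.compress(original, 6)          # level 6: CPU time vs ratio trade-off
restored = zlib.decompress(compressed)

assert restored == original                      # lossless: exact round trip
print(f"{len(original)} bytes -> {len(compressed)} bytes "
      f"({len(original) / len(compressed):.1f}:1 reduction)")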

In contrast to traditional ZIP or offline, time delayed compression approaches that require complete decompression of data prior to modification, online compression allows for reading from, or writing to, any location within a compressed file without full file decompression and resulting application or time delay. Real time appliance or target based compression capabilities are well suited for supporting online applications including databases, OLTP, email, home directories, web sites and video streaming among others without consuming host server CPU or memory resources or degrading storage system performance.

Note that with the increase of CPU server processing performance along with multiple cores, server based compression running in applications such as database, email, file systems or operating systems can be a viable option for some environments.

A scenario for using real-time data compression is time-sensitive applications that require large amounts of data, such as online databases, video and audio media servers, and web and analytic tools. For example, databases such as Oracle support NFS3 Direct IO (DIO) and Concurrent IO (CIO) capabilities to enable random and direct addressing of data within an NFS-based file. This differs from traditional NFS operations where a file would be sequentially read or written.

Another example of using real-time compression is to combine a NAS file server configured with 300GB or 600GB high performance 15.5K RPM Fibre Channel or SAS HDDs, in addition to flash-based SSDs, to boost the effective storage capacity of active data without introducing the performance bottleneck associated with using larger capacity HDDs. Of course, compression results will vary with the type of solution being deployed and the type of data being stored, just as dedupe ratios will differ depending on the algorithm along with whether the data is text, video or object based, among other factors.

Deduplication (Dedupe)
Data deduplication (also known as single instance storage, commonality factoring, data differencing or normalization) is a data footprint reduction technique that eliminates recurring copies of the same data. Deduplication works by normalizing the data being backed up or stored, eliminating recurring or duplicate copies of files or data blocks depending on the implementation.
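For those who want to see the basic mechanics, here is a simplified, hypothetical sketch of fixed-block deduplication in Python; real products use variable-size chunking, persistent indexes, garbage collection and other optimizations well beyond this:

# Simplified sketch of block-level deduplication: split data into fixed-size
# blocks, keep one copy per unique content hash, and store references for the
# rest. Purely illustrative, not any vendor's implementation.
import hashlib

BLOCK_SIZE = 4096

def dedupe(data: bytes):
    store = {}   # content hash -> unique block
    refs = []    # ordered list of hashes to reconstruct the original data
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # only the first copy of a block is kept
        refs.append(digest)
    return store, refs

data = b"A" * 4096 * 100 + b"B" * 4096 * 2   # highly repetitive sample data
store, refs = dedupe(data)
print(f"{len(refs)} blocks written, {len(store)} unique blocks stored")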

Some data deduplication solutions boast spectacular ratios for data reduction given specific scenarios, such as backup of repetitive and similar files, while providing little value over a broader range of applications.

This is in contrast with traditional data compression approaches that provide lower, yet more predictable and consistent data reduction ratios over more types of data and application, including online and primary storage scenarios. For example, in environments where there is little to no common or repetitive data files, data deduplication will have little to no impact while data compression generally will yield some amount of data footprint reduction across almost all types of data.

Some data deduplication solution providers have either already added, or have announced plans to add, compression techniques to complement and increase the data footprint reduction effectiveness of their solutions across a broader range of applications and storage scenarios, attesting to the value and importance of data compression in reducing data footprint.

When looking at deduplication solutions, determine if the solution is designed to scale in terms of performance, capacity and availability over a large amount of data, along with how restoration of data will be impacted by scaling for growth. Other items to consider include how data is deduplicated, such as in real time using inline processing or some form of time-delayed post processing, and the ability to select the mode of operation.

For example, a dedupe solution may be able to process data at a specific ingest rate inline until a certain threshold is hit and then processing reverts to post processing so as to not cause a performance degradation to the application writing data to the deduplication solution. The downside of post processing is that more storage is needed as a buffer. It can, however, also enable solutions to scale without becoming a bottleneck during data ingestion.

However, there is life beyond dedupe, which in no way diminishes dedupe or its very strong and bright future. Having talked with hundreds of IT professionals (e.g. the customers), I am increasingly convinced that only the surface is being scratched for dedupe, not to mention the larger data footprint impact opportunity seen in figure 5.


Figure 5: Dedupe adoption and deployment waves over time

While dedupe is a popular technology from a discussion standpoint and has good deployment traction, it is far from reaching mass customer adoption or even broad coverage in environments where it is being used. StorageIO research shows broadest adoption of dedupe centered around backup in smaller or SMB environments (dedupe deployment wave one in figure 5) with some deployment in Remote Office Branch Office (ROBO) work groups as well as departmental environments.

StorageIO research also shows that complete adoption in many of those SMB, ROBO, work group or smaller environments has yet to reach 100 percent. This means that there remains a large population that has yet to deploy dedupe as well as further opportunities to increase the level of dedupe deployment by those already doing so.

There has also been some early adoption in larger core IT environments where dedupe coexists with and complements existing data protection and preservation practices. Another current deployment scenario for dedupe has been supporting core-edge deployments in larger environments that provide backup and data protection for ROBO, work group and departmental systems.

Note that figure 5 simply shows the general types of environments in which dedupe is being adopted and not any sort of indicators as to the degree of deployment by a given customer or IT environment.

What to do about your expanding data footprint impact?
Develop an overall data footprint reduction strategy that leverages different techniques and technologies addressing online primary, secondary and offline data. Assess and discover what data exists and how it is used in order to effectively manage storage needs.

Determine policies and rules for retention and deletion of data combining archiving, compression (online and offline) and dedupe in a comprehensive data footprint strategy. The benefit of a broader, more holistic, data footprint reduction strategy is the ability to address the overall environment, including all applications that generate and use data as well as IRM or overhead functions that compound and impact the data footprint.

Data footprint reduction: life beyond (and complementing) dedupe
The good news is that the Drs. and Divas of dedupe marketing (the ones who are also good at the disco dedupe dance debates) have targeted backup as an initial market sweet (and success) spot, shown in figure 5, given the high degree of duplicate data.


Figure 6: Leverage multiple data footprint reduction techniques and technologies

However, that same good news is bad news in that there is now a stigma that dedupe is only for backup, similar to how archive was hijacked by the compliance marketing folks in the post-Y2K era. There are several techniques that can be used individually to address specific data footprint reduction issues, or in combination, as seen in figure 7, to implement a more cohesive and effective data footprint reduction strategy.


Figure 7: How various data footprint reduction techniques are complementary

What this means is that archive, dedupe and other forms of data footprint reduction can and should be used beyond where they have been target marketed, applying the applicable tool to the task at hand. For example, a common industry rule of thumb is that on average, ten percent of data changes per day (your mileage and rate of change will certainly vary given applications, environment and other factors).

Now assuming that you have 100TB (feel free to subtract a zero or two, or add as many as needed) of data (note I did not say storage capacity or percent utilized), ten percent change would be 10TB that needs to be backed up, replicated and so forth. Basic 2 to 1 streaming tape compression (2.5 to 1 in upcoming LTO enhancements) would reduce the daily backup footprint from 10TB to 5TB.

Using dedupe with 10 to 1 would get that 10TB down to 1TB, or about the size of a large capacity disk drive. With 20 to 1 that cuts the daily backup down to 500GB, and so forth. The net effect is that more daily backups can be stored in the same footprint, which in turn helps expedite individual file recovery by having more options to choose from on the disk-based cache, buffer or storage pool.

On the other hand, if your objective is to reduce and eliminate storage capacity, then the same amount of backups can be stored on less disk freeing up resources. Now take the savings times the number of days in your backup retention and you should see the numbers start to add up.
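Here is that arithmetic worked out in a short Python sketch; the ratios and the 30-day retention window are assumptions from the example above, not measured results:

# Worked version of the arithmetic above: 100TB of data, roughly 10 percent
# daily change, compared across tape compression and two dedupe ratios.

total_data_tb = 100
daily_change_tb = total_data_tb * 0.10      # ~10 percent change per day
retention_days = 30                         # assumed backup retention window

for label, ratio in [("2:1 tape compression", 2),
                     ("10:1 dedupe", 10),
                     ("20:1 dedupe", 20)]:
    per_day = daily_change_tb / ratio
    print(f"{label}: {per_day:.2f} TB/day, "
          f"{per_day * retention_days:.1f} TB over {retention_days} days")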

Now what about the other 90 percent of the data that may not have changed, or, that did change and exists on higher performance storage?

Can its footprint impact be reduced?

The answer should be perhaps, or it depends, which prompts the question of what tool would be best. As is often the case with industry buzzwords or technologies, there is a popular line of thinking to use it everywhere. After all, goes the thinking, if it is a good thing, why not use and deploy more of it everywhere?

Keep in mind that dedupe trades time (to perform analysis and apply intelligence to further reduce data) in exchange for space capacity. Thus trading time for space capacity can have a negative impact on applications that need lower response time and higher performance, where the focus is on rates vs ratios. For example, the other 90 to 100 percent of the data in the above example may have to be on a mix of high and medium performance storage to meet QoS or service level agreement (SLA) objectives. While it would be fun or perhaps cool to try to achieve a high data reduction ratio on the entire 100TB of active data with dedupe (e.g. trying to achieve primary dedupe), the performance impact could be negative.

The option is to apply a mix of different data footprint reduction techniques across the entire 100TB. That is, use dedupe where applicable and where higher reduction ratios can be achieved while balancing performance; use compression for streaming data to tape for retention or archive, as well as in databases or other application software, not to mention in networks. Likewise, use real-time compression, or what some refer to as primary dedupe, for online active changing data along with online static read-only data.

Deploy a comprehensive data footprint reduction strategy combining various techniques and technologies to address point solution needs as well as the overall environment, including online, near line for backup, and offline for archive data.

Let's not forget about archiving, thin provisioning, space saving snapshots, and commonsense data management among other techniques across the entire environment. In other words, if your focus is just on dedupe for backup to achieve an optimized and efficient storage environment, you are also missing out on a larger opportunity. However, this also means having multiple tools or technologies in your IT IRM toolbox as well as understanding what to use when, where and why.

Data transfer rates are a key metric for performance (time) optimization, such as meeting backup, restore or other data protection windows. Data reduction ratios are a key metric for capacity (space) optimization, where the focus is on storing as much data as possible in a given footprint.

Some additional take away points:

  • Develop a data footprint reduction strategy for online and offline data
  • Energy avoidance can be accomplished by powering down storage
  • Energy efficiency can be accomplished by using tiered storage to meet different needs
  • Measure and compare storage based on idle and active workload conditions
  • Storage efficiency metrics include IOPS or bandwidth per watt for active data
  • Storage capacity per watt per footprint and cost is a measure for inactive data
  • Small percentage reductions on a large scale have big benefits
  • Align the applicable form of virtualization for the given task at hand


Wrap up (for now, read part II here)

For some applications, reduction ratios are the important focus, so the tools or modes of operation that achieve those results take priority.

Likewise, for other applications where the focus is on performance with some data reduction benefit, tools are optimized for performance first and reduction second.

Thus I expect messaging from some vendors to adjust (expand) to the capabilities that they have in their toolbox (product portfolio) of offerings.

Consequently, IMHO some of the backup-centric dedupe solutions may find themselves in niche roles in the future unless they can diversify. Vendors with multiple data footprint reduction tools will also do better than those with only a single-function or narrowly focused tool.

However, for those who only have a single or perhaps a couple of tools, well, guess what the approach and messaging will be.

After all, if all you have is a hammer everything looks like a nail; if all you have is a screwdriver, well, you get the picture.

On the other hand, if you are still not clear on what all this means, send me a note, give me a call, post a comment or a tweet and I will be happy to discuss it with you.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Storage Efficiency and Optimization – The Other Green

For those of you in the New York City area, I will be presenting live in person at Storage Decisions September 23, 2009 conference The Other Green, Storage Efficiency and Optimization.

Throw out the "green" buzzword, and you're still left with the task of saving or maximizing use of space, power, and cooling while stretching available IT dollars to support growth and business sustainability. For some environments the solution may be consolidation, while others need to maintain quality of service response time, performance and availability, necessitating faster, energy-efficient technologies to achieve optimization objectives.

To address these and other related issues, you can turn to the cloud, virtualization, intelligent power management, data footprint reduction and data management, not to mention various types of tiered storage and performance optimization techniques. The session will look at various techniques and strategies to optimize either on-line active or primary as well as near-line or secondary storage environments during tough economic times, as well as to position for future growth; after all, there is no such thing as a data recession!

Topics, technologies and techniques that will be discussed include among others:

  • Energy efficiency (strategic) vs. energy avoidance (tactical), what's different between them
  • Optimization and the need for speed vs. the need for capacity, finding the right balance
  • Metrics & measurements for management insight, what the industry is doing (or not doing)
  • Tiered storage and tiered access including SSD, FC, SAS, tape, clouds and more
  • Data footprint reduction (archive, compress, dedupe) and thin provision among others
  • Best practices, financial incentives and what you can do today

This is a free event for IT professionals; however, space I hear is limited, so learn more and register here.

For those interested in broader IT data center and infrastructure optimization, check out the ongoing seminar series The Infrastructure Optimization and Planning Best Practices (V2.009) – Doing more with less without sacrificing storage, system or network capabilities, which continues September 22, 2009 with a stop in Chicago. This is also a free seminar; register and learn more here or here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved