VMware vSAN V6.6 Part II (just the speeds feeds features please)

server storage I/O trends


In case you missed it, VMware announced vSAN 6.6, its hyper-converged infrastructure (HCI) software-defined data infrastructure solution. This is the second of a five-part series about VMware vSAN 6.6. View Part I here, Part III (reducing cost and complexity) here, Part IV (scaling ROBO and data centers today) here, as well as Part V (VMware vSAN evolution, where to learn more and summary) here.

VMware vSAN 6.6
Image via VMware

For those who are not aware, vSAN is VMware's software-defined virtual storage area network, part of a software-defined data infrastructure (SDDI) and software-defined data center (SDDC). Besides being software-defined, vSAN is an HCI solution combining compute (server), I/O networking, and storage (space and I/O) along with hypervisors, management, and other tools.

Just the Speeds and Feeds Please

For those who just want to see the list of what’s new with vSAN V6.6, here you go:

  • Native encryption for data-at-rest
  • Compliance certifications
  • Resilient management independent of vCenter
  • Degraded Disk Handling v2.0 (DDHv2)
  • Smart repairs and enhanced rebalancing
  • Intelligent rebuilds using partial repairs
  • Certified file service & data protection solutions
  • Stretched clusters with local failure protection
  • Site affinity for stretched clusters
  • 1-click witness change for Stretched Cluster
  • vSAN Management Pack for vRealize
  • Enhanced vSAN SDK and PowerCLI
  • Simple networking with Unicast
  • vSAN Cloud Analytics with real-time support notification and recommendations
  • vSAN ConfigAssist with 1-click hardware lifecycle management
  • Extended vSAN Health Services
  • vSAN Easy Install with 1-click fixes
  • Up to 50% greater IOPS for all-flash with optimized checksum and dedupe
  • Support for new next-gen workloads
  • vSAN for Photon in Photon Platform 1.1
  • Day 0 support for latest flash technologies
  • Expanded caching tier choice
  • Docker Volume Driver 1.1

What’s New and Value Proposition of vSAN 6.6

Let’s take a closer look beyond the bullet list of what’s new with vSAN 6.6, as well as perspectives on how those features address different needs. The VMware vSAN proposition is to evolve and enable modernizing data infrastructures with HCI powered by vSphere along with vSAN.

Three main themes or characteristics (and benefits) of vSAN 6.6 include addressing (or enabling):

  • Reducing risk while scaling
  • Reducing cost and complexity
  • Scaling for today and tomorrow

VMware vSAN 6.6 summary
Image via VMware

Reducing risk while scaling

Reducing (or removing) risk while evolving your data infrastructure with HCI, including the flexibility of choosing among five supported hardware vendors along with native security. Other enhancements include availability and resiliency improvements (including intelligent rebuilds) without sacrificing storage efficiency (capacity), effectiveness (performance and productivity), management, or choice.

VMware vSAN DaRE
Image via VMware

Data at Rest Encryption (DaRE) of all vSAN data objects, enabled at the cluster level. The new functionality supports hybrid as well as all-flash SSD configurations and stretched clusters. The VMware vSAN DaRE implementation is an alternative to using self-encrypting drives (SEDs), reducing cost, complexity and management activity. All vSAN features, including data footprint reduction (DFR) features such as compression and deduplication, are supported. For security, vSAN DaRE integrates with compliant key management technologies, including those from SafeNet, HyTrust, Thales and Vormetric among others.

VMware vSAN management
Image via VMware

An ESXi HTML5-based host client, along with a CLI via ESXCLI, enables administering vSAN clusters as an alternative in case your vCenter server(s) are offline. Management capabilities include monitoring of critical health and status details along with configuration changes.

VMware vSAN health management
Image via VMware

Health monitoring enhancements include handling of degraded vSAN devices with intelligence that proactively detects impending device failures. As part of this functionality, if a replica of the failing (or possibly soon-to-fail) device exists, vSAN can take action to maintain data availability.

Where to Learn More

The following are additional resources to find out more about vSAN and related technologies.

What this all means

With each new release, vSAN increases its features, functionality and resiliency, closing the gap with traditional storage as well as non-CI and HCI solutions. Continue reading more about VMware vSAN 6.6 in Part I here, Part III (reducing cost and complexity) here, Part IV (scaling ROBO and data centers today) here, as well as Part V (VMware vSAN evolution, where to learn more and summary) here.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the Spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

VMware vSAN V6.6 Part III (reducing costs complexity)



In case you missed it, VMware announced vSAN 6.6, its hyper-converged infrastructure (HCI) software-defined data infrastructure solution. This is the third of a five-part series about VMware vSAN 6.6. View Part I here, Part II (just the speeds feeds please) here, Part IV (scaling ROBO and data centers today) here, as well as Part V (VMware vSAN evolution, where to learn more and summary) here.

VMware vSAN 6.6
Image via VMware

For those who are not aware, vSAN is VMware's software-defined virtual storage area network, part of a software-defined data infrastructure (SDDI) and software-defined data center (SDDC). Besides being software-defined, vSAN is an HCI solution combining compute (server), I/O networking, and storage (space and I/O) along with hypervisors, management, and other tools.

Reducing cost and complexity

Reducing your total cost of ownership (TCO) includes lowering capital expenditures (CapEx) and operating expenditures (OpEx). VMware is claiming a CapEx and OpEx reduced TCO of 50%. Keep in mind that solutions such as vSAN can also help drive return on investment (ROI) as well as return on innovation (the other ROI) via improved productivity, effectiveness, as well as efficiencies (savings). Another aspect of addressing TCO and ROI is the flexibility of leveraging stretched clusters to address HA, BC, BR and DR availability needs cost-effectively. These enhancements include efficiency (and effectiveness, e.g. productivity) at scale, proactive cloud analytics, and intelligent operations.
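As a back-of-the-envelope way to think about such TCO claims, the sketch below compares a simple CapEx-plus-OpEx model over a planning horizon. Note that all of the dollar figures are purely hypothetical assumptions for illustration (not VMware or actual vSAN pricing), and the 50% number is VMware's claim, not an independently measured result.

```python
# Hypothetical TCO comparison sketch; all dollar figures are made-up
# assumptions for illustration, not actual VMware or vSAN pricing.

def tco(capex, annual_opex, years):
    """Total cost of ownership = up-front CapEx plus OpEx over the period."""
    return capex + annual_opex * years

# Assumed (illustrative) numbers for a legacy SAN vs. an HCI deployment.
legacy = tco(capex=500_000, annual_opex=100_000, years=5)  # 1,000,000
hci    = tco(capex=300_000, annual_opex=40_000,  years=5)  #   500,000

savings_pct = 100 * (legacy - hci) / legacy
print(f"Legacy TCO: {legacy:,}  HCI TCO: {hci:,}  Savings: {savings_pct:.0f}%")
```

The point of the sketch is simply that both acquisition cost and multi-year operating cost have to be in the model before any TCO percentage is meaningful.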

VMware vSAN stretch cluster
Image via VMware

Low-cost (or cost-effective) local and remote resiliency and data protection with stretched clusters across sites. Upon a site failure, vSAN maintains availability by leveraging surviving-site redundancy. For performance and productivity effectiveness, I/O traffic is kept local where possible and practical, reducing cross-site network workload. Bear in mind that the best I/O is the one you do not have to do; the second best is the one with the least impact.

This means if you can address I/Os as close to the application as possible (e.g. locality of reference), that is a better I/O. On the other hand, when data is not local, then the best I/O is the one involving a local or remote site with the least overhead impact on applications, as well as on server storage I/O (including network) resources. Also keep in mind that with vSAN you can fine-tune availability, resiliency and data protection to meet various needs by adjusting the failure tolerance method (FTM) along with the number of failures to tolerate (FTT).
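To make the failures-to-tolerate trade-off concrete, here is a small sketch computing the minimum number of hosts (fault domains) a storage policy requires. The rules reflect vSAN's documented minimums (RAID-1 mirroring needs 2n+1 hosts for n failures; RAID-5 erasure coding needs 4 hosts, RAID-6 needs 6); the function itself is only an illustration, not a VMware API.

```python
def min_fault_domains(ftt, method="mirroring"):
    """Minimum fault domains (hosts) for a vSAN storage policy.

    mirroring (RAID-1): 2*FTT + 1 hosts (replicas plus witness)
    erasure coding:     RAID-5 needs 4 hosts (FTT=1),
                        RAID-6 needs 6 hosts (FTT=2)
    """
    if method == "mirroring":
        return 2 * ftt + 1
    if method == "erasure":
        if ftt == 1:
            return 4   # RAID-5: 3 data + 1 parity components
        if ftt == 2:
            return 6   # RAID-6: 4 data + 2 parity components
        raise ValueError("vSAN erasure coding supports FTT=1 or FTT=2")
    raise ValueError("unknown failure tolerance method")

for ftt in (1, 2, 3):
    print(f"FTT={ftt} mirroring -> {min_fault_domains(ftt)} hosts")
```

So tolerating one failure with mirroring takes 3 hosts, two failures takes 5, and so on; erasure coding trades extra hosts for less capacity overhead.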

server storage I/O locality of reference

Network- and cloud-friendly unicast communication enhancements. To improve performance, availability and capacity (reducing CPU demand), multicast communications are no longer used, making for simpler single-site and stretched-cluster configurations. When vSAN clusters are upgraded to 6.6, unicast is enabled.

VMware vSAN unicast
Image via VMware

Gaining insight and awareness, adding intelligence to avoid flying blind: introducing vSAN Cloud Analytics and proactive guidance. Part of the VMware customer experience improvement program, it leverages cloud-based health checks for easy online detection of known issues, along with relevant knowledge base articles as well as other support notices. Whether you choose to refer to this feature as advanced analytics, artificial intelligence (AI), or proactive rules-enabled problem isolation and resolution, I will leave that up to you.

VMware vSAN cloud analytics
Image via VMware

As part of the new tools' analytics capabilities and prescriptive problem resolution (hmm, some might call that AI or advanced analytics, just saying), health check issues are identified with notifications along with suggested remediation. Another feature is the ability to leverage continuous proactive updates for advance remediation vs. waiting for subsequent vSAN releases. The net result and benefit are reduced time and complexity when troubleshooting converged data infrastructure issues spanning servers, storage, I/O networking, hardware, software, cloud, and configuration. In other words, it enables you to spend more time being productive vs. finding and fixing problems, leveraging informed awareness for smart decision-making.

Where to Learn More

The following are additional resources to find out more about vSAN and related technologies.

What this all means

Continue reading more about VMware vSAN 6.6 in Part I here, Part II (just the speeds feeds please) here, Part IV (scaling ROBO and data centers today) here, as well as Part V (VMware vSAN evolution, where to learn more and summary) here.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

EMC is now Dell EMC, part of Dell Technologies and other server storage Updates



In case you missed it or did not hear, EMC is now Dell EMC and is future-ready (one of their new tag lines).

What this means is that EMC is no longer a publicly traded company instead now being privately held under the Dell Technologies umbrella. In case you did not know or had forgotten, one of the principal owners of Dell Technologies is Michael Dell aka the founder of Dell Computers which itself went private a few years ago. The Dell Server division which sells direct as well as via channels and OEMs is now part of the Dell EMC division (e.g. they sell Servers, Storage, I/O and Networking hardware, software and services).

Dell EMC Storage Portfolio
Dell EMC Storage Portfolio – Via emc.com

Other related news and activities include:

  • Dell EMC sells Content Division (e.g. Documentum (bought in 2003), InfoArchive and LEAP) to OpenText for $1.62B USD
  • Dell is selling its SonicWall and software division (e.g. what was a mix of Quest and other non-EMC related software) to a private equity group. The new company, to be called Quest, ironically has as one of its investors activist PE firm Elliott Management. You might recall Elliott Management was the activist investor pushing for more value out of EMC for shareholders.
  • Expands Data Protection Portfolio For VMware Environments
  • Hybrid Cloud Platform Enhancements
  • XtremIO New Features and Management for Virtualized Environments
  • Combines DSSD and PowerEdge Servers for SAS (Software) Analytics
  • ScaleIO Ready Node Offers All-Flash Software-Defined
  • Expands Microsoft Support across Cloud and Converged Infrastructure
  • With approximately 140,000 employees worldwide post-merger, Dell EMC has announced some expected layoffs.

Dell EMC Enhancements made today

  • Announced a new entry-level VMAX (200F) with very small physical footprint, affordable starter system price and flexibility to scale as you need to grow. Also announced were SRDF third site enhancements as well as VPLEX updates.
  • Data Domain enhancements including OS 6.0, flash and tiering across private, public and hybrid cloud
  • Unity mid-range storage (e.g. the successor to VNX) enhanced with all-flash and UnityOE software updates that include in-line compression along with cloud tiering. All-flash Unity models using 15.36TB SAS flash SSD drives (3D NAND) can support up to 384TB in a 2U rack. Cloud tiering includes support for Virtustream, AWS and Microsoft Azure.
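The 384TB-in-2U figure above checks out arithmetically: a 2U enclosure holding 25 of the 15.36TB SAS flash drives yields exactly that raw capacity. Note the 25-slot drive count is an assumption based on the typical 25-bay 2.5-inch 2U enclosure, not a number stated in the announcement.

```python
# Raw capacity check for an all-flash Unity 2U enclosure.
# Assumes a 25-slot 2.5" drive enclosure (typical for 2U form factor).
DRIVE_TB = 15.36      # 3D NAND SAS flash SSD capacity in TB
SLOTS_PER_2U = 25

raw_tb = DRIVE_TB * SLOTS_PER_2U
print(f"{SLOTS_PER_2U} x {DRIVE_TB}TB = {raw_tb:.0f}TB raw in 2U")
```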

Dell EMC VMAX storage family
Dell EMC VMAX family and new 200F – Via emc.com

Note that in-line compression on Unity and VMAX systems is available on all-flash based systems, while tiering is available on both all-flash as well as hybrid systems.

Where To Learn More

Dell Updates Storage Center Operating System 7 (SCOS 7)
EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I
Part II – EMC DSSD D5 Direct Attached Shared AFA
EMCworld 2016 Getting Started on Dell EMC announcements
EMCworld 2016 EMC Hybrid and Converged Clouds Your Way
Dell-EMC: The Storage Ramifications
VMware Targets Synergies in Dell-EMC Deal 
Dell to Buy EMC for $67B; Sharpen Focus on Large Enterprises and High-End Computing
Dell SAN strategy examined after move to go private
EMC VxRack Neutrino Nodes launched for OpenStack cloud storage
EMC Under Pressure To Spin Off VMware
EMC Bridges Cloud, On-Premise Storage With TwinStrata Buy
Top Ten Takeaways from EMC World
When to implement ultra-dense server storage
EMCworld 2015 How Do You Want Your Storage Wrapped?

What This All Means

For those who think (or wish) that now that EMC has gone private (granted, under Dell ownership) it has gone away and is no longer relevant, time will tell what happens long term. However, while EMC (now Dell EMC) is no longer a publicly held company, it is still very much in the public spotlight, addressing legacy, current as well as emerging IT data infrastructure along with software-defined data center, software-defined storage and related topics spanning cloud, virtual and container environments, among others.

What this all means is that Dell EMC is following through with providing different types of data infrastructure along with associated server, storage and I/O solutions, as well as associated software-defined storage management and data protection tools, to meet various needs. How do you want your storage wrapped? Do you want it software-defined such as ScaleIO, ECS (object), Data Domain (data protection), ViPR, or Unity among other virtual storage appliances (VSAs), or tin-wrapped as a physical storage system or appliance?

With the VMAX 200F, Dell EMC is showing that they can scale down the VMAX. Dell EMC is also showing they can scale VMAX up and out while making it affordable and physically practical for smaller environments who want, need or are required to have traditional enterprise-class storage in a small footprint (price, physical space) with enterprise resiliency.

Dell EMC Storage Portfolio
Dell EMC Storage Portfolio – Via emc.com

A question that comes up is what happens with the various competing Dell and EMC (pre-merger) storage product lines. If you look closely at the storage lineup photo above, you will notice the Dell SC series (e.g. Compellent) is shown along with all of the EMC solutions. This should or could prompt the question of what happens to the PS series (e.g. EqualLogic) or some of the MD series. So far the answer I have received is that they remain available for sale, which you can confirm via the Dell website. However, what the future will bring to those or others is still TBD.

Needless to say, there is more to see and hear coming out of Dell EMC in the weeks and months ahead; that is, unless as some predict (or wishfully think) they go away, which I don't see happening anytime soon. Oh, FWIW, Dell and EMC have been Server StorageIO clients, directly and indirectly via 3rd parties, in the past (that's a disclosure btw).

Ok, nuff said, for now…

Cheers
Gs

Greg Schulz – Microsoft MVP and VMware vSAN vExpert, Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

Is there an information or data recession? Are you using less storage? (With Polls)


StorageIO industry trends

Is there an information recession where you are creating, processing, moving or saving less data?

Are you using less data storage than in the past either locally online, offline or remote including via clouds?

IMHO there is no such thing as a data or information recession; granted, storage is being used more effectively by some, while economic pressures or competition require your budgets to be stretched further. Likewise, people and data are living longer and getting larger.

In conversations with IT professionals, particularly the real customers (e.g. not vendors, VARs, analysts, blogalysts, consultants or media), I routinely hear from people that they continue to have the need to store more information; however, their data storage usage and acquisition patterns are changing. For some this means using what they have more effectively, leveraging data footprint reduction (DFR), which includes archiving, compression, dedupe, thin provisioning, and changing how and when data is protected. This also means using different types of storage from flash SSD to HDD to SSHD to tape summit resources as well as cloud in different ways, spanning block, file and object storage, local and remote.
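The effect of those DFR techniques on how far purchased capacity stretches can be sketched as a simple ratio calculation. The reduction ratios below are illustrative assumptions; real-world ratios vary widely by data type and technique.

```python
# Effective capacity sketch: how data footprint reduction (DFR) stretches
# raw capacity. Ratios are illustrative assumptions, not measurements.

def effective_capacity_tb(raw_tb, reduction_ratio):
    """A 4:1 reduction ratio means 1TB of physical space holds 4TB of data."""
    return raw_tb * reduction_ratio

raw_tb = 100
for name, ratio in [("none", 1.0), ("compression", 2.0), ("dedupe+compression", 4.0)]:
    eff = effective_capacity_tb(raw_tb, ratio)
    print(f"{name:>20}: {raw_tb}TB raw -> {eff:.0f}TB effective")
```

This is the arithmetic behind the observation that the same budget, combined with DFR, buys more usable capacity even when raw capacity purchases stay flat.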

A common question that comes up, particularly around vendor earnings announcement times, is whether the data storage industry is in decline, with some vendors experiencing poor results.

Look beyond vendor revenue metrics

As background reading, you might want to check out this post here (IT and storage economics 101, supply and demand), which candidly should be common sense.

If all you looked at were a vendor's revenue or margin numbers as an indicator of how well the data storage industry (which includes traditional, legacy as well as cloud) is doing, you would not be getting the full picture.

What needs to be factored into the picture is how much storage is being shipped (from components such as drives to systems and appliances) as well as delivered by service providers.

Looking at storage systems vendors from a revenue earnings perspective, you would get mixed indicators depending on who you include, not to mention how those vendors report the breakout of revenues by product, or the number of units shipped. For example, looking at public vendors EMC, HDS, HP, IBM, NetApp, Nimble and Oracle (among others) as well as the private ones (if you can see the data) such as Dell, Pure, SimpliVity, SolidFire and Tintri results in different analysis. Some are doing better than others on revenues and margins; however, try to get clarity on the number of units or systems shipped (for actual revenue vs. loaners (planting seeds for future revenue or trials) or demos).

Then look at the service providers such as AWS, CenturyLink, Google, HP, IBM, Microsoft, Rackspace or Verizon (among others); you should see growth, however clarity about how much revenue plus margin they are actually generating for storage specifically vs. broad general buckets can be tricky.

Now look at the component suppliers such as Seagate and Western Digital (WD) for HDDs and SSHDs, who also provide flash SSDs and other technology. Also look at the other flash component suppliers such as Avago/LSI (whose flash business is being bought by Seagate), Fusion-io, SanDisk, Samsung, Micron and Intel among others (this does not include the systems vendors who OEM those or other products to build systems or appliances). These and other component suppliers can give another indicator as to the health of the industry, both from a revenue and margin perspective as well as footprint (e.g. how many devices are being shipped). For example, the legacy and startup storage systems and appliance vendors may have soft or lower revenue numbers; however, are they shipping the same amount of product or less? Likewise the cloud or service providers may be showing more revenue and product being acquired, however at what margin?

What this all means

Growing amounts of information?

Look at revenue numbers in the proper context as well as in the bigger picture.

If the same number of component devices (e.g. processors, HDD, SSD, SSHD, memory, etc) are being shipped or more, that is an indicator of continued or increased demand. Likewise if there is more competition and options for IT organizations there will be price competition between vendors as well as service providers.

All of this means that while IT organizations budgets stay stretched, their available dollars or euros should be able to buy (or rent) them more storage space capacity.

Likewise using various data and storage management techniques including DFR, the available space capacity can be stretched further.

So this then begs the question: if the management of storage is important, why are we not hearing vendors talk about software-defined storage management vs. chasing each other to out-software-define storage?

Ah, that’s for a different post ;).

So what say you?

Are you using less storage?

Do you have less data being created?

Are you using storage and your available budget more effectively?

Please take a few minutes and cast your vote (and see the results).

Sorry I have no Amex or Amazon gift cards or other things to offer you as a giveaway for participating as nobody is secretly sponsoring this poll or post, it’s simply sharing and conveying information for you and others to see and gain insight from.

Do you think that there is an information or data recession?

How about are you using or buying more storage, could there be a data storage recession?

Some more reading links

IT and storage economics 101, supply and demand
Green IT deferral blamed on economic recession might be result of green gap
Industry trend: People plus data are aging and living longer
Is There a Data and I/O Activity Recession?
Supporting IT growth demand during economic uncertain times
The Human Face of Big Data, a Book Review
Garbage data in, garbage information out, big data or big garbage?
Little data, big data and very big data (VBD) or big BS?

Ok, nuff said (for now)

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II – Until the focus expands to data protection – What to do about it

Storage I/O trends


This is the second of a three-part series (read part I here) about how vendors are keeping backup alive, and what they can and should do to shift and expand the conversation to data protection and related themes.

Modernizing data protection and what to do about it

Building off of what was mentioned in the first post, let's take a look at what can be done, including expanding the conversation around data protection in support of business continuance (BC), disaster recovery (DR), high availability (HA) and business resiliency (BR), not to mention helping backup to actually retire (someday). Now when I mention backup retiring, I'm not necessarily talking about retiring a technology such as hardware, software or a service including clouds, rather rethinking when, where, why and how data gets protected. What I mean by this is to step back from looking at the tools and technologies to how they are used and can be used in new and different ways moving forward.

StorageIO people convergence
Converged people and technology teams

All too often I see new technologies or tools used in old ways, which, while providing some near-term relief, means the full capabilities of what is being used may not be realized. This also ties into the theme that people, not technologies, can be a barrier to convergence and transformation, which you can read more about here and here.

What's your data protection strategy, business or technology focused?

expand focus beyond tools
Data protection strategy evolving beyond tools looking for a problem to solve

Part of modernizing data protection is getting back to the roots or fundamentals, including revisiting business needs and requirements along with applicable threat risks, to then align applicable tools, technologies and techniques. This means expanding the focus beyond just the technology to, more importantly, how to use different tools for various scenarios. In other words, having a tool-box and knowing how to use it vs. everything looking like a nail because all you have is a hammer. Check out various webinars, Google+ hangouts and other live events that I'm involved with on the StorageIO.com events page on data protection and related data infrastructure themes, including BackupU (getting back to the basics and fundamentals).

data protection options

Everything is not the same, leverage different data protection approaches to different situations

Wrap up (for now)

Continue reading part three of this series here to see what can be done (taking action) about shifting the conversation toward modernizing data protection. Also check out conversations about trends, themes, technologies, techniques and perspectives in my ongoing data protection diaries discussions (e.g. www.storageioblog.com/data-protection-diaries-main/).

Ok, nuff said

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

 

Data Protection Diaries – My data protection needs and wants



Update 1/10/18

Rather than talking about what others should do or consider for their data protection needs, for this post I wrote down some notes using my Livescribe about what I need and want for my environment. As part of walking the talk, in future posts I'm going to expand a bit more on what I'm doing as well as considering for enhancements to my environment for data protection, which consists of cloud, virtual and physical resources.

Why and what am I Protecting?

live scribe example
Livescribe notes that I used for creating the following content

What is my environment

Server StorageIO (aka StorageIO) is a small business focused in and around data infrastructures, which includes data protection. As a result, I have lots of data including videos, audio, images, presentations, reports and research, as well as file serving and back-office applications. Then there are websites, blog, email and related applications, some of which are cloud-based, that are also part of my environment and have different availability, durability and accessibility requirements.

My environment includes local on-site physical as well as virtual systems, mobile devices, as well as off-site resources including a dedicated private server (DPS) at a service provider. On one hand as a small business, I could easily move most if not everything into the cloud using an as a service model. However, I also have a lab and research environment for doing various things involving data infrastructure including data protection so why not leverage those for other things.

Why do I need to protect my information and data infrastructure?

  • Protect and preserve the business along with associated information as well as assets
  • Compliance (self and client based, PCI and other)
  • Security (logical and physical) and privacy to guard against theft, loss and intrusions
  • Logical (corruption, virus, accidental deletion) and physical damage to systems, devices, applications and data
  • Isolate and contain faults of hardware, software, networks, people actions from spreading to disasters
  • Guard against on-site or off-site incidents, acts of man or nature, head-line news and non head-line news
  • Address previous experience, incidents and situations, preventing future issues or problems
  • Support growth while enabling agility and flexibility
  • Walk the talk, research, learning increasing experience

My wants – What I would like to have

  • Somebody else pay for it all, or exist in world where there are no threat risks to information (yeh right ;) )
  • Cost effective and value (not necessarily the cheapest, I also want it to work)
  • High availability and durability to protect against different threat risks (including myself)
  • Automated, magically to take care of everything enabled by unicorns and pixie dust ;).

My requirements – What I need (vs. want):

  • Support mix of physical, virtual and cloud applications, systems and data
  • Different applications and data, local and some that are mobile
  • Various operating environments including Windows and Linux
  • NOT have to change my environment to meet limits of a particular solution or approach
  • Need solution(s) that fit my needs and that can scale, evolve, as well as enable change when my environment does
  • Also leverage what I have while supporting new things

Data protection topics, trends, technologies and related themes

Wrap and summary (for now)

Taking a step back to look at a high level of what my data protection needs are involves looking at business requirements along with various threat risks, not to mention technical considerations. In a future post I will outline what I am doing as well as considering for enhancements or other changes, along with different tools and technologies used in hybrid ways. Watch for more posts in this ongoing series of the data protection diaries via www.storageioblog.com/data-protection-diaries-main/.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II: EMC announces XtremIO General Availability, speeds and feeds

Storage I/O trends

XtremIO flash SSD more than storage I/O speed

Following up part I of this two-part series, here are more details, insights and perspectives about EMC XtremIO and its general availability that was announced today.

XtremIO the basics

  • All flash Solid State Device (SSD) based solution
  • Cluster of up to four X-Brick nodes today
  • X-Bricks available in 10TB increments today, 20TB in January 2014
  • 25 eMLC SSD drives per X-Brick with redundant dual processor controllers
  • Provides server-side iSCSI and Fibre Channel block attachment
  • Integrated data footprint reduction (DFR) including global dedupe and thin provisioning
  • Designed for extending duty cycle, minimizing wear of SSD
  • Removes need for dedicated hot spare drives
  • Capable of sustained performance and availability with multiple drive failures
  • Only unique data blocks are saved, others tracked via in-memory meta data pointers
  • Reduces overhead of data protection vs. traditional small RAID 5 or RAID 6 configurations
  • Eliminates the performance impact of back-end functions on applications
  • Deterministic storage I/O performance (IOPs, latency, bandwidth) over life of system

When would you use XtremIO vs. another storage system?

If you need all enterprise-like data services including thin provisioning, dedupe and resiliency with deterministic performance on an all-flash system with raw capacity from 10-40TB (today), then XtremIO could be a good fit. On the other hand, if you need a mix of SSD based storage I/O performance (IOPS, latency or bandwidth) along with some HDD based space capacity, then a hybrid or traditional storage system could be the solution. Then there are hybrid scenarios where a hybrid storage system, array or appliance (mix of SSD and HDD) is used for most of the applications and data, with an XtremIO handling the more demanding tasks.

How does XtremIO compare to others?

EMC with XtremIO is taking a different approach than some of their competitors, whose model is to compare their faster flash-based solutions vs. traditional mid-market and enterprise arrays, appliances or storage systems on a storage I/O IOP performance basis. With XtremIO there is improved performance measured in IOPs or database transactions among other metrics that matter. However there is also an emphasis on consistent, predictable quality of service (QoS), or what is known as deterministic storage I/O performance. This means both higher IOPs and lower latency while doing normal workloads along with background data services (snapshots, data footprint reduction, etc.).

Some of the competitors focus on how many IOPs or how much work they can do, however without context or showing the impact to applications when background tasks or other data services are in use. Other differences include how cluster nodes are interconnected (for scale-out solutions), such as use of Ethernet and IP-based networks vs. dedicated InfiniBand or PCIe fabrics. Host server attachment will also differ, as some offer only iSCSI or Fibre Channel block, or NAS file, or a mix of different protocols and interfaces.

An industry trend however is to expand beyond the flash SSD need for speed focus by adding context along with QoS, deterministic behavior and addition of data services including snapshots, local and remote replication, multi-tenancy, metering and metrics, security among other items.

Storage I/O trends

Who or what are XtremIO competition?

To some degree, vendors who only have PCIe flash SSD cards might place themselves as the alternative to all-SSD or hybrid mixed SSD and HDD based solutions. FusionIO used to take that approach until they acquired NexGen (a storage system) and now take a broader, more solution-balanced approach of using the applicable tool for the task or application at hand.

Other competitors include the all-SSD storage array, system or appliance vendors, a list spanning legacy as well as startup vendors that includes among others IBM (who bought TMS, now FlashSystems), NetApp (EF540), SolidFire, Pure, Violin (who did a recent IPO) and Whiptail (bought by Cisco). Then there are the hybrids, a long list including CloudByte (software), Dell, EMC's other products, HDS, HP, IBM, NetApp, Nexenta (software), Nimble, Nutanix, Oracle, SimpliVity and Tintri among others.

What’s new with this XtremIO announcement

10TB X-Bricks enable 10 to 40TB (physical space capacity) per cluster (available on 11/19/13). 20TB X-Bricks (larger capacity drives) will double the space capacity in January 2014. If you are doing the math, that means configurations ranging from a single brick (dual controller) system up to four bricks (nodes, each with dual controllers). Common across all system configurations are data features such as thin provisioning, inline data footprint reduction (e.g. dedupe) and XtremIO Data Protection (XDP).

What does XtremIO look like?

XtremIO consists of up to four nodes (today) based on what EMC calls X-Bricks.
EMC XtremIO X-Brick
25 SSD drive X-Brick

Each 4U X-Brick has 25 eMLC SSD drives in a standard EMC 2U DAE (disk enclosure) like those used with the VNX and VMAX for SSD and Hard Disk Drives (HDD). In addition to the 2U drive shelf, there are a pair of 1U storage processors (e.g. controllers) that provide redundancy and shared access to the storage shelf.

XtremIO Architecture
XtremIO X-Brick block diagram

XtremIO storage processors (controllers) and drive shelf block diagram. Each X-Brick and its storage processors or controllers communicate with each other and with other X-Bricks via a dedicated InfiniBand fabric using Remote Direct Memory Access (RDMA) for memory to memory data transfers. The controllers or storage processors (two per X-Brick) each have dual processors with eight cores for compute, along with 256GB of DRAM. Part of each controller's DRAM is set aside as a mirror for its partner or peer and vice versa, with access being over the InfiniBand fabric.

XtremIO fabric
XtremIO X-Brick four node fabric cluster or instance

How XtremIO works

Servers access XtremIO X-Bricks using iSCSI and Fibre Channel for block access. A responding X-Brick node handles the storage I/O request and, in the case of a write, updates the other nodes. For a write, the handling node or controller (aka storage processor) checks its meta data map in memory to see if the data is new and unique. If so, the data gets saved to SSD with meta data information updated across all nodes. Note that data gets ingested and chunked or sharded into 4KB blocks. So for example if a 32KB storage I/O request from the server arrives, it is broken (e.g. chunked or sharded) into eight 4KB pieces, each with a mathematically unique fingerprint created. This fingerprint is compared to what is known in the in-memory meta data tables (this is a hexadecimal number compare, so a quick operation). Based on the comparison, if the data is unique it is saved and pointers are created; if it already exists, the pointers are updated.

In addition to determining whether data is unique, the fingerprint is also used to generate a balanced data dispersal plan across the nodes and SSD devices. Thus there is the benefit of reducing duplicate data during ingestion while also reducing back-end I/Os within the XtremIO storage system. Another byproduct is the reduction in time spent on garbage collection or other background tasks commonly associated with SSD and other storage systems.
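The chunk, fingerprint, compare and place flow described above can be sketched in a few lines of Python. This is a simplified illustration, not EMC's implementation: sha256 stands in for whatever fingerprint function XIOS actually uses, and the in-memory table and placement rule are hypothetical.

```python
import hashlib

CHUNK_SIZE = 4096  # data is ingested and chunked into 4KB blocks

# hypothetical in-memory meta data table: fingerprint -> (node, refcount)
metadata = {}

def ingest(io_buffer, num_nodes=4):
    """Chunk an incoming write, fingerprint each chunk, and dedupe."""
    stats = {"unique": 0, "dupe": 0}
    for offset in range(0, len(io_buffer), CHUNK_SIZE):
        chunk = io_buffer[offset:offset + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint in metadata:
            node, refs = metadata[fingerprint]
            metadata[fingerprint] = (node, refs + 1)  # only update pointers
            stats["dupe"] += 1
        else:
            # the fingerprint also drives balanced placement across nodes
            node = int(fingerprint, 16) % num_nodes
            metadata[fingerprint] = (node, 1)
            stats["unique"] += 1
    return stats

# a 32KB write arrives: eight 4KB chunks, with only two distinct patterns
buf = bytes(4096) * 4 + b"\x01" * (4096 * 4)
print(ingest(buf))  # → {'unique': 2, 'dupe': 6}
```

Only two chunks reach the back-end SSDs; the other six become pointer updates, which is the ingestion-time dedupe and reduced back-end I/O the text describes.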

Meta data is kept in memory with a persistent copy written to a reserved area on the flash SSD drives (think of it as a vault area) to keep system state and consistency. In between data consistency points the meta data is kept in a log journal, similar to how a database handles log writes. What's different from a typical database is that the XtremIO XIOS platform software does these consistency point writes for persistence on a granularity of seconds vs. hours or minutes.
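The journal-plus-consistency-point idea can be sketched as follows. The class and all names here are hypothetical (the real XIOS mechanics are not public): updates land in memory and in a log-style journal, and every few seconds a consistency point persists everything to a reserved vault area.

```python
import time

class MetaJournal:
    """Sketch: meta data lives in memory, changes land in a log-style
    journal, and a consistency point persists state to a reserved flash
    vault area on a granularity of seconds rather than the minutes or
    hours typical of database checkpoints."""

    def __init__(self, interval_seconds=2.0):
        self.table = {}       # in-memory meta data (DRAM)
        self.journal = []     # updates since the last consistency point
        self.vault = {}       # persistent copy (reserved SSD vault area)
        self.interval = interval_seconds
        self.last_cp = time.monotonic()

    def update(self, fingerprint, location):
        self.table[fingerprint] = location
        self.journal.append((fingerprint, location))
        # consistency points happen every few seconds
        if time.monotonic() - self.last_cp >= self.interval:
            self.consistency_point()

    def consistency_point(self):
        self.vault.update(self.table)   # persist state to flash
        self.journal.clear()            # journal can now be truncated
        self.last_cp = time.monotonic()
```

After a crash, recovery would replay the journal on top of the last vault copy, which is why only seconds of journal ever need replaying here.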

Storage I/O trends

What about rumor that XtremIO can only do 4KB IOPs?

Does this mean that the smallest storage I/O or IOP that XtremIO can do is 4KB?

That is a rumor or some fud I have heard floated by a competitor (or two or three) that assumes that if only a 4KB internal chunk or shard is being used for processing, there must be no IOPs smaller than 4KB from a server.

XtremIO can do storage I/O IOP sizes of 512 bytes (e.g. the standard block size) as other systems do. Note that the standard server storage I/O block or I/O size is 512 bytes or multiples of that, unless the new 4KB Advanced Format (AF) block size is being used, which based on my earlier conversations with EMC was not yet supported. (Updated 11/15/13: EMC has indicated that host (front-end) 4K AF support, along with 512 byte emulation modes, is available now with XIOS.) Also keep in mind that since XtremIO XIOS internally works with 4KB chunks or shards, that is a stepping stone for eventually leveraging back-end AF drive support should EMC decide to do so (Updated 11/15/13: waiting for confirmation from EMC about whether back-end AF support is now enabled; will give more clarity as it is received).
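To see why a 4KB internal chunk does not dictate a 4KB minimum host I/O, here is a small illustrative sketch (the helper is an assumption for illustration, not anything from XIOS) mapping 512-byte-sector I/Os onto the 4KB chunks they touch:

```python
SECTOR = 512          # standard logical block size
CHUNK = 4096          # 4KB internal chunk (same size as an AF sector)

def sectors_to_chunks(lba, sector_count):
    """Return the list of 4KB chunk indexes a 512-byte-sector I/O touches."""
    start = (lba * SECTOR) // CHUNK
    end = ((lba + sector_count) * SECTOR - 1) // CHUNK
    return list(range(start, end + 1))

# a single 512-byte IOP still works: it simply lands inside one 4KB chunk
print(sectors_to_chunks(lba=9, sector_count=1))    # → [1]
# an unaligned 8-sector (4KB) I/O can straddle two internal chunks
print(sectors_to_chunks(lba=7, sector_count=8))    # → [0, 1]
```

The host sees 512-byte granularity either way; the chunk size only determines how the system internally fingerprints and places data.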

What else is EMC doing with XtremIO?

  • VCE Vblock systems with XtremIO for SAP HANA and other in-memory databases, along with VDI optimized solutions.
  • VPLEX and XtremIO for extended distance local, metro and wide area HA, BC and DR.
  • EMC PowerPath XtremIO storage I/O path optimization and resiliency.
  • Secure Remote Support (aka phone home) and auto support integration.

Boosting your available software license minutes (ASLM) with SSD

Another use of SSD has been the opportunity to make better use of servers, stretching their usefulness or delaying the purchase of new ones by improving their effective ability to do more work. In the past this technique of using SSDs to delay a server or CPU upgrade was applied when hardware was more expensive, or during the dot-com bubble to fill surge demand gaps. It has the added benefit of stretching database and other expensive software licenses to go further or do more work. The less time servers spend waiting for IOPs, the more time for doing useful work and getting value from the software license. On the other hand, time spent waiting is lost available software minutes, which is cost overhead.

Think of available software license minutes (ASLM) as the minutes during which your software is doing useful work and providing value. On the other hand, if those minutes are not used for useful work (e.g. lost to CPU, server or I/O wait), they are gone. This is like the airline available seat miles (ASM) metric: a seat left empty is a lost opportunity, while a seat used delivers value, not to mention when yield management is applied to price that seat differently. To make up for that loss, many organizations have to add extra servers and thus more software licensing costs.
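As a rough sketch of the ASLM idea (the function and the wait percentages are hypothetical illustrations, not a standard industry metric):

```python
def aslm_utilization(total_minutes, io_wait_minutes):
    """Fraction of paid-for license minutes spent doing useful work.
    Minutes lost to I/O wait are like empty airline seats: paid for,
    producing nothing."""
    useful = total_minutes - io_wait_minutes
    return useful / total_minutes

# hypothetical database server licensed 24x7 for a 30-day month
TOTAL = 30 * 24 * 60  # 43,200 minutes
hdd_based = aslm_utilization(TOTAL, io_wait_minutes=17_280)  # 40% I/O wait
ssd_based = aslm_utilization(TOTAL, io_wait_minutes=4_320)   # 10% I/O wait
print(f"HDD: {hdd_based:.0%} of license minutes doing work")  # 60%
print(f"SSD: {ssd_based:.0%} of license minutes doing work")  # 90%
```

Under these assumed numbers, cutting I/O wait from 40% to 10% is the difference between needing extra licensed servers and not.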

Storage I/O trends

Can we get a side of context with them metrics?

EMC, along with some other vendors, is starting to give more context with their storage I/O performance metrics that matter, rather than simple IOPs or hero marketing metrics. However context extends beyond performance to availability and space capacity, which means data protection overhead. As an example, EMC claims 25% for RAID 5 and 20% for RAID 6, or 30% for a RAID 5/RAID 6 combo, where a 25 drive (SSD) XDP has an 8% overhead. However this assumes a 4+1 (5 drive) RAID 5, not an apples to apples comparison on a space overhead basis. For example a 25 drive RAID 5 (24+1) would have around a 4% parity protection space overhead, or a RAID 6 (23+2) about 8%.
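The apples-to-apples point reduces to simple arithmetic: parity space overhead is parity drives divided by total drives in the group. A quick sketch of the configurations mentioned:

```python
def parity_overhead(data_drives, parity_drives):
    """Space overhead of parity protection as a fraction of total drives."""
    return parity_drives / (data_drives + parity_drives)

# small RAID groups vs. a 25-drive group of the same drive count as XDP
print(f"RAID 5 4+1:  {parity_overhead(4, 1):.0%}")   # 20%
print(f"RAID 5 24+1: {parity_overhead(24, 1):.0%}")  # 4%
print(f"RAID 6 23+2: {parity_overhead(23, 2):.0%}")  # 8%
```

Which is exactly why comparing a 25-drive XDP against a 5-drive RAID 5 group inflates the apparent advantage: widen the RAID group to the same 25 drives and the space overheads converge.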

Granted, while the space protection overhead might be more apples to apples in the earlier examples vs. XDP, there are other differences. For example, solutions such as XDP can be more tolerant of multiple drive failures, with faster rebuilds than some of the standard or basic RAID implementations. Thus more context and clarity would be helpful.

StorageIO would like to see vendors, including EMC along with the startups, who give data protection space overhead comparisons without context provide that context (and applauds those who do). This means providing context for data protection space overhead comparisons similar to the performance metrics that matter, for example simply stating with an asterisk or footnote that a 4+1 RAID 5 is being compared vs. a 25 drive erasure code, forward error correction, dispersal, XDP or wide stripe RAID for that matter (e.g. can we get a side of context). Note this is in no way unique to EMC and in fact is quite common with many of the smaller startups as well as established vendors.

General comments

My laundry list of items, which for now are nice-to-haves however for you might be need-to-haves, would include native replication (today it leverages RecoverPoint), Advanced Format (4KB) support for servers (Updated 11/15/13: per above, EMC has confirmed that host/server-side (front-end) AF along with 512 byte emulation modes exist today) as well as for SSD based drives, DIF (Data Integrity Feature), and Microsoft ODX among others. While 12Gb SAS server to X-Brick attachment for small in-the-cabinet connectivity might be nice for some, more practical on a go-forward basis would be 40GbE support.

Now let us see what EMC does with XtremIO and how it competes in the market. One indicator to watch in the industry and market of the impact or presence of EMC XtremIO is the amount of fud and mud that will be tossed around. Perhaps time to make a big bowl of popcorn, sit back and enjoy the show…

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Garbage data in, garbage information out, big data or big garbage?

StorageIO industry trends cloud, virtualization and big data

Do you know the computer technology saying, garbage data in results in garbage information out?

In other words even with the best algorithms and hardware, bad, junk or garbage data put in results in garbage information delivered. Of course, you might have data analysis and cleaning software to look for, find and remove bad or garbage data, however that’s for a different post on another day.

If garbage data in results in garbage information out, does garbage big data in result in big garbage out?

I’m sure my sales and marketing friends or their surrogates will jump at the opportunity to tell me why and how big data is the solution to the decades old garbage data in problem.

Likewise they will probably tell me big data is the solution to problems that have not even occurred or been discovered yet, yeah right.

However garbage data does not discriminate or show preference towards big data or little data, in fact it can infiltrate all types of data and systems.

Let's shift gears from big and little data to how all of that information is protected, backed up, replicated and copied for HA, BC, DR, compliance, regulatory or other reasons. I wonder how much garbage data is really out there and how many garbage backups, snapshots, replicas or other copies of data exist? Sounds like a good reason to modernize data protection.

If we don't know where the garbage data is, how can we know if there is a garbage copy of the data for protection on some other tape, disk or cloud? That also means plenty of garbage data to compact (e.g. compress and dedupe) to cut its data footprint impact, particularly in tough economic times.

Does this mean then that the cloud is the new destination for garbage data in different shapes or forms, from online primary to back up and archive?

Does that then make the cloud the new virtual garbage dump for big and little data?

Hmm, I think I need to empty my desktop trash bin and email deleted items among other digital housekeeping chores now.

On the other hand, I just had a thought about orphaned data and orphaned storage, however let's let those sleeping dogs lie where they rest for now.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

More modernizing data protection, virtualization and clouds with certainty

This is a follow-up to a recent post about modernizing data protection and doing more than simply swapping out media or mediums like flat tires on a car as well as part of the Quantum protecting data with certainty event series.

As part of a recent 15 city event series sponsored by Quantum (that was a disclosure btw ;) ) titled Virtualization, Cloud and the New Realities for Data Protection, with a theme of strategies and technologies that will help you adapt to a changing IT environment, I was asked to present a keynote at the events on modernizing data protection for cloud, virtual and legacy environments (see earlier and related posts here and here).

Quantum data protection with certainty

Since late June (taking July and most of August off) and wrapping up last week, the event series has traveled to Boston, Chicago, Palo Alto, Houston, New York City, Cleveland, Raleigh, Atlanta, Washington DC, San Diego, Los Angeles, Mohegan Sun CT, St. Louis, Portland Oregon and King of Prussia (Philadelphia area).

The following are a series of posts via IT Knowledge Exchange (ITKE) that covered these events including commentary and perspectives from myself and others.

Data protection in the cloud, summary of the events
Practical solutions for data protection challenges
Big data’s new and old realities
Can you afford to gamble on data protection
Conversations in and around modernizing data protection
Can you afford not to use cloud based data protection

In addition to the themes in the above links, here are some more images, thoughts and perspectives from while being out and about at these and other events.

Datalink does your data center suck sign
While I was traveling I saw this advertisement sign from Datalink (a Quantum partner that participated in some of the events) in a few different airports, which is a variation of the Datadomain tape sucks attention getter. For those not familiar, that creature on the right is an oversized mosquito, with the company logos on the lower left being Datalink, NetApp, Cisco and VMware.

goddess of data fertility
When in Atlanta for one of the events at the Morton's in the SunTrust plaza, the above sculpture was in the lobby. Its real title is the Goddess of Fertility, however I'm going to refer to it as the goddess of data fertility; after all, there is no such thing as a data or information recession.

The world and storageio runs on dunkin donuts
Traveling while out and about is like a lot of things, particularly IT and data infrastructure related: hurry up and wait. Not only does America Run on Dunkin, so too does StorageIO.

Use your imagination
When out and about, sometimes instead of looking up, or around, take a moment and look down and see what is under your feet, then let your imagination go for a moment about what it means. Ok, nuff of that, drink your coffee and let’s get back to things shall we.

Delta 757 and PW2037 or PW2040
Just like virtualization and clouds, airplanes need physical engines to power them, which have to be energy-efficient and effective. This means being very reliable, with good performance and fuel efficiency (e.g. a 757 on a 1,500 mile trip, if full, can be in the neighborhood of 65-plus miles per gallon per passenger) along with low latency (e.g. a fast trip). In this case, a Pratt and Whitney PW2037 (could be a PW2040 as Delta has a few of them) on a Delta 757 is seen powering this flight as it climbs out of LAX on a Friday morning after one of the event series sessions the evening before in LA.

Ambulance waiting at casino
Not sure what to make of this image, however it was taken while walking into the Mohegan Sun casino where we did one of the dinner events at the Michael Jordan restaurant.

David Chapa of Quantum in bank vault
Here is an image from one of the events in this series, at a restaurant in Cleveland where the vault is a dining room. No, that is not a banker, well, perhaps a data protection banker; it is the one and only David Chapa (@davidchapa), aka the Chief Technology Evangelist (CTE) of Quantum, check out his blog here.

Just before landing in portland
Nice view just before landing in Portland Oregon, where that evening's topic was, as you might have guessed, data protection modernization, clouds and virtualization. Don't be scared, be ready: learn and identify concerns so you can overcome them and have certainty with data protection in cloud, virtual and physical environments.
Teamwork
Cloud, virtualization and data protection modernization is a shared responsibility requiring teamwork and cooperation between the service or solution provider and the user or consumer. If the customer or consumer of a service is using the right tools, technologies and best practices, and has done their homework for applicable levels of service with SLAs and SLOs, then a service provider with good capabilities should be in harmony with them. Of course having the right technologies and tools for the task at hand is also important.
Underground hallway connecting LAX terminals, path to the clouds
Moving your data to the cloud or a virtualized environment should not feel like a walk down a long hallway; assuming you have done your homework and the service is safe, secure and well taken care of, there should be fewer concerns. Now if that is a dark, dirty, dingy, dilapidated dungeon-like hallway, then you just might be on the highway to hell vs. the stairway to heaven or clouds ;).

clouds along california coastline
There continues to be barriers to cloud adoption and deployment for data protection among other users.

Unlike the mountain ranges inland from the LA area coastline causing a barrier for the marine layer clouds rolling further inland, many IT related barriers can be overcome. The key to overcoming cloud concerns and barriers is identifying and understanding what they are so that resolutions, solutions, best practices, tools or work around’s can be developed or put into place.

The world and storageio runs on dunkin donuts
Hmm, breakfast of champions and road warriors, Dunkin Donuts aka DD, not to be confused with DDUP the former ticker symbol of Datadomain.

Tiered coffee
In the spirit of not treating everything the same, have different technology or tools to meet various needs or requirements, it only makes sense that there are various hot beverage options including hot water for tea, regular and decaffeinated coffee. Hmm, tiered hot beverages?


On the lighter side, things including technology of all types will and do break, even with maintenance, so having a standby plan or a support service to call can come in handy. In this case the vehicle on the right did not hit the garage door that came off its tracks due to wear and tear as I was preparing to leave for one of the data protection events. Note to self: consider going from bi-annual garage door preventive maintenance to an annual service check-up.

Some salesman talking on phone in a quiet zone

While not part of or pertaining to data protection, clouds, virtualization, storage or data infrastructure topics, the above photo was taken in a quiet section of an airport lounge while waiting for a flight to one of the events. This falls into the "a picture is worth a thousand words" category, as the sign just to the left of the sales person talking loudly on his cell phone about his big successful customer call says Quiet Zone, with a symbol of no cell phone conversations.

How do I know the guy was not talking about clouds, virtualization, data infrastructure or storage related topics? Simple: his conversation was so loud that I and everybody else in the lounge could hear the details of the customer call as it was being relayed back to sales management.

Note to those involved in sales or customer related topics, be careful of your conversations in public and pseudo public places including airports, airport lounges, airplanes, trains, planes, hotel lobbies and other places, you never know who you will be broadcasting to.

Here is a link to a summary of the events along with common questions, thoughts and perspectives.

Quantum data protection with certainty

Thanks to everyone who participated in the events including attendees, as well as Quantum and their partners for sponsoring this event series, look forward to see you while out and about at some future event or venue.

Ok, nuff said.

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

EPA Energy Star for data center storage draft 3 specification

US EPA Energy Star for Data Center Storage
Uncle SAM wants you to be energy efficient and effective with optimized data center storage

The U.S. EPA is ready to release DRAFT 3 of the Energy Star for data center storage specification and has an upcoming web session that you can sign up for if you are not on their contact list of interested stakeholders. If you are not familiar with the EPA Energy Star for data center storage program, here is some background information.

Thus if you are interested, see the email and information below, sign up and take part if so inclined, as opposed to saying later that you did not have a chance to comment.

Dear ENERGY STAR® Data Center Storage Manufacturer or Other Interested Party:

The U.S. Environmental Protection Agency (EPA) would like to announce the release of the Draft 3 Version 1.0 ENERGY STAR Specification for Data Center Storage. The draft is attached and is accompanied by a cover letter and Draft Test Method. Stakeholders are invited to review these documents and submit comments to EPA via email to storage@energystar.gov by Friday, July 27, 2012.

EPA will host a webinar on Wednesday, July 11, 2012, tentatively starting at 1:00PM EST. The agenda will be focused on elements from Draft 3, Product Families, and other key topics. Please RSVP to storage@energystar.gov no later than Tuesday, July 3, 2012 with the subject "RSVP – Storage Draft 3 specification meeting."

If you have any questions, please contact Robert Meyers, EPA, at Meyers.Robert@epa.gov or (202) 343-9923; or John Clinger, ICF International, at John.Clinger@icfi.com or (202) 572-9432.

Thank you for your continued support of the ENERGY STAR program.

For more information, visit: www.energystar.gov

This message was sent to you on behalf of ENERGY STAR. Each ENERGY STAR partner organization must have at least one primary contact receiving e-mail to maintain partnership. If you are no longer working on ENERGY STAR, and wish to be removed as a contact, please update your contact status in your MESA account. If you are not a partner organization and wish to opt out of receiving e-mails, you may call the ENERGY STAR Hotline at 1-888-782-7937 and request to have your mass mail settings changed. Unsubscribing means that you will no longer receive program-wide or product-specific e-mails from ENERGY STAR.


Ok, you have been advised, nuff said for now

Cheers Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

Commentary on Clouds, Storage, Networking, Green IT and other topics

Rather than doing a bunch of separate posts, here is a collection of different perspectives and commentary on various IT and data storage industry activity.

Various comments and perspectives

In this link are comments and perspectives regarding thin provisioning including how it works as well as when to use it for optimizing storage space capacity. Speaking of server and storage capacity, here in this link are comments on what server and storage would be needed to support an SMB office of 50 people (or more, or less) along with how to back it up.

For those interested or in need of managing data and other records in this link are comments on preparing yourself for regulatory scrutiny.

Storage networking interface or protocol debates (battles) can be interesting; in this link, see the role of iSCSI SANs for data storage environments. Let's not forget about Fibre Channel over Ethernet (FCoE), which is discussed in this link and here in this link. Here in this link are comments about how integrated rackem, stackem and package bundles stack up. To support continued demand for managed service providers (MSP), cloud and hosted services providers are continuing to invest in their infrastructures, so read some comments here. While technology plays a role, particularly as it matures, there is another barrier to leveraging converged solutions and that is organizational; read some perspectives and thoughts here.

Storage optimization including data footprint reduction (DFR) can be used to cut costs as well as support growth. In this link see tips on reducing storage costs, and additional perspectives in this link to do more with what you have. Here in this link are some wit and wisdom comments on the world of disaster recovery solutions. Meanwhile in this link are perspectives for choosing the right business continuity (BC) and disaster recovery (DR) consultant. In this link are comments on BC and DR including planning for virtualization and life beyond consolidation. Are disk based dedupe and virtual tape libraries a holdover for old backup, or a gateway to the future? See some perspectives on those topics and technologies in this link.

Here are some more comments on DR and BC leveraging the cloud while perspectives on various size organizations looking at clouds for backup in this piece here. What is the right local, cloud or hybrid backup for SMBs, check out some commentary here while viewing some perspectives on cloud disaster recovery here. Not to be forgotten, laptop data protection can also be a major headache however there are also many cures discussed in this piece here.

The Storage Networking Industry Association (SNIA) Green Storage Initiative (GSI) debut their Emerald power efficiency measurement specification recently, read some perspectives and comments in this link here. While we are on the topic of data center efficiency and effectiveness, here in this link are perspectives on micro servers or mini blade systems. Solution bundles also known as data center in a box or SAN in a CAN have been popular with solutions from EMC (vBlocks) and NetApp (FlexPods) among others, read perspectives on them in this link.

Buzzword bingo

What would a conversation involving data storage and IT (particularly buzzword bingo) be without comments about Big Data and Big Bandwidth which you can read here.

Want to watch some videos, from Spring 2011 SNW, check out starting around the 15:00 to 55:00 time scale in this video from the Cube where various topics are discussed. Interested in how to scale data storage with clustered or scale up and out solutions, check out this video here or if you want to see some perspectives on data de duplication watch this clip.

Various comments and perspectives

Here is a video discussing SMBs as the current sweet spot for server virtualization, with comments on the SMB virtualization dark side also discussed here. Meanwhile, here are comments regarding EMC flash announcements from earlier this year on the Cube. Check out this video where I was a guest of Cali Lewis and John MacArthur on the Cube from the Dell Storage Forum, discussing a range of topics as well as having some fun. Also check out these videos and perspectives from VMworld 2011.

What's your take on choosing the best SMB NAS? Here are some of my perspectives on choosing an SMB NAS storage system. Meanwhile, here are some perspectives on enterprise-class storage features finding their way into SMB NAS storage systems.

Meanwhile, industry leaders EMC and NetApp have been busy enhancing their NAS storage solutions, and you can read comments here.

Are you familiar with the Open Virtualization Alliance (OVA)? Here are some comments about OVA and other server virtualization topics.

What's your take on Thunderbolt, the new interconnect Apple is using in place of USB? Here are my thoughts. Meanwhile, various other tips, Ask the Expert (AtE) answers and discussion can be found here.

Check out the above links, as well as view more perspectives, comments and news here, here, here, here and here.

Ok, nuff said for now

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

Cloud and Virtual Data Storage Networking book released

Ok, it’s now official: following its debut at the VMworld 2011 book store last week in Las Vegas, my new book Cloud and Virtual Data Storage Networking (CRC Press) is now formally released, with general availability announced today along with companion material located at https://storageioblog.com/book3, including the Cloud and Virtual Data Storage Networking LinkedIn group page launched a few months ago. Cloud and Virtual Data Storage Networking (CVDSN), a 370-page hardcover, is my third solo book, following The Green and Virtual Data Center (CRC Press, 2009) and Resilient Storage Networks (Elsevier, 2004).

Cloud and Virtual Data Storage Networking Book by Greg Schulz
The CVDSN book was on display at the VMworld 2011 book store last week along with a new book by Duncan Epping (aka @DuncanYB) and Frank Denneman (aka @frankdenneman) titled VMware vSphere 5 Clustering Technical Deepdive. You can get your copy of Duncan and Frank's new book on Amazon here.

Greg Schulz during book signing at VMworld 2011
Here is a photo of me on the left visiting a VMworld 2011 attendee in the VMworld book store.


What's inside the book: theme and topics covered

When it comes to clouds, virtualization, converged and dynamic infrastructures: don't be scared; however, do look before you leap and be prepared, including doing your homework.

What this means is that you should do your homework, prepare, learn, and get involved with proof of concepts (POCs) and training to build the momentum and success to continue an ongoing IT journey. Identify where clouds, virtualization and data storage networking technologies and techniques complement and enable your journey to efficient, effective and productive optimized IT services delivery.


There is no such thing as a data or information recession: Do more with what you have

A common challenge in many organizations is exploding data growth along with associated management tasks and constraints, including budgets, staffing, time, physical facilities, floor space, and power and cooling. IT clouds and dynamic infrastructure environments enable flexible, efficient, optimized, cost-effective and productive services delivery. The amount of data being generated, processed, and stored continues to grow, a trend that does not appear to be changing in the future. Even during the recent economic crisis, there has been no slowdown or information recession. Instead, the need to process, move, and store data has only increased; in fact, both people and data are living longer. CVDSN presents options, technologies, best practices and strategies for IT organizations looking to do more with what they have while supporting growth along with new services, without compromising on cost or QoS delivery (see figure below).

Driving Return on Innovation, the new ROI: doing more, reducing costs while boosting productivity


Expanding focus from efficiency and optimization to effectiveness and productivity

A primary tenet of a cloud and virtualized environment is to support growing demand in a cost-effective manner with increased agility, without compromising QoS. By removing complexity and enabling agility, information services can be delivered in a timely manner to meet changing business needs.


There are many types of information services delivery model options

Various types of information services delivery models should be combined to meet various needs and requirements. These complementary service delivery options and descriptive terms include cloud, virtual and data storage network enabled environments: dynamic infrastructure, public, private and hybrid cloud, abstracted, multi-tenant, capacity on demand, Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), among others.

Convergence: combining different technology domains and skill sets

Components of a cloud and virtual environment include desktops, servers, storage, networking, hardware, software and services, along with APIs and software stacks. These include virtual and physical desktops; data, voice and storage networks; LANs, SANs, MANs and WANs; faster blade and rack servers with more memory; SSD and high-capacity storage; and associated virtualization tools and management software. True convergence combines technology with people, processes and best practices, aligned to make the most of those resources for cost-effective services delivery.


Best people, processes, practices and products (the four Ps)

Bringing all the various components together are the four Ps: people skill sets, processes, practices and products. This means leveraging and enhancing people skill sets and experience; processes and procedures to optimize workflow for streamlined service orchestration; practices and policies to reduce waste more effectively without causing new bottlenecks; and products such as racks, stacks, hardware, software, and managed or cloud services.


Service categories and catalogs, templates SLO and SLA alignment

Establishing service categories tied to known service levels and costs enables resources to be aligned to applicable SLO and SLA requirements. Leveraging service templates and defined policies can enable automation and rapid provisioning of resources, including self-service requests.


Navigating to effective IT services delivery: Metrics, measurements and E2E management

You cannot effectively manage what you do not know about; likewise, without situational awareness or navigation tools, you are flying blind. E2E (end-to-end) tools can provide monitoring and usage metrics for reporting and accounting, including enabling comparison with other environments. Metrics include customer service satisfaction, SLOs and SLAs, QoS, performance, availability, and cost of services delivered.


The importance of data protection for virtual, cloud and physical environments

Clouds and virtualization are important tools and technologies for protecting existing consolidated or converged as well as traditional environments. Likewise, virtual and cloud environments or data placed there also need to be protected. Now is the time to rethink and modernize your data protection strategy to be more effective, protecting, preserving and serving more data for longer periods of time with less complexity and cost.


Packing smart and effectively for your journey: Data footprint reduction (DFR)

Reducing your data footprint impact by leveraging data footprint reduction (DFR) techniques, technologies and best practices is important for enabling an optimized, efficient and effective IT services delivery environment. Reducing your data footprint is enabled with clouds and virtualization providing a means and mechanism for archiving inactive data and for transparently moving it. On the other hand, moving to a cloud and virtualized environment to do more with what you have is enhanced by reducing the impact of your data footprint. The ABCDs of data footprint reduction include Archiving, Backup modernization, Compression and consolidation, Data management and dedupe, along with Storage tiering and thin provisioning, among other techniques.
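As a rough illustration of the "D" (dedupe) in those ABCDs, here is a minimal sketch of my own (not from the book), assuming fixed-size blocks and SHA-256 content hashes, that estimates the reduction ratio simple block-level deduplication would achieve on a piece of data:

```python
# Illustrative sketch: estimate the data footprint reduction (DFR)
# ratio that fixed-size block deduplication would achieve, by hashing
# each block and counting how many unique blocks remain.
import hashlib

def dedupe_ratio(data: bytes, block_size: int = 4096) -> float:
    """Return logical size / unique size for fixed-size block dedupe."""
    seen = set()
    blocks = 0
    for i in range(0, len(data), block_size):
        blocks += 1
        seen.add(hashlib.sha256(data[i:i + block_size]).hexdigest())
    return blocks / len(seen) if seen else 1.0

# Highly repetitive data (think repeated full backups of mostly
# unchanged files) dedupes well; random-like data yields a ratio near 1.
repetitive = b"same old backup block" * 10_000
print(round(dedupe_ratio(repetitive), 1))
```

Real dedupe implementations vary (variable-size chunking, inline versus post-process, and so on), but the underlying idea of storing each unique chunk once is the same.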

Cloud and Virtual Data Storage Networking book by Greg Schulz

How the book is laid out:

  • Table of contents (TOC)
  • How the book is organized and who should read it
  • Preface
  • Section I: Why the need for cloud, virtualization and data storage networks
  • Chapter 1: Industry trends and perspectives: From issues and challenges to opportunities
  • Chapter 2: Cloud, virtualization and data storage networking fundamentals
  • Section II: Managing data and resources: Protect, preserve, secure and serve
  • Chapter 3: Infrastructure Resource Management (IRM)
  • Chapter 4: Data and storage networking security
  • Chapter 5: Data protection (Backup/Restore, BC and DR)
  • Chapter 6: Metrics and measurement for situational awareness
  • Section III: Technology, tools and solution options
  • Chapter 7: Data footprint reduction: Enabling cost-effective data demand growth
  • Chapter 8: Enabling data footprint reduction: Storage capacity optimization
  • Chapter 9: Storage services and systems
  • Chapter 10: Server virtualization
  • Chapter 11: Connectivity: Networking with your servers and storage
  • Chapter 12: Cloud and solution packages
  • Chapter 13: Management and tools
  • Section IV: Putting IT all together
  • Chapter 14: Applying what you have learned
  • Chapter 15: Wrap-up, what’s next and book summary
  • Appendices:
  • Where to Learn More
  • Index and Glossary

Here is the release that went out via Business Wire (aka Bizwire) earlier today.


Industry Veteran Greg Schulz of StorageIO Reveals Latest IT Strategies in “Cloud and Virtual Data Storage Networking” Book
StorageIO Founder Launches the Definitive Book for Enabling Cloud, Virtualized, Dynamic, and Converged Infrastructures

Stillwater, Minnesota – September 7, 2011 – The Server and StorageIO Group (www.storageio.com), a leading independent IT industry advisory and consultancy firm, in conjunction with publisher CRC Press, a Taylor and Francis imprint, today announced the release of “Cloud and Virtual Data Storage Networking,” a new book by Greg Schulz, noted author and StorageIO founder. The book examines strategies for the design, implementation, and management of hardware, software, and services technologies that enable the most advanced, dynamic, and flexible cloud and virtual environments.

Cloud and Virtual Data Storage Networking

The book supplies real-world perspectives, tips, recommendations, figures, and diagrams on creating an efficient, flexible and optimized IT service delivery infrastructure to support demand without compromising quality of service (QoS), in a cost-effective manner. “Cloud and Virtual Data Storage Networking” looks at converging IT resources and management technologies to facilitate efficient and effective delivery of information services, including enabling information factories. Schulz guides readers of all experience levels through various technologies and techniques available to them for enabling efficient information services.

Topics covered in the book include:

  • Information services model options and best practices
  • Metrics for efficient E2E IT management and measurement
  • Server, storage, I/O networking, and data center virtualization
  • Converged and cloud storage services (IaaS, PaaS, SaaS)
  • Public, private, and hybrid cloud and managed services
  • Data protection for virtual, cloud, and physical environments
  • Data footprint reduction (archive, backup modernization, compression, dedupe)
  • High availability, business continuance (BC), and disaster recovery (DR)
  • Performance, availability and capacity optimization

This book explains when, where, with what, and how to leverage cloud, virtual, and data storage networking as part of an IT infrastructure today and in the future. “Cloud and Virtual Data Storage Networking” comprehensively covers IT data storage networking infrastructures, including public, private and hybrid cloud, managed services, virtualization, and traditional IT environments.

“With all the chatter in the market about cloud storage and how it can solve all your problems, the industry needed a clear breakdown of the facts and how to use cloud storage effectively. Greg’s latest book does exactly that,” said Greg Brunton of EDS, an HP company.

Click here to watch and listen to Schulz discuss his new book in this video about Cloud and Virtual Data Storage Networking.

About the Book

Cloud and Virtual Data Storage Networking has 370 pages, with more than 100 figures and tables, 15 chapters plus appendices, as well as a glossary. CRC Press catalog number K12375, ISBN-10: 1439851735, ISBN-13: 9781439851739, publication September 2011. The hard cover book can be purchased now at global venues including Amazon, Barnes and Noble, Digital Guru and CRCPress.com. Companion material is located at https://storageioblog.com/book3 including images, additional information, supporting site links at CRC Press, LinkedIn Cloud and Virtual Data Storage Networking group, and other books by the author. Direct book editorial review inquiries to John Wyzalek of CRC Press at john.wyzalek@taylorfrancis.com (twitter @jwyzalek) or +1 (917) 351-7149. For bulk and special orders contact Chris Manion of CRC Press at chris.manion@taylorandfrancis.com or +1 (561) 998-2508. For custom, derivative works and excerpts, contact StorageIO at info@storageio.com.

About the Author

Greg Schulz is the founder of the independent IT industry advisory firm StorageIO. Before forming StorageIO, Schulz worked for several vendors in systems engineering, sales, and marketing technologist roles. In addition to having been an analyst, vendor and VAR, Schulz also gained real-world hands-on experience working in IT organizations across different industry sectors. His IT customer experience spans systems development, systems administration, disaster recovery consulting, and capacity planning across different technology domains, including servers, storage, I/O networking hardware, software and services. Today, in addition to his analyst and research duties, Schulz is a prolific writer, blogger, and sought-after speaker, sharing his expertise with worldwide technology manufacturers and resellers, IT users, and members of the media. With an insightful and thought-provoking style, Schulz is also author of the books “The Green and Virtual Data Center” (CRC Press, 2009), which is on the Intel developers recommended reading list, and the SNIA-endorsed book “Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures” (Elsevier, 2004). Schulz is available for interviews and commentary, briefings, speaking engagements at conferences and private events, webinars, video and podcast appearances, along with custom advisory consultation sessions. Learn more at https://storageio.com.

End of press release.

Wrap up

I want to express thanks to all of those involved with the project, which spanned the past year.

Stay tuned for more news and updates pertaining to Cloud and Virtual Data Storage Networking along with related material, including upcoming events as well as chapter excerpts. Speaking of events, here is information on an upcoming workshop seminar for IT storage and networking professionals that I will be involved with, to be held October 4th and 5th in the Netherlands.

You can get your copy now at global venues including Amazon, Barnes and Noble, Digital Guru and CRCPress.com.

Ok, nuff said, for now.

Cheers gs


Industry trend: People plus data are aging and living longer

Let's face it: people and information are living longer, thus there are more of each, along with a strong interdependency between the two.

People living and data being retained longer should not be a surprise; take a step back and look at the bigger picture. There is no such thing as an information recession, with more data being generated, processed, moved and stored for longer periods of time, not to mention that the typical data object is also getting larger.

Industry trend and performance

By data objects getting larger, think about a digital photo taken on a typical camera ten years ago, whose resolution was lower and thus whose file size would have been measured in kilobytes (thousands of bytes). Today megapixel resolutions are common on cell phones, smart phones and PDAs, and even larger with more robust digital and high definition (HD) still and video cameras. This means that a photo of the same object that resulted in a file of hundreds of kilobytes ten years ago would be measured in megabytes today. With three-dimensional (3D) cameras appearing along with higher resolutions, you do not need to be a rocket scientist or industry pundit to figure out what that growth trend trajectory looks like.
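A quick back-of-envelope sketch makes the trend concrete. The resolutions and the ~0.3 bytes-per-pixel JPEG figure below are my own illustrative assumptions (a VGA-class sensor versus a common 8-megapixel sensor), not numbers from the post:

```python
# Rough estimate of compressed photo size at two sensor resolutions,
# assuming roughly 0.3 bytes per pixel for a typical-quality JPEG.
def jpeg_size_bytes(width: int, height: int, bytes_per_pixel: float = 0.3) -> int:
    """Approximate JPEG file size for a given pixel resolution."""
    return int(width * height * bytes_per_pixel)

old_photo = jpeg_size_bytes(640, 480)       # VGA-era snapshot
new_photo = jpeg_size_bytes(3264, 2448)     # 8-megapixel snapshot
print(old_photo // 1024, "KB vs", new_photo // (1024 * 1024), "MB")  # → 90 KB vs 2 MB
```

Same subject, same shot: the file grows from tens of kilobytes to megabytes simply because the pixel count grew by more than an order of magnitude.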

However, it is not just the size of the data that is getting larger; there are also more instances along with copies of those files, photos, videos and other objects being created, stored and retained. Similar to data, there are more people now than ten years ago, and some of those have also grown larger, or at least around the waistline. This means that more people are creating and relying on larger amounts of information being available or accessible when and where needed. As people grow older, the amount of data that they generate will naturally increase, as will the information that they consume and rely upon.

Where things get interesting is that looking back in history, that is, more than ten or even a hundred years, the trend is that there are more people, they are living longer, and they are generating larger amounts of data that is taking on new value or meaning. Heck, you can even go back hundreds to thousands of years and see early forms of data archiving and storage in drawings on the walls of caves and other venues. I wonder: had the cost (and ease of use) of storing and keeping data been lower back then, would more information have been saved? Or was it a case of the then state-of-the-art data and information storage medium being too difficult to use, combined with limited capacities, so they simply ran out of storage and retention mediums (e.g. walls and ceilings)?

Let's come back to the present for a moment and another trend: data that in the past would have been kept offline, or at best near-line, due to cost and other limits or constraints is finding its way online, either in public or private venues (or clouds if you prefer).

Thus the trend is one of expanding data life cycles, with some types of data being kept online or readily accessible as their value is discovered.

Evolving data life cycle and access patterns

Here is an easy test: think of something that you may have googled or searched for a year or two ago that either could not be found or was very difficult to find. Now take that same search or topic query and see if anything appears, and if it does, how many instances of it appear. Then make a note to do the same test again in a year or even six months and compare the results.

Now back to the future, however with an eye to the past, and things get even more interesting: some researchers are saying that in centuries to come, we should expect to see more people not only living into their hundreds but even longer. This follows the trend of the average life expectancy of people continuing to increase over decades and centuries.

What if people start to live hundreds of years or even longer? What about the information they will generate and rely upon, and its later life cycle or span?

More information and data

Here is a link to a post where a researcher sees that, very far down the road, people could live to be a thousand years old, which brings up the question: what about all the data they generate and rely upon during their lifetime?

Ok, now back to the 21st century, where it is safe to say that there will be more data and information to process, move, store and keep for longer periods of time in a cost-effective way. This means applying data footprint reduction (DFR) such as archiving, backup and data protection modernization, compression, consolidation where possible, dedupe and data management including deletion where applicable, along with other techniques and technologies combined with best practices.

Will you outlive your data, or will your data survive you?

These are among the things to ponder while you enjoy your summer (northern hemisphere) vacation, sitting on a beach or poolside enjoying a cool beverage, perhaps gazing at the passing clouds and reflecting on all things great and small.

Clouds: Don't be scared; however, look before you leap and be prepared

Ok, nuff said for now.

Cheers gs


Cloud storage: Don't be scared, however look before you leap

Here is a link to a web cast on BrightTalk that I will be doing live on Thursday, June 9, 2011 at 1PM Pacific, 3PM Central or 4PM Eastern time, lasting about 45 minutes. The web cast is titled: Cloud storage: Don't be scared, however look before you leap.

This web cast session takes a look at the state of public, private and hybrid cloud storage solutions and services including what you need to know to be prepared for a successful deployment. Topics to be covered include best practices, management and data protection in addition to navigating the hype and FUD associated with cloud storage today.

Cloud storage: Don't be scared, however look before you leap and do your homework

Check out the web cast, either live or via the replay later.

Cheers gs

Greg Schulz – Author The Green and Virtual Data Center (CRC), Resilient Storage Networks (Elsevier) and coming summer 2011 Cloud and Virtual Data Storage Networking (CRC)