Data Protection Diaries Fundamental Resources Where to Learn More

Companion to Software Defined Data Infrastructure Essentials – Cloud, Converged, Virtual Fundamental Server Storage I/O Tradecraft (CRC Press 2017)

server storage I/O data infrastructure trends

By Greg Schulz – www.storageioblog.com – November 26, 2017

This is the last in a multi-part series on Data Protection fundamental tools, topics, techniques, terms, technologies, trends, tradecraft and tips, as a follow-up to my Data Protection Diaries series, as well as a companion to my new book Software Defined Data Infrastructure Essentials – Cloud, Converged, Virtual Server Storage I/O Fundamental Tradecraft (CRC Press 2017).

Click here to view the previous post, Part 9 – Who’s Doing What (Toolbox Technology Tools).

Software Defined Data Infrastructure Essentials Book SDDC

Posts in the series include excerpts from Software Defined Data Infrastructure (SDDI) pertaining to data protection for legacy along with software defined data centers (SDDC) and data infrastructures in general, along with related topics. In addition to excerpts, the posts also contain links to articles, tips, posts, videos, webinars, events and other companion material. Note that figure numbers in this series are those from the SDDI book and are not in the order that they appear in the posts.

In this post the focus is on Data Protection Resources and Where to Learn More.

SDDC, SDI, SDDI data infrastructure
Figure 1.5 Data Infrastructures and other IT Infrastructure Layers

Software Defined Data Infrastructure Essentials Table of Contents (TOC)

Here is a link (PDF) to the table of contents (TOC) for Software Defined Data Infrastructure Essentials.

The following is a Software Defined Data Infrastructure Essentials book TOC summary:

Chapter 1: Server Storage I/O and Data Infrastructure Fundamentals
Chapter 2: Application and IT Environments
Chapter 3: Bits, Bytes, Blobs, and Software-Defined Building Blocks
Chapter 4: Servers: Physical, Virtual, Cloud, and Containers
Chapter 5: Server I/O and Networking
Chapter 6: Servers and Storage-Defined Networking
Chapter 7: Storage Mediums and Component Devices
Chapter 8: Data Infrastructure Services: Access and Performance
Chapter 9: Data Infrastructure Services: Availability, RAS, and RAID
Chapter 10: Data Infrastructure Services: Availability, Recovery-Point Objective, and Security
Chapter 11: Data Infrastructure Services: Capacity and Data Reduction
Chapter 12: Storage Systems and Solutions (Products and Cloud)
Chapter 13: Data Infrastructure and Software-Defined Management
Chapter 14: Data Infrastructure Deployment Considerations
Chapter 15: Software-Defined Data Infrastructure Futures, Wrap-up, and Summary
Appendix A: Learning Experiences
Appendix B: Additional Learning, Tools, and Tradecraft Tricks
Appendix C: Frequently Asked Questions
Appendix D: Book Shelf and Recommended Reading
Appendix E: Tools and Technologies Used in Support of This Book
Appendix F: How to Use This Book for Various Audiences
Appendix G: Companion Website and Where to Learn More
Glossary
Index

Click here to view (PDF) table of contents (TOC).

Data Protection Resources Where To Learn More

Learn more about Data Infrastructure and Data Protection related technology, trends, tools, techniques, tradecraft and tips with the following links.

The following are the various posts that are part of this data protection series:

  • Part 1 – Data Infrastructure Data Protection Fundamentals
  • Part 2 – Reliability, Availability, Serviceability (RAS) Data Protection Fundamentals
  • Part 3 – Data Protection Access Availability RAID Erasure Codes (EC) including LRC
  • Part 4 – Data Protection Recovery Points (Archive, Backup, Snapshots, Versions)
  • Part 5 – Point In Time Data Protection Granularity Points of Interest
  • Part 6 – Data Protection Security Logical Physical Software Defined
  • Part 7 – Data Protection Tools, Technologies, Toolbox, Buzzword Bingo Trends
  • Part 8 – Data Protection Diaries Walking Data Protection Talk
  • Part 9 – Who’s Doing What (Toolbox Technology Tools)
  • Part 10 – Data Protection Resources Where to Learn More

The following are various data protection blog posts:

  • Welcome to the Data Protection Diaries
  • Until the focus expands to data protection, backup is staying alive!
  • The blame game, Does cloud storage result in data loss?
  • Loss of data access vs. data loss
  • Revisiting RAID storage remains relevant and resources
  • Only you can prevent cloud (or other) data loss
  • Data protection is a shared responsibility
  • Time for CDP (Commonsense Data Protection)?
  • Data Infrastructure Server Storage I/O Tradecraft Trends (skills, experiences, knowledge)
  • My copies were corrupted: The [4] 3-2-1 rule and more about 4 3 2 1 as well as 3 2 1 here and here

The following are various data protection tips and articles:

  • Via Infostor Cloud Storage Concerns, Considerations and Trends
  • Via Network World What’s a data infrastructure?
  • Via Infostor Data Protection Gaps, Some Good, Some Not So Good
  • Via Infostor Object Storage is in your future
  • Via Iron Mountain Preventing Unexpected Disasters
  • Via InfoStor – The Many Variations of RAID Storage
  • Via InfoStor – RAID Remains Relevant, Really!
  • Via WservNews Cloud Storage Considerations (Microsoft Azure)
  • Via ComputerWeekly Time to restore from backup: Do you know where your data is?
  • Via Network World Ensure your data infrastructure remains available and resilient

The following are various data protection related webinars and events:

  • BrightTalk Webinar Data Protection Modernization – Protect, Preserve and Serve your Information
  • BrightTalk Webinar BCDR and Cloud Backup Protect Preserve and Secure Your Data Infrastructure
  • TechAdvisor Webinar (Free with registration) All You Need To Know about ROBO data protection
  • TechAdvisor Webinar (Free with registration) Tips for Moving from Backup to Full Disaster Recovery

The following are various data protection tools, technologies, services, vendor and industry resource links:

  • Various Data Infrastructure related news commentary, events, tips and articles
  • Data Center and Data Infrastructure industry links (vendors, services, tools, technologies, hardware, software)
  • Data Infrastructure server storage I/O network Recommended Reading List Book Shelf
  • Software Defined Data Infrastructure Essentials (CRC 2017) Book
  • Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.
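
Among the posts linked above is one on the 3-2-1 rule. As a quick sketch of that rule (at least 3 copies of the data, on at least 2 different media types, with at least 1 copy offsite), here is a minimal, hypothetical checker; the `Copy` type and media names are illustrative, not from any specific tool:

```python
from dataclasses import dataclass

@dataclass
class Copy:
    media: str       # e.g. "disk", "tape", "cloud" (illustrative labels)
    offsite: bool    # is this copy stored away from the primary site?

def meets_3_2_1(copies):
    """Check the classic 3-2-1 rule: at least 3 copies of the data,
    on at least 2 different media types, with at least 1 copy offsite."""
    return (len(copies) >= 3
            and len({c.media for c in copies}) >= 2
            and any(c.offsite for c in copies))

copies = [Copy("disk", False), Copy("tape", False), Copy("cloud", True)]
print(meets_3_2_1(copies))  # True
```

Dropping the offsite cloud copy, or keeping all copies on one media type, makes the check fail, which is the point of the rule.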

    What This All Means

Everything is not the same across environments, data centers and data infrastructures, including SDDC, SDX and SDDI, as well as applications along with their data.

Likewise, everything is not, and does not have to be, the same when it comes to Data Protection.

Since everything is not the same, various data protection approaches are needed to address various application performance, availability, capacity and economic (PACE) needs, as well as SLOs and SLAs.
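
As one illustration of matching approaches to different needs, a policy might map an application's recovery point objective (RPO) to a protection technique. The thresholds and technique names below are purely illustrative assumptions, not recommendations from the book:

```python
def protection_for(rpo_minutes):
    """Map a recovery point objective (RPO) in minutes to a data
    protection technique. Thresholds are illustrative assumptions only;
    real policies also weigh cost, SLOs and application PACE needs."""
    if rpo_minutes == 0:
        return "synchronous replication"
    if rpo_minutes <= 15:
        return "asynchronous replication or CDP"
    if rpo_minutes <= 240:
        return "frequent snapshots"
    return "scheduled backup"

for rpo in (0, 10, 60, 1440):
    print(rpo, "->", protection_for(rpo))
```

The idea is simply that one technique does not fit all applications; the tighter the RPO, the more continuous (and costly) the protection tends to be.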

Data protection encompasses many different hardware, software and services (including cloud) technologies, tools, techniques, best practices, policies and tradecraft experience skills (e.g. knowing what to use when, where, why and how).

    Software Defined Data Infrastructure Essentials Book SDDC

Context is important, as different terms have various meanings depending on the context in which they are discussed. Likewise, different technologies and topics such as object, blob, backup, replication, RAID, erasure code (EC), mirroring, gaps (good, bad, ugly), snapshot, checkpoint, availability and durability, among others, have various meanings depending on context as well as implementation approach.

    In most cases there is no bad technology or tool, granted there are some poor or bad (even ugly) implementations, as well as deployment or configuration decisions. What this means is the best technology or approach for your needs may be different from somebody else’s and vice versa.

Some other points: there is no such thing as an information recession, with more data generated every day; granted, how that data is transformed or stored can be in a smaller footprint. Likewise there is an increase in the size of data, including unstructured big data, as well as in volume (how much data) and velocity (the speed at which it is created, moved, processed and stored). This also means there is an increased dependency on data being available, accessible and intact with consistency. Thus the fundamental role of data infrastructures (e.g. what’s inside the data center or cloud) is to combine resources, technologies, tools, techniques, best practices, policies and people skill sets and experiences (e.g. tradecraft) to protect, preserve, secure and serve information (applications and data).

Modernizing data protection, including backup, availability and related topics, means more than swapping out one piece of hardware, software, service or cloud for whatever is new and then using it in old ways.

What this means is to start using new (and old) things in new ways. For example, move beyond using SSDs or HDDs like tape as targets for backup or other data protection approaches. Instead, use SSD, HDD or cloud as a tier, and also enable faster protection and recovery by stepping back and rethinking what to protect, when, where, why and how, applying applicable techniques, tools and technologies. Find a balance between knowing all about the tools and trends without understanding how to use those toolbox items, and knowing all about the techniques of how to use the tools without knowing what the tools are.

Want to learn more, or have questions about specific tools, technologies, trends, vendors, products, services or techniques discussed in this series? Send a note (info at storageio dot com) or use our contact page. We can set up a time to discuss your questions or needs pertaining to Data Protection as well as data infrastructure related topics, from legacy to software defined, virtual, cloud and container, among others. For example: consulting, advisory services, architecture strategy design, technology selection and acquisition coaching, education knowledge transfer sessions, seminars, webinars, special projects, test drive lab reviews or audits, content generation, videos, podcasts, custom content, chapter excerpts and demand generation, among many other things.

    Get your copy of Software Defined Data Infrastructure Essentials here at Amazon.com, at CRC Press among other locations and learn more here.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Like Data They Protect For Now Quantum Revenues Continue To Grow

    server storage I/O data infrastructure trends

For Now Quantum Revenues Continue To Grow. The other day, following their formal announcement, I received a summary update from Quantum pertaining to their recent Q1 results (shown below).

    Data Infrastructures Protect Preserve Secure and Serve Information
    Various IT and Cloud Infrastructure Layers including Data Infrastructures

Quantum’s Revenues Continue To Grow Like Data

One of the certainties in life is change; another is continued growth in data that gets transformed into information via IT and other applications. Data Infrastructures’ fundamental role is to enable an environment for applications and data to be transformed into information and delivered as services. In other words, Data Infrastructures exist to protect, preserve, secure and serve information along with the applications and data they depend on. Quantum’s role is to provide solutions and technologies for enabling legacy, cloud and other software defined data infrastructures to protect, preserve, secure and serve data.

What caught my eye in Quantum’s announcement was that, while the numbers are not the earth-shattering growth normally associated with a hot startup, for a legacy data infrastructure and storage vendor, Quantum’s numbers are hanging in there.

At a time when some legacy vendors as well as startups struggle with increased competition from others, including cloud, Quantum appears, at least for now, to be hanging in there with some gains.

The other thing that caught my eye is that most of the growth, not surprisingly, is in non-tape solutions, particularly around their bulk scale-out StorNext storage solutions, though there is some growth in tape.

    Here is the excerpt of what Quantum sent out:

    
    Highlights for the quarter (all comparisons are to the same period a year ago):
    
    •	Grew total revenue and generated profit for 5th consecutive quarter
    •	Total revenue was up slightly to $117M, with 3% increase in branded revenue
    •	Generated operating profit of $1M with earnings per share of 4 cents, up 2 cents
    •	Grew scale-out tiered storage revenue 10% to $34M, with strong growth in video surveillance and technical workflows
    o	Key surveillance wins included deals with an Asian government for surveillance at a presidential palace and other government facilities, with a major U.S. port and with four new police department customers
    o	Established several new surveillance partnerships – one of top three resellers/integrators in China (Uniview) and two major U.S. integrators (Protection 1 and Kratos)
    o	Won two surveillance awards for StorNext – Security Industry Association’s New Product Showcase award and Security Today magazine’s Platinum Govies Government Security award
    o	Key technical workflow wins included deals at an international defense and aerospace company to expand StorNext archive environment, a leading biotechnology firm for 1 PB genomic sequencing archive, a top automaker involving autonomous driving research data and a U.S. technology institute involving high performance computing  
    o	Announced StorNext 6, which adds new advanced data management features to StorNext’s industry-leading performance and is now shipping
    o	Announced scale-out partnerships with Veritone on artificial intelligence and DataFrameworks on data visualization and management  
    •	Tape automation, devices and media revenue increased 6% overall while branded revenue for this product category was up 14%
    o	Strong sales of newest generation Scalar i3 and i6 tape libraries
    •	Established new/enhanced data protection partnerships
    o	Enhanced partnership with Veeam, making it easier for their customers to deploy 3-2-1 data protection best practices
    o	Became Pure Storage alliance partner, providing our data protection and archive solutions for their customers through mutual channel partners
    

    Where To Learn More

    Learn more about related technology, trends, tools, techniques, and tips with the following links.

    What This All Means

Keep in mind that Data Infrastructures’ fundamental role is to enable an environment for applications and data to be transformed into information and delivered as services. Data Infrastructures exist to protect, preserve, secure and serve information along with the applications and data they depend on. Quantum continues to evolve its business, as it has for several years, from one focused on tape and related technologies to one that includes tape as well as many other solutions for legacy as well as software defined, cloud and virtual environments. For now, Quantum revenues continue to grow and diversify.

    Ok, nuff said, for now.
    Gs

    Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.


NVMe Won’t Replace Flash By Itself; They Complement Each Other

    server storage I/O data infrastructure trends

    Updated 2/2/2018

    Various Solid State Devices (SSD) including NVMe, SAS, SATA, USB, M.2

There has been some recent industry marketing buzz generated by a startup looking to get some attention by claiming, via a study it sponsored, that Non-Volatile Memory (NVM) Express (NVMe) will replace flash storage. Granted, many IT customers as well as vendors are still confused by NVMe, thinking it is a storage medium as opposed to an interface used for accessing fast storage devices such as NAND flash among other solid state devices (SSDs). Part of that confusion can be tied to the fact that common SSD-based devices rely on NVM, that is, persistent memory that retains data when powered off (unlike the DRAM in your computer).

    NVMe is an access interface and protocol

Instead of saying NVMe will mean the demise of flash, what should or could be said (though some might be scared to say it) is that other interfaces and protocols such as SAS (Serial Attached SCSI), AHCI/SATA, mSATA, Fibre Channel SCSI Protocol (aka FCP, or simply Fibre Channel [FC]), iSCSI and others are what can be replaced by NVMe. NVMe is simply the path or roadway, along with the traffic rules, for getting from point A (such as a server) to point B (some storage device or medium, e.g. flash SSD). The storage medium is where data is stored, such as magnetic for Hard Disk Drive (HDD) or tape, NAND flash, 3D XPoint and Optane, among others.

    NVMe and NVM better together

    NVMe and NVM including flash are better together

The simple, quick, get-to-the-point version is that NVMe (Non-Volatile Memory [NVM] Express) is an interface protocol (like SAS, SATA and iSCSI, among others) used for communicating with various non-volatile memory (NVM) and solid state devices (SSDs). NVMe is how data gets moved between a computer or other system and the NVM persistent memory, such as NAND flash, 3D XPoint, spin-torque or other storage class memories (SCM).

In other words, the only thing NVMe will, should, might or could kill off would be the use of some other interface such as SAS, SATA/AHCI, Fibre Channel or iSCSI, along with proprietary drivers or protocols. On the other hand, given the extensibility of NVMe and how it can be used in different configurations, including as part of fabrics, it is an enabler for various NVMs, also known as persistent memories, SCMs and SSDs, including those based on NAND flash as well as emerging 3D XPoint (or the Intel Optane version), among others.
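
The interface-versus-medium distinction even shows up in Linux device naming: an NVMe-attached device appears as `nvme0n1` while a SAS/SATA (or other SCSI-path) device appears as `sda`, regardless of whether the medium behind it is flash or spinning disk. A small heuristic sketch (the function name is mine, and this only classifies the access interface, not the medium):

```python
import re

def access_interface(dev):
    """Classify a Linux block device name by its access interface.
    Heuristic sketch: the device name reveals the interface
    (NVMe vs the SCSI/SATA "sd" path), not the storage medium behind it."""
    name = dev.split("/")[-1]
    if re.match(r"nvme\d+n\d+$", name):
        return "NVMe"
    if re.match(r"sd[a-z]+$", name):
        return "SAS/SATA (sd)"
    return "unknown"

print(access_interface("/dev/nvme0n1"))  # NVMe
print(access_interface("/dev/sda"))      # SAS/SATA (sd)
```

Two flash SSDs can sit behind either name; only the path to them differs, which is the point of the paragraph above.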

    Where To Learn More

    View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

Context matters: for example, NVM as the medium compared to NVMe as the interface and access protocol. With context in mind you can compare like or similar apples to apples, such as NAND flash, MRAM, NVRAM, 3D XPoint and Optane, among other persistent memories also known as storage class memories, NVMs and SSDs. Likewise, with context in mind, NVMe can be compared to other interfaces and protocols such as SAS, SATA, PCIe, mSATA and Fibre Channel, among others. The following puts all of this into context, including various packaging options, interfaces and access protocols, functionality and media.

    NVMe is the access for NVM flash
    Putting IT all together

Will NVMe kill off flash? IMHO no, not by itself; however, NVMe combined with some other form of NVM, SCM or persistent memory as a storage medium may eventually emerge as an alternative to NVMe and flash (or SAS/SATA and flash). However, for now, at least for many applications, NVMe is in your future (along with flash among other storage mediums); the questions include when, where, why, how and with what, among other questions (and answers). NVMe won’t replace flash by itself (at least yet), as they complement each other.

Keep in mind: if NVMe is the answer, what are the questions?

    Ok, nuff said, for now.

    Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.


Data Storage Tape Update V2014, It’s Still Alive

    server storage I/O trends

A year or so ago I did a piece on tape summit resources. Despite being declared dead for decades, and though it will probably continue being declared dead for years to come, magnetic tape is in fact still alive and being used by some organizations; granted, its role is changing while the technology continues to evolve.

Here is the memo I received today from the PR folks of the Tape Storage Council (e.g. the tape vendors’ marketing consortium), and for simplicity (mine), I’m posting it here for you to read in its entirety vs. possibly in pieces elsewhere. Note that this is basically a tape status update and collection of marketing and press release talking points; however, you can get an idea of the current messaging, who is using tape, and technology updates.

    Tape Data Storage in 2014 and looking towards 2015

True to the nature of magnetic tape as a data storage medium, this is not a low-latency small post, but rather a large, high-capacity bulk post, or perhaps all you need to know about tape for now, or until next year. On the other hand, if you are a tape fan, you can certainly take the memo from the tape folks, as well as visit their site for more info.

    From the tape storage council industry trade group:

    Today the Tape Storage Council issued its annual memo to highlight the current trends, usages and technology innovations occurring within the tape storage industry. The Tape Storage Council includes representatives of BDT, Crossroads Systems, FUJIFILM, HP, IBM, Imation, Iron Mountain, Oracle, Overland Storage, Qualstar, Quantum, REB Storage Systems, Recall, Spectra Logic, Tandberg Data and XpresspaX.  

    Data Growth and Technology Innovations Fuel Tape’s Future
    Tape Addresses New Markets as Capacity, Performance, and Functionality Reach New Levels

    Abstract
    For the past decade, the tape industry has been re-architecting itself and the renaissance is well underway. Several new and important technologies for both LTO (Linear Tape Open) and enterprise tape products have yielded unprecedented cartridge capacity increases, much longer media life, improved bit error rates, and vastly superior economics compared to any previous tape or disk technology. This progress has enabled tape to effectively address many new data intensive market opportunities in addition to its traditional role as a backup device such as archive, Big Data, compliance, entertainment and surveillance. Clearly disk technology has been advancing, but the progress in tape has been even greater over the past 10 years. Today’s modern tape technology is nothing like the tape of the past.

    The Growth in Tape  
    Demand for tape is being fueled by unrelenting data growth, significant technological advancements, tape’s highly favorable economics, the growing requirements to maintain access to data “forever” emanating from regulatory, compliance or governance requirements, and the big data demand for large amounts of data to be analyzed and monetized in the future. The Digital Universe study suggests that the world’s information is doubling every two years and much of this data is most cost-effectively stored on tape.

    Enterprise tape has reached an unprecedented 10 TB native capacity with data rates reaching 360 MB/sec. Enterprise tape libraries can scale beyond one exabyte. Enterprise tape manufacturers IBM and Oracle StorageTek have signaled future cartridge capacities far beyond 10 TBs with no limitations in sight.  Open systems users can now store more than 300 Blu-ray quality movies with the LTO-6 2.5 TB cartridge. In the future, an LTO-10 cartridge will hold over 14,400 Blu-ray movies. Nearly 250 million LTO tape cartridges have been shipped since the format’s inception. This equals over 100,000 PB of data protected and retained using LTO Technology. The innovative active archive solution combining tape with low-cost NAS storage and LTFS is gaining momentum for open systems users.
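
A quick sanity check of the movie figures above: both the LTO-6 claim (300 movies on a 2.5 TB cartridge) and the LTO-10 claim (14,400 movies on a roadmap cartridge, roughly 120 TB compressed) imply about the same per-movie size:

```python
# Implied per-movie size from the council's capacity claims (decimal GB).
# The 120 TB LTO-10 figure is the compressed roadmap capacity cited later
# in this memo; all numbers are the council's, the arithmetic is mine.
lto6_tb, lto6_movies = 2.5, 300
lto10_tb, lto10_movies = 120, 14_400

gb_per_movie_lto6 = lto6_tb * 1000 / lto6_movies
gb_per_movie_lto10 = lto10_tb * 1000 / lto10_movies
print(round(gb_per_movie_lto6, 1), round(gb_per_movie_lto10, 1))  # 8.3 8.3
```

Both work out to roughly 8.3 GB per "Blu-ray quality" movie, so the two claims are internally consistent.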

    Recent Announcements and Milestones
    Tape storage is addressing many new applications in today’s modern data centers while offering welcome relief from constant IT budget pressures. Tape is also extending its reach to the cloud as a cost-effective deep archive service. In addition, numerous analyst studies confirm the TCO for tape is much lower than disk when it comes to backup and data archiving applications. See TCO Studies section below.

    • On Sept. 16, 2013 Oracle Corp announced the StorageTek T10000D enterprise tape drive. Features of the T10000D include an 8.5 TB native capacity and data rate of 252 MB/s native. The T10000D is backward read compatible with all three previous generations of T10000 tape drives.
    • On Jan. 16, 2014 Fujifilm Recording Media USA, Inc. reported it has manufactured over 100 million LTO Ultrium data cartridges since its release of the first generation of LTO in 2000. This equates to over 53 thousand petabytes (53 exabytes) of storage and more than 41 million miles of tape, enough to wrap around the globe 1,653 times.
    • April 30, 2014, Sony Corporation independently developed a soft magnetic under layer with a smooth interface using sputter deposition, created a nano-grained magnetic layer with fine magnetic particles and uniform crystalline orientation. This layer enabled Sony to successfully demonstrate the world’s highest areal recording density for tape storage media of 148 GB/in2. This areal density would make it possible to record more than 185 TB of data per data cartridge.
    • On May 19, 2014 Fujifilm in conjunction with IBM successfully demonstrated a record areal data density of 85.9 Gb/in2 on linear magnetic particulate tape using Fujifilm’s proprietary NANOCUBIC™ and Barium Ferrite (BaFe) particle technologies. This breakthrough in recording density equates to a standard LTO cartridge capable of storing up to 154 terabytes of uncompressed data, making it 62 times greater than today’s current LTO-6 cartridge capacity and projects a long and promising future for tape growth.
• On Sept. 9, 2014 IBM announced LTFS LE version 2.1.4, extending LTFS (Linear Tape File System) tape library support.
    • On Sept. 10, 2014 the LTO Program Technology Provider Companies (TPCs), HP, IBM and Quantum, announced an extended roadmap which now includes LTO generations 9 and 10. The new generation guidelines call for compressed capacities of 62.5 TB for LTO-9 and 120 TB for generation LTO-10 and include compressed transfer rates of up to 1,770 MB/second for LTO-9 and a 2,750 MB/second for LTO-10. Each new generation will include read-and-write backwards compatibility with the prior generation as well as read compatibility with cartridges from two generations prior to protect investments and ease tape conversion and implementation.
    • On Oct. 6, 2014 IBM announced the TS1150 enterprise drive. Features of the TS1150 include a native data rate of up to 360 MB/sec versus the 250 MB/sec native data rate of the predecessor TS1140 and a native cartridge capacity of 10 TB compared to 4 TB on the TS1140. LTFS support was included.
    • On Nov. 6, 2014, HP announced a new release of StoreOpen Automation that delivers a solution for using LTFS in automation environments with Windows OS, available as a free download. This version complements their already existing support for Mac and Linux versions to help simplify integration of tape libraries to archiving solutions.
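
The Fujifilm/IBM demonstration above claims a 154 TB cartridge is "62 times greater" than LTO-6's 2.5 TB; the ratio checks out:

```python
# Ratio of the demonstrated cartridge capacity to shipping LTO-6 capacity.
demo_tb = 154   # Fujifilm/IBM areal-density demo, TB per cartridge
lto6_tb = 2.5   # LTO-6 native cartridge capacity, TB
print(round(demo_tb / lto6_tb))  # 62
```

154 / 2.5 is 61.6, which rounds to the 62x the announcement cites.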

    Significant Technology Innovations Fuel Tape’s Future
    Development and manufacturing investment in tape library, drive, media and management software has effectively addressed the constant demand for improved reliability, higher capacity, power efficiency, ease of use and the lowest cost per GB of any storage solution. Below is a summary of tape’s value proposition followed by key metrics for each:

    • Tape drive reliability has surpassed disk drive reliability
    • Tape cartridge capacity (native) growth is on an unprecedented trajectory
    • Tape has a faster device data rate than disk
    • Tape has a much longer media life than any other digital storage medium
    • Tape’s functionality and ease of use is now greatly enhanced with LTFS
    • Tape requires significantly less energy consumption than any other digital storage technology
    • Tape storage has  a much lower acquisition cost and TCO than disk

Reliability. Tape reliability levels have surpassed HDDs. Reliability levels for tape exceed those of the most reliable disk drives by one to three orders of magnitude. The BER (Bit Error Rate – bits read per hard error) is rated at 1×10^19 for enterprise tape and 1×10^17 for LTO tape. This compares to 1×10^16 for the most reliable enterprise Fibre Channel disk drive.
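
To make those bit error rates concrete, they can be converted into the amount of data read per expected hard error (1×10^19 bits for enterprise tape, 1×10^17 for LTO, 1×10^16 for enterprise FC disk); the conversion below is mine, using decimal petabytes:

```python
def pb_read_per_error(ber_bits):
    """Convert a bit error rate (bits read per hard error)
    into decimal petabytes read per expected hard error."""
    return ber_bits / 8 / 1e15  # bits -> bytes -> PB

for name, ber in [("enterprise tape", 1e19),
                  ("LTO tape", 1e17),
                  ("enterprise FC HDD", 1e16)]:
    print(f"{name}: ~{pb_read_per_error(ber):,.2f} PB per hard error")
```

That is roughly 1,250 PB read per hard error for enterprise tape versus 1.25 PB for the enterprise FC disk drive, the "three orders of magnitude" the memo cites.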

    Capacity and Data Rate. LTO-6 cartridges provide 2.5 TB capacity and more than double the compressed capacity of the preceding LTO-5 drive with a 14% data rate performance boost to 160 MB/sec. Enterprise tape has reached 8.5 TB native capacity and 252 MB/sec on the Oracle StorageTek T10000D and 10 TB native capacity and 360 MB/sec on the IBM TS1150. Tape cartridge capacities are expected to grow at unprecedented rates for the foreseeable future.

Media Life. Manufacturers’ specifications indicate that enterprise and LTO tape media have a life span of 30 years or more, while the average tape drive will be deployed 7 to 10 years before replacement. By comparison, the average disk drive is operational 3 to 5 years before replacement.

    LTFS Changes Rules for Tape Access. Compared to previous proprietary solutions, LTFS is an open tape format that stores files in application-independent, self-describing fashion, enabling the simple interchange of content across multiple platforms and workflows. LTFS is also being deployed in several innovative “Tape as NAS” active archive solutions that combine the cost benefits of tape with the ease of use and fast access times of NAS. The SNIA LTFS Technical Working Group has been formed to broaden cross–industry collaboration and continued technical development of the LTFS specification.

    TCO Studies. Tape’s widening cost advantage compared to other storage mediums makes it the most cost-effective technology for long-term data retention. The favorable economics (TCO, low energy consumption, reduced raised floor) and massive scalability have made tape the preferred medium for managing vast volumes of data. Several tape TCO studies are publicly available and the results consistently confirm a significant TCO advantage for tape compared to disk solutions.

    According to the Brad Johns Consulting Group, the 10-year TCO for an LTFS-based ‘Tape as NAS’ solution totaled $1.1M, compared with $7.0M for a unified storage solution built on 4 TB hard disk drives. This equates to savings of over $5.9M, more than 84 percent, over the 10-year period. From a slightly different perspective, this is a TCO savings of over $2,900/TB of data. Source: Johns, B., “A New Approach to Lowering the Cost of Storing File Archive Information.”
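    The cited figures can be sanity-checked with simple arithmetic (the study’s own numbers, my calculation; decimal units):

```python
disk_tco = 7.0e6   # 10-year TCO of the disk-based unified storage solution
tape_tco = 1.1e6   # 10-year TCO of the LTFS-based 'Tape as NAS' solution

savings = disk_tco - tape_tco          # $5.9M saved over 10 years
pct_less = savings / disk_tco * 100    # ~84 percent less than disk
implied_tb = savings / 2900            # archive size implied by ~$2,900/TB savings

print(f"Savings: ${savings / 1e6:.1f}M ({pct_less:.0f}% less than disk), "
      f"implying roughly {implied_tb:,.0f} TB of archived data")
```

    The per-TB figure implies an archive on the order of 2 PB, which is consistent with the scale of the other studies cited here.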

    Another comprehensive TCO study by ESG (Enterprise Strategy Group), comparing an LTO-5 tape library system with a low-cost SATA disk system using de-duplication for backup (a best case for disk), shows that disk deduplication has a 2-4x higher TCO than the tape system for backup over a 5-year period. The study also revealed that disk has a TCO 15x higher than tape for long-term data archiving.

    Select Case Studies Highlight Tape and Active Archive Solutions
    CyArk is a non-profit foundation focused on the digital preservation of cultural heritage sites, including places such as Mt. Rushmore and Pompeii. CyArk predicted that its data archive would grow by 30 percent each year for the foreseeable future, reaching one to two petabytes in five years. It needed a storage solution that was secure, scalable, and cost-effective enough to provide the longevity required for these important historical assets. To meet this challenge, CyArk implemented an active archive solution featuring LTO and LTFS technologies.

    DreamWorks Animation, a global computer graphics (CG) animation studio, has implemented a reliable, cost-effective and scalable active archive solution to safeguard a 2 PB portfolio of finished movies and graphics, supporting a long-term asset preservation strategy. The studio’s comprehensive, tiered and converged active archive architecture, which spans software, disk and tape, saves the company time and money while reducing risk.

    LA Kings of the NHL rely extensively on digital video assets for marketing activities with team partners and for the team’s broadcast affiliation with Fox Sports. Today, the Kings save about 200 GB of video per game for an 82-game regular season and are on pace to generate about 32-35 TB of new data per season. The Kings chose to implement Fujifilm’s Dternity NAS active archive appliance, an open LTFS-based architecture, wanting an open archiving solution that could outlast its original hardware while maintaining data integrity. Today with Dternity and LTFS, the Kings don’t have to decide what data to keep, because they can cost-effectively save everything they might need in the future.

    McDonald’s primary challenge was to create a digital video workflow that streamlines the management and distribution of its global video assets for its video production and post-production environment. McDonald’s implemented the Spectra T200 tape library with LTO-6, providing 250 TB of video production storage. Nightly incremental backup jobs store media assets into separate disk and LTO-6 storage pools for easy backup, tracking and fast retrieval. This system design allows McDonald’s to effectively separate and manage its assets through customized automation and data service policies.

    NCSA employs an Active Archive solution providing 100 percent of the nearline storage for the NCSA Blue Waters supercomputer, which is one of the world’s largest active file repositories stored on high capacity, highly reliable enterprise tape media. Using an active archive system along with enterprise tape and RAIT (Redundant Arrays of Inexpensive Tape) eliminates the need to duplicate tape data, which has led to dramatic cost savings.

    Queensland Brain Institute (QBI) is a leading center for neuroscience research. QBI’s research focuses on the cellular and molecular mechanisms that regulate brain function to help develop new treatments for neurological and mental disorders. QBI’s storage system has to scale extensively to store, protect, and access tens of terabytes of data daily to support cutting-edge research. QBI chose an Oracle solution consisting of Oracle’s StorageTek SL3000 modular tape libraries with StorageTek T10000 enterprise tape drives. The Oracle solution improved QBI’s ability to grow, attract world-leading scientists and meet stringent funding conditions.

    Looking Ahead to 2015 and Beyond
    The role tape serves in today’s modern data centers is expanding as IT executives and cloud service providers address new applications for tape that leverage its significant operational and cost advantages. This recognition is driving investment in new tape technologies and innovations with extended roadmaps, and it is expanding tape’s profile from its historical role in data backup to one that includes long-term archiving requiring cost-effective access to enormous quantities of stored data. Given the current and future trajectory of tape technology, data intensive markets such as big data, broadcast and entertainment, archive, scientific research, oil and gas exploration, surveillance, cloud, and HPC are expected to become significant beneficiaries of tape’s continued progress. Clearly the tremendous innovation, compelling value proposition and development activities demonstrate tape technology is not sitting still; expect this promising trend to continue in 2015 and beyond. 

    Visit the Tape Storage Council at tapestorage.org

    What this means and summary

    Like it or not, tape is still alive and being used, with the technology evolving via new enhancements as outlined above.

    It is good to see the tape folks doing some marketing to get their story told and heard by those who are still interested.

    Does that mean I still use tape?

    Nope, I stopped using tape for local backups and archives well over a decade ago using disk to disk and disk to cloud.

    Does that mean I believe that tape is dead?

    Nope, I still believe that for some organizations and some usage scenarios it makes good sense. However, as with most data storage related technologies, it is not a case of one size or type of technology fitting every scenario.

    On a related note for cloud and object storage, visit www.objectstoragecenter.com

    Ok, nuff said, for now…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Data Protection Diaries: March 31 World Backup Day is Restore Data Test Time

    Storage I/O trends

    World Backup Day Generating Awareness About Data Protection

    This World Backup Day piece is part of my ongoing Data Protection Diaries series of posts (www.dataprotectiondiaries.com) about trends, strategies, tools and best practices spanning applications, archiving, backup/restore, business continuance (BC), business resiliency (BR), cloud, data footprint reduction (DFR), security, servers, storage and virtualization among other related topic themes.

    data protection threat risk scenarios
    Different threat risks and reasons to protect your digital assets (data)

    March 31 is World Backup Day, which means you should make sure that your data and digital assets (photos, videos, music or audio, scanned items) along with other digital documents are protected. Keep in mind that there are various reasons for protecting, preserving and serving your data, regardless of whether you are a consumer needing to protect home and personal information, or a large business, institution or government agency.

    Why World Backup Day and Data Protection Focus

    Being protected means making sure that there are copies of your documents, data, files, software tools, settings, configurations and other digital assets. These copies can be in different locations (home, office, on-site, off-site, in the cloud) as well as from various points in time or recovery point objectives (RPO) such as monthly, weekly, daily, hourly and so forth.

    Having different copies for various times (e.g. your protection interval) gives you the ability to go back to a specific time to recover or restore lost, stolen, damaged, infected, erased, or accidentally over-written data. Having multiple copies is also a safeguard in case either the data, files, objects or items being backed up are bad, or a copy is damaged, lost or stolen.

    Restore Test Time

    While the focus of World Backup Day is to make sure that you are backing up or protecting your data and digital assets, it is also about making sure that what you think is being protected actually is, and that those protection copies can actually be used when needed (restore, recover, rebuild, reload, rollback, among other things that start with R). This means testing that you can find the files, folders, volumes, objects or data items that were protected, and using those copies or backups to restore to a different place (you don’t want to create a disaster by over-writing your good data).

    In addition to making sure that the data can be restored to a different place, go one step further and verify that the data can actually be used: has it been decrypted or unlocked, and have the security or other rights and access settings, along with metadata, been applied? While that might seem obvious, it is often the obvious that will bite you and cause problems. Hence, take some time to test that all is working, not to mention get some practice doing restores.

    Data Protection and Backup 3 2 1 Rule and Guide

    Recently I did a piece based on my own experiences with data protection, including backup as well as restore, over at Spiceworks called My copies were corrupted: The 3-2-1 rule. For those not familiar, or as a reminder, 3 2 1 means have at least three copies (or better yet, versions) stored on at least two different devices, systems, drives, media or mediums, with at least one copy in a different location from the primary or main copy.
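    As a minimal sketch of the 3-2-1 rule in code (the record and field names here are illustrative, not from any particular backup product):

```python
# Check a set of protection copies against the 3-2-1 rule:
# at least 3 copies, on at least 2 different devices or media,
# with at least 1 copy in a different location than the primary.
from dataclasses import dataclass

@dataclass
class Copy:
    device: str    # e.g. "nas", "usb-disk", "cloud"
    location: str  # e.g. "home", "office", "cloud"

def meets_3_2_1(copies, primary_location="home"):
    media = {c.device for c in copies}                             # distinct devices/media
    offsite = any(c.location != primary_location for c in copies)  # at least one off-site
    return len(copies) >= 3 and len(media) >= 2 and offsite

copies = [Copy("nas", "home"), Copy("usb-disk", "home"), Copy("cloud", "cloud")]
print(meets_3_2_1(copies))  # True: three copies, three media, one off-site
```

    Note that versions matter as much as copies: as the corrupted-file story below shows, three identical copies of a damaged file still fail you.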

    Following is an excerpt from the My copies were corrupted: The 3-2-1 rule piece:

    Not long ago I had a situation where something happened to an XML file that I needed. I discovered it was corrupted, and I needed to do a quick restore.

    “No worries,” I thought, “I’ll simply copy the most recent version that I had saved to my file server.” No such luck. That file had just been copied and was also damaged.

    “OK, no worries,” I thought. “That’s why I have a periodic backup copy.” It turns out that had worked flawlessly. Except there was a catch — it had backed up the damaged file. This meant that any and all other copies of the file were also damaged as far back as to when the problem occurred.

    Read the full piece here.

    Backup and Data Protection Walking the Talk

    Yes, I eat my own dog food, meaning that I practice what I talk about (e.g. walking the talk), leveraging not just a 3 2 1 approach but actually more of a 4 3 2 1 hybrid. This means different protection intervals, various retentions and frequencies, not all data treated the same, using local disk and removable disk to go off-site, as well as cloud. Candidly, I also test more often by accident, using the local, removable and cloud copies when I accidentally delete something or save the wrong version.

    Some of my data and applications are protected throughout the day, others on set schedules that vary from hours to days to weeks to months or more. Yes, some of my data, such as large videos or other static items, does not change, so why back it up or protect it every day, week or month? I also align the type of protection, frequency and retention to meet different threat risks, as well as encrypt data. Part of actually testing and using the restores or recoveries is also determining what certificates or settings are missing, as well as where opportunities exist to enhance data protection.
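    The idea of aligning protection frequency and retention to different classes of data can be sketched as follows (the classes and values are illustrative examples of my general approach, not recommendations):

```python
# Map classes of data to a protection frequency and retention:
# not everything gets treated the same.
policies = {
    "active-documents": {"frequency_hours": 1,   "retention_days": 90},
    "databases":        {"frequency_hours": 4,   "retention_days": 365},
    "static-videos":    {"frequency_hours": 720, "retention_days": 3650},  # ~monthly
}

def protection_due(data_class: str, hours_since_last: float) -> bool:
    """True if this class of data is due for another protection copy."""
    return hours_since_last >= policies[data_class]["frequency_hours"]

print(protection_due("active-documents", 2))  # True: hourly interval exceeded
print(protection_due("static-videos", 24))    # False: static data, monthly cycle
```

    The point of a policy table like this is that frequency and retention are decided per threat risk and per class of data, rather than one interval for everything.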

    Closing comments (for now)

    Take some time to learn more about data protection, including how you can improve or modernize while rethinking what to protect, when, where, why, how and with what.

    In addition to having copies from different points in time and extra copies in various locations, also make sure that they are secured or encrypted AND make sure to protect your encryption keys. After all, try to find a digital locksmith to unlock your data who is not working for a government agency when you need to get access to your data ;)…

    Learn more about data protection, including backup/restore, at www.storageioblog.com/data-protection-diaries-main/ where there is a collection of related posts and presentations including:

    Also check out the collection of technology and vendor / product neutral data protection and backup/restore content at BackupU (disclosure: sponsored by Dell Data Protection Software) that includes various webinars and Google+ hangout sessions that I have been involved with.

    Watch for more data protection conversations about related trends, themes, technologies, techniques and perspectives in my ongoing data protection diaries discussions, and read more about backup and other related items at www.storageioblog.com/data-protection-diaries-main/.

    Ok, nuff said

    Cheers
    Gs


    Welcome to the Data Protection Diaries

    Updated 1/10/2018

    Storage I/O trends

    This is a series of posts about data protection which includes security (logical and physical), backup/restore, business continuance (BC), disaster recovery (DR), business resiliency (BR) along with high availability (HA), archiving and related topic themes, technologies and trends.

    Think of data protection as protecting, preserving and serving information across cloud, virtual and physical environments, spanning traditional servers, storage and I/O networking along with mobile (ok, some IoT as well), from SOHO/SMB to enterprise.

    Getting started, taking a step back

    Recently I have done a series of webinars and Google+ hangouts as part of the BackupU initiative brought to you by Dell Software (that’s a disclosure btw ;) ) that are vendor and technology neutral. Instead of the usual vendor product or technology focused seminars and events, these are about getting back to the roots, the fundamentals of what to protect when and why, and then deciding your options as well as different approaches (e.g. what tools to use when).

    In addition over the past year (ok, years) I have also been doing other data protection related events, seminars, workshops, articles, tips, posts across cloud, virtual and physical from SOHO/SMB to enterprise. These are in addition to the other data infrastructure server and storage I/O stuff (e.g. SSD, object storage, software defined, big data, little data, buzzword bingo and others).

    Keep in mind that in the data center or information factory everything is not the same, as there are different applications, threat risk scenarios, and availability and durability considerations among others. In this series, like the cloud conversations among others, I am going to be pulling various data protection themes together, hopefully making them easier for others to find, as well as for me to know where to get them.

    data protection diaries
    Some notes for an upcoming post in this series using my Livescribe about data protection

    Data protection topics, trends, technologies and related themes

    Here are some more posts to checkout pertaining to data protection trends, technologies and perspectives:

    Ok, nuff said (for now)

    Cheers
    Gs


    Part II: EMC Evolves Enterprise Data Protection with Enhancements

    Storage I/O trends

    This is the second part of a two-part series on recent EMC backup and data protection announcements. Read part I here.

    What about the products, what’s new?

    In addition to articulating their strategy for modernizing data protection (covered in part I here), EMC announced enhancements to Avamar, Data Domain, Mozy and Networker.

    Data protection storage systems (e.g. Data Domain)

    Building off of previously announced Backup Recovery Solutions (BRS) including Data Domain operating system storage software enhancements, EMC is adding more application and software integration along with new platform (systems) support.

    Data Domain (e.g. Protection Storage) enhancements include:

    • Application integration with Oracle, SAP HANA for big data backup and archiving
    • New Data Domain protection storage system models
    • Data in place upgrades of storage controllers
    • Extended Retention now available on added models
    • SAP HANA Studio backup integration via NFS
    • Boost for Oracle RMAN, native SAP tools and replication integration
    • Support for backing up and protecting Oracle Exadata
    • SAP (non HANA) support both on SAP and Oracle

    Data in place upgrades of controllers are available for 4200 series models on up (previously available on some larger models). This means that controllers can be upgraded with data remaining in place, as opposed to requiring a lengthy data migration.

    Extended Retention is a zero-cost license that enables more disk drive shelves to be attached to supported Data Domain systems. Thus there is not a license fee; however, you do pay for the storage shelves and drives to increase the available storage capacity. Note that this feature increases storage capacity by adding more disk drives and does not increase the performance of the Data Domain system. Extended Retention has been available in the past; however, it is now supported on more platform models. The extra storage capacity is essentially placed into a different tier that an archive policy can then migrate data into.

    Boost for accelerating data movement to and from Data Domain systems is only available using Fibre Channel. When asked about FC over Ethernet (FCoE) or iSCSI, EMC indicated its customers are not asking for this capability yet. This has me wondering whether the current customer focus is around FC, whether those customers are not yet ready for iSCSI or FCoE, or whether, if there were iSCSI or FCoE support, more customers would ask for it.

    With the new Data Domain protection storage systems EMC is claiming up to:

    • 4x faster performance than earlier models
    • 10x more scalable and 3x more backup/archive streams
    • 38 percent lower cost per GB based on holding price points and applying improvements


    EMC Data Domain data protection storage platform family


    Data Domain supporting both backup and archive

    Expanding Data Domain from backup to archive

    EMC continues to evolve the Data Domain platform from just being a backup target with dedupe and replication to a multi-function, multi-role solution. In other words, one platform with many uses. This is an example of using one tool or technology for different purposes, such as backup and archiving, however with separate policies. Here is a link to a video where I discuss using common tools for backup and archiving with separate policies. In the above figure EMC Data Domain is shown being used for backup along with storage tiering and archiving (file, email, SharePoint, content management and databases among other workloads).


    EMC Data Domain supporting different functions and workloads

    Also shown are various tools from other vendors, such as CommVault Simpana, that can be used as both a backup and archiving tool with Data Domain as a target. Likewise, Dell products acquired via the Quest acquisition are shown, along with those from IBM (e.g. Tivoli) and FileTek among others. Note that if you are a competitor of EMC, or simply a fan of other technology, you might conclude that the above is not so different from others. Then again, others who are not articulating their version or vision of something like the above figure probably should be, vs. arguing they did it first.

    Data source integration (aka data protection software tools)

    It seems like just yesterday that EMC acquired Avamar (2006) and NetWorker aka Legato (2003), not to mention Mozy (2007) or Dantz (Retrospect, since divested) in 2004. With the exception of Dantz (Retrospect) which is now back in the hands of its original developers, EMC continues to enhance and evolve Avamar, Mozy and NetWorker including with this announcement.

    General Avamar 7 and Networker 8.1 enhancements include:

    • Deeper integration with primary storage and protection storage tiers
    • Optimization for VMware vSphere virtual server environments
    • Improved visibility and control for data protection of enterprise applications

    Additional Avamar 7 enhancements include:

    • More Data Domain integration and leveraging as a repository (since Avamar 6)
    • NAS file systems with NDMP accelerator access (EMC Isilon & Celera, NetApp)
    • Data Domain Boost enhancements for faster backup / recovery
    • Application integration with IBM (DB2 and Notes), Microsoft (Exchange, Hyper-V images, Sharepoint, SQL Server), Oracle, SAP, Sybase, VMware images

    Note that the Avamar data store is still used mainly for ROBO and desktop/laptop type backup scenarios that do not yet support Data Domain (also see the Mozy enhancements below).

    Avamar supports VMware vSphere virtual server environments using granular Changed Block Tracking (CBT) technology as well as image-level backup and recovery with vSphere plugins. This includes Instant Access recovery when images are stored on Data Domain storage.

    Instant Access enables a VM that has been protected using Avamar image-level technology on Data Domain to be booted via an NFS VMware datastore. VMware sees the VM and is able to power it on and boot it directly from the Data Domain via the NFS datastore. Once the VM is active, it can be moved via Storage vMotion to a production VMware datastore while active (e.g. running), for recovery-on-the-fly capabilities.


    Instant Access to a VM on Data Domain storage

    EMC NetWorker 8.1 enhancements include:

    • Enhanced visibility and control for owners of data
    • Collaborative protection for Oracle environments
    • Synchronized backup and data protection between DBAs and backup admins
    • Oracle DBAs use native tools (e.g. RMAN)
    • Backup admins implement the organization’s SLAs (e.g. using NetWorker)
    • Deeper integration with EMC primary storage (e.g. VMAX, VNX, etc)
    • Isilon integration support
    • Snapshot management (VMAX, VNX, RecoverPoint)
    • Automation and wizards for integration, discovery, simplified management
    • Policy-based management, fast recovery from snapshots
    • Integrating snapshots into and as part of data protection strategy. Note that this is more than basic snapshot management as there is also the ability to roll over a snapshot into a Data Domain protection storage tier.
    • Deeper integration with Data Domain protection storage tier
    • Data Domain Boost over Fibre Channel for faster backups and restores
    • Data Domain Virtual Synthetics to cut impact of full backups
    • Integration with Avamar for managing image level backup recovery (Avamar services embedded as part of NetWorker)
    • vSphere Web Client enabling self-service recovery of VMware images
    • Newly created VMs inherit backup polices automatically

    Mozy is being positioned for enterprise remote office branch office (ROBO) or distributed private cloud scenarios where Avamar, NetWorker or Data Domain solutions are not as applicable. EMC has mentioned that it has over 800 enterprises using Mozy for desktop, laptop, ROBO and mobile data protection. Note that this is a different target market than the consumer-focused Mozy product, which also addresses smaller SMBs and SOHOs (Small Office Home Offices).

    EMC Mozy enhancements to be more enterprise grade:

    • Simplified management services and integration
    • Active Directory (AD) for Microsoft environments
    • New storage pools (multiple types of pools) vs. dedicated storage per client
    • Keyless activation for faster provisioning of backup clients

    Note that earlier this year EMC enhanced Data Protection Advisor (DPA) with version 6.0.

    What does this all mean?

    Storage I/O trends

    Data protection and backup discussions often focus around tape summit resources or cloud arguments, although this is changing. What is changing is a growing awareness and discussion around how data protection storage mediums, systems and services are used, along with the associated software management tools.

    Some will say backup is broken, often pointing a finger at a medium (e.g. tape or disk) as what is wrong. Granted, in some environments the target medium (or media) destination is an easy culprit to point a finger at as the problem (e.g. the usual tape sucks or is dead mantra). However, for many environments, while there can be issues, more often than not it is not the media, medium, device or target storage system that is broken, but rather how it is being used or abused.

    This means revisiting how tools are used along with media or storage systems allocated, used and retained with respect to different threat risk scenarios. After all, not everything is the same in the data center or information factory.

    Thus modernizing data protection is more than swapping media or mediums including types of storage system from one to another. It is also more than swapping out one backup or data protection tool for another. Modernizing data protection means rethinking what different applications and data need to be protected against various threat risks.


    What this has to do with today’s announcement is that EMC is among others in the industry moving towards a holistic data protection modernizing thought model.

    In my opinion what you are seeing out of EMC and some others is taking that step back and expanding the data protection conversation to revisit, rethink why, how, where, when and by whom applications and information get protected.

    This announcement also ties into finding and removing costs vs. simply cutting cost at the cost of something elsewhere (e.g. service levels, performance, availability). In other words, finding and removing complexities or overhead associated with data protection while making it more effective.

    Some closing points, thoughts and more links:

    • There is no such thing as a data or information recession
    • People and data are living longer while getting larger
    • Not everything is the same in the data center or information factory
    • Rethink data protection including when, why, how, where, with what and by whom
    • There is little data, big data, very big data and big fast data
    • Data protection modernization is more than playing buzzword bingo
    • Avoid using new technology in old ways
    • Data footprint reduction (DFR) can help counter changing data life-cycle patterns
    • EMC continues to leverage Avamar while keeping NetWorker relevant
    • Data Domain is evolving for both backup and archiving, an example of one tool for multiple uses

    Ok, nuff said (for now).

    Cheers gs


    EMC Evolves Enterprise Data Protection with Enhancements (Part I)

    Storage I/O trends

    A couple of months ago at EMCworld there were announcements around ViPR and Pivotal, along with trust and clouds among other topics. During the recent EMCworld event there were some questions among attendees about backup and data protection announcements (or the lack thereof).

    Modernizing Data Protection

    Today EMC announced enhancements to its Backup Recovery Solutions (BRS) portfolio (@EMCBackup) that continue to enable information and applications data protection modernizing including Avamar, Data Domain, Mozy and Networker.

    Keep in mind you can’t go forward if you can’t go back, which means if you do not have good data protection to go to, you can’t go forward with your information.

    EMC Modern Data Protection Announcements

    As part of their Backup to the Future event, EMC announced the following:

    • New generation of data protection products and technologies
    • Data Domain systems: enhanced application integration for backup and archive
    • Data protection suite tools Avamar 7 and Networker 8.1
    • Enhanced Cloud backup capabilities for the Mozy service
    • Paradigm shift as part of data protection modernizing including revisiting why, when, where, how, with what and by whom data protection is accomplished.

    What did EMC announce for data protection modernization?

    While much of the EMC data protection announcement is around product, there is also the aspect of rethinking data protection. This means looking at data protection modernization beyond swapping out media (e.g. tape for disk, disk for cloud) or one backup software tool for another. Instead, revisiting why data protection needs to be accomplished, by whom, how to remove complexity and cost, enable agility and flexibility. This also means enabling data protection to be used or consumed as a service in traditional, virtual and private or hybrid cloud environments.

    EMC uses as an example (what it refers to as Accidental Architecture) how there are different groups and areas of focus, along with silos, associated with data protection. These groups span virtual, applications, database, server and storage among others.

    The results are silos that need to be transformed in part using new technology in new ways, as well as addressing a barrier to IT convergence (people and processes). The theme behind EMC data protection strategy is to enable the needs and requirements of various groups (servers, applications, database, compliance, storage, BC and DR) while removing complexity.

    Moving from Silos of data protection to a converged service enabled model

    Three data protection and backup focus areas

    This sets the stage for the three components for enabling a converged data protection model that can be consumed or used as a service in traditional, virtual and private cloud environments.


    EMC three components of modernized data protection (EMC Future Backup)

    The three main components (and their associated solutions) of EMC BRS strategy are:

    • Data management services: Policy and storage management, SLA, SLO, monitoring, discovery, and analysis. This is where tools such as EMC Data Protection Advisor (acquired via WysDM) fit among others for coordination or orchestration, setting and managing policies, along with other activities.
    • Data source integration: Applications, databases, file systems, operating systems, hypervisors, and primary storage systems. This is where data movement tools such as Avamar and Networker among others fit, along with interfaces to application tools such as Oracle RMAN.
    • Protection storage: Targets, destination storage systems with media or mediums optimized for protecting and preserving data, along with enabling data footprint reduction (DFR). DFR includes functionality such as compression and dedupe among others. An example of protection storage is EMC Data Domain.
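
    As a rough sketch of how the three layers above relate, the hypothetical model below (names such as ProtectionPolicy and backups_per_day are illustrative, not EMC product APIs) maps an SLO-driven policy (data management services) onto a job tying a data source (integration layer) to protection storage (target):

```python
from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    """Data management services layer: an SLO-driven policy (hypothetical)."""
    name: str
    rpo_hours: int       # recovery point objective, i.e. tolerable data loss window
    retention_days: int  # how long protection copies are kept

@dataclass
class ProtectionJob:
    """Ties a data source (integration layer) to protection storage (target)."""
    source: str   # e.g. a database, file system, or hypervisor
    target: str   # e.g. a dedupe-enabled protection storage system
    policy: ProtectionPolicy

def backups_per_day(job: ProtectionJob) -> int:
    """Number of protection copies per day implied by the policy's RPO."""
    return max(1, 24 // job.policy.rpo_hours)

gold = ProtectionPolicy("gold", rpo_hours=4, retention_days=35)
job = ProtectionJob(source="oracle-prod", target="protect-appliance", policy=gold)
print(backups_per_day(job))  # a 4-hour RPO implies 6 protection copies per day
```

    The point of the sketch is that policy (the why and when) is decoupled from both the source and the target, which is what lets protection be consumed as a service rather than built per-silo.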

    Read more about product items announced and what this all means here in the second of this two-part series.

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Tape is still alive, or at least in conversations and discussions

    StorageIO Industry trends and perspectives image

    Depending on whom you talk to or ask, you will get different views and opinions, some of them stronger than others, on whether magnetic tape is dead or alive as a data storage medium. However, one aspect of tape that is very much alive is the discussion among those who are for it, against it, or who simply see it as one of many data storage mediums and technologies whose role is changing.

    Here is a link to an ongoing discussion over in one of the LinkedIn group forums (Backup & Recovery Professionals) titled About Tape and disk drives. Rest assured, there is plenty of FUD and hype on both sides of the tape is dead (or alive) arguments, not very different from the disk is dead vs. SSD or cloud arguments. After all, not everything is the same in data centers, clouds, and information factories.

    FWIW, I removed tape from my environment about eight years ago, at least directly, as some of my cloud providers may in fact be using tape in various ways that I do not see. Nor do I care one way or the other, as long as my data is safe, secure, and protected and SLAs are met. Likewise, I consult and advise for organizations where tape still exists yet its role is changing, the same as with those using disk and cloud.

    Storage I/O data center image

    I am not ready to adopt the singular view that tape is dead, as I know too many environments that are still using it. However, I agree that its role is changing, thus I am not part of the tape cheerleading camp.

    On the other hand, I am a fan of using disk-based data protection along with cloud in new and creative ways (including for my own use) as part of modernizing data protection. Although I see disk as having a very bright and important future beyond what it is being used for today, I am not ready to join the chants of tape is dead either.

    Does that mean I can’t decide or don’t want to pick a side? NO

    It means that I do not have to, nor should anyone have to, choose a side. Instead, look at your options: what are you trying to do, and how can you leverage different things, techniques, and tools to maximize your return on innovation? If that means tape is being phased out of your organization, good for you. If that means there is a new or different role for tape in your organization, co-existing with disk, then good for you.

    If somebody tells you that tape sucks and that you are dumb and stupid for using it, without giving any informed basis for those comments, then call them dumb and stupid, requesting they come back when they can learn more about your environment, needs, and requirements, ready to have an informed discussion on how to move forward.

    Likewise, if you can make an informed value proposition on why and how to migrate to new ways of modernizing data protection without having to stoop to the tape is dead argument, or can cite some relevant research, good for you; start telling others about it.

    OTOH, if you need to use FUD and hype on why tape is dead, why it sucks or is bad, at least come up with some new and relevant facts, third-party research, arguments, or value propositions.

    You can read more about tape and its changing role at tapeisalive.com or Tapesummit.com.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    Industry trends and perspectives: SNW 2012 Rapping with Dave Raffo of SearchStorage

    This is the seventh (here is the first, second, third, fourth, fifth and sixth) in a series of StorageIO industry trends and perspective audio blog and podcast discussions from Storage Networking World (SNW) Fall 2012 in Santa Clara, California.

    StorageIO industry trends cloud, virtualization and big data

    Given how conference conversations tend to occur in the hallways, lobbies, and bar areas of venues, what better place to have candid conversations with people from throughout the industry, some you know and some you will get to know better.

    In this episode, my co-host Bruce Rave aka Bruce Ravid of Ravid and Associates (twitter @brucerave) meets up with Senior News Director Dave Raffo of TechTarget and SearchStorage in the SNW trade show expo hall. Our conversation covers past and present SNWs along with other industry conferences, industry trends, software defined buzzwords, Green Bay Packers smack, and more.

    Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Dave, Bruce and myself.

    StorageIO podcast

    Watch (and listen) for more StorageIO industry trends and perspectives audio blog post podcasts from SNW and other upcoming events. Also be sure to check out other related podcasts, videos, posts, tips, and industry commentary at StorageIO.com and StorageIOblog.com.

    Enjoy listening to Rapping with Dave Raffo of SearchStorage from the Fall SNW 2012 podcast.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Spring (May) 2012 StorageIO newsletter

    StorageIO Newsletter Image
    Spring (May) 2012 Newsletter

    Welcome to the Spring (May) 2012 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the Fall (December) 2011 edition.

    You can get access to this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions.

    Click on the following links to view the Spring (May) 2012 edition as HTML or PDF, or go to the newsletter page to view previous editions.

    You can subscribe to the newsletter by clicking here.

    Enjoy this edition of the StorageIO newsletter, let me know your comments and feedback.

    Nuff said for now

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    Various cloud, virtualization, server, storage I/O polls

    The following are a collection of ongoing industry trends and perspectives polls pertaining to server, storage, I/O, networking, cloud, virtualization, and data protection (backup, archive, BC and DR) among other related themes and topics.

    In addition to those listed below, check out the comments section where additional polls are added over time.

    Storage I/O Industry Trends and Perspectives

    Here is a link to a poll as a follow-up to a recent blog post Are large storage arrays dead at the hands of SSD? (also check these posts pertaining to storage arrays and SSD and flash SSD’s emerging role).

    Poll: Are large storage arrays' days numbered?

    Poll: What’s your take on magnetic tape storage?

    Poll: What do you think of IT clouds?

    Poll: Who is responsible for cloud storage data loss?

    Poll: What are the most popular Zombie technologies?

    Poll: What’s your take on OVA and other alliances?

    Poll: What is the most common form or concern of vendor lockin?

    Poll: Who is responsible for, or preventing vendor lockin?

    Poll: Is vendor lockin a good or bad thing?

    Poll: Is IBM V7000 relevant?

    Poll: What is your take on EMC and NetApp on similar tracks or paths?

    Poll: What’s your take on RAID still being relevant?

    Poll: What do you see as barriers to converged networks?

    Poll: Who are you?

    Poll: What is your preferred converged network?

    Poll: What is your converged network status?

    Poll: Are converged networks in your future?

    Poll: What do you think were top 2009 technologies, events or vendors?

    Poll: What technologies, events, products or vendors did not live up to 2009 predictions?

    Poll: What do you think of IT clouds?

    Poll: What is your take on the new FTC blogger disclosure guidelines?

    Poll: Is RAID dead?

    Poll: When will you deploy Windows 7? Note: I upgraded all my systems to Windows 7 during summer of 2011

    Poll: EMC and Cisco VCE, what does it mean?

    Poll: Is IBM XIV still relevant?

    Note: Feel free to share, use, and make reference to the above polls and their results; however, please remember to attribute the source.

    Ok, nuff said for now

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    If March 31st is backup day, don't be fooled by restore on April 1st

    With March 31st as World Backup Day, hopefully some will keep recovery and restoration in mind so as not to be fooled on April 1st.

    Lost data

    When it comes to protecting data, the threat may not be a headline news disaster such as an earthquake, fire, flood, hurricane, or act of man, but rather something as simple as accidentally overwriting a file, not to mention a virus or other more likely to occur problems. Depending upon whom you ask, some will say backup or saving data is more important, while others will stand by the position that it is recovery or restoration that matters. Without one, the other is not practical; they need each other, and both need to be done as well as tested to make sure they work.
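
    Testing restoration can be as simple as restoring to a scratch location and comparing checksums against the source; a minimal sketch (function names and the trial-restore workflow are illustrative, not from any particular backup tool):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in 1 MB chunks so large backups need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(original: Path, restored: Path) -> bool:
    """A backup only counts once a trial restore matches the original."""
    return sha256_of(original) == sha256_of(restored)
```

    Running a check like this on a sample of files after each test restore catches silent corruption that a "backup completed successfully" status message never will.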

    Just the other day I needed to restore a file that I had accidentally overwritten, and as luck would have it, my bad local copy had also just overwritten my local backup. However, I was able to pull an earlier version from my cloud provider, which gave me a good opportunity to test and try some different things. In the course of testing, I did find some things that have since been updated, as well as some things to optimize for the future.

    Destroyed data

    My opinion is that if not used properly, including ignoring best practices, any form of data storage medium or media as well as software could result in, or be blamed for, data loss. Some people have lost data as a result of using cloud storage services, just as other people have lost data or access to information on other storage mediums and solutions. For example, data has been lost on cloud, tape, Hard Disk Drives (HDD), Solid State Devices (SSD), Hybrid HDDs (HHDD), RAID and non-RAID, local and remote, and even optical based storage systems large and small. In some cases, there have been errors or problems with the medium or media; in other cases, storage systems have lost access to, or lost, data due to hardware, firmware, software, or configuration issues, including human error among other causes.

    Now is the time to start thinking about modernizing data protection, and that means more than simply swapping out media. Data protection modernization the past several years has been focused on treating the symptoms of downstream problems at the target or destination. This has involved swapping out or moving media around and applying data footprint reduction (DFR) techniques downstream to give near-term tactical relief, as has been the case with backup, restore, BC, and DR for many years.

    The focus is starting to expand to how to address the source of the problem, which is an expanding data footprint upstream, using different data footprint reduction tools and techniques. This also means using different metrics, including keeping performance and response time in perspective as part of reduction rates vs. ratios, while leveraging different techniques and tools from the data footprint reduction tool box. In other words, it's time to stop swapping out media like changing tires that keep going flat on a car; find and fix the problem, and change the way (and when) data is protected to cut the impact downstream.
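
    Part of using different metrics is keeping reduction ratios (space) and reduction rates (time) separate, since a great ratio achieved too slowly can still blow a backup window. A minimal sketch with illustrative numbers:

```python
def reduction_ratio(logical_bytes: float, physical_bytes: float) -> float:
    """Space efficiency: logical bytes represented per physical byte stored."""
    return logical_bytes / physical_bytes

def reduction_rate(logical_bytes: float, seconds: float) -> float:
    """Performance: how fast data is ingested and reduced (bytes per second)."""
    return logical_bytes / seconds

# Illustrative: 10 TB of backup data deduped down to 1 TB over a one-hour window
logical, physical, elapsed = 10e12, 1e12, 3600
print(f"ratio {reduction_ratio(logical, physical):.0f}:1")        # 10:1
print(f"rate  {reduction_rate(logical, elapsed) / 1e9:.2f} GB/s") # ~2.78 GB/s
```

    The same 10:1 ratio delivered over four hours instead of one is a quarter of the rate, which is why both numbers belong in any DFR comparison.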

    Here is a link to a free download of chapter 5 (Data Protection: Backup/Restore and Business Continuance / Disaster Recovery) from my new book Cloud and Virtual Data Storage Networking (CRC Press).

    Cloud and Virtual Data Storage Networking
    Intel Recommended Reading List

    Additional related links to read more and sources of information:

    Choosing the Right Local/Cloud Hybrid Backup for SMBs
    E2E Awareness and insight for IT environments
    Poll: What Do You Think of IT Clouds?
    Convergence: People, Processes, Policies and Products
    What do VARs and Clouds as well as MSPs have in common?
    Industry adoption vs. industry deployment, is there a difference?
    Cloud conversations: Loss of data access vs. data loss
    Clouds and Data Loss: Time for CDP (Commonsense Data Protection)?
    Clouds are like Electricity: Don't be scared
    Wit and wisdom for BC and DR
    Criteria for choosing the right business continuity or disaster recovery consultant
    Local and Cloud Hybrid Backup for SMBs
    Is cloud disaster recovery appropriate for SMBs?
    Laptop data protection: A major headache with many cures
    Disaster recovery in the cloud explained
    Backup in the cloud: Large enterprises wary, others climbing on board
    Cloud and Virtual Data Storage Networking (CRC Press, 2011)
    Enterprise Systems Backup and Recovery: A Corporate Insurance Policy

    Take a few minutes out of your busy schedule and check to see if your backups and data protection are working, as well as make sure to test restoration and recovery to avoid an April fools type surprise. One last thing, you might want to check out the data storage prayer while you are at it.

    Ok, nuff said for now.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    Researchers and marketers don't agree on future of nand flash SSD

    Marketers, particularly those involved with anything resembling Solid State Devices (SSD), will tell you SSD is the future, as will some researchers along with their fans and pundits. Some will tell you that the future only has room for SSD, with the current flavor du jour being nand flash (both Single Level Cell aka SLC and Multi Level Cell aka MLC), and that any other form of storage medium (e.g. Hard Disk Drives or HDD and tape summit resources) is dead and to avoid wasting your money on them.

    Of course others and their fans or supporters who do not have an SSD play or product will tell you to forget about SSDs; they are not ready yet.

    Then there are those who take no sides per se, simply providing comments and perspectives along with things to be considered, which also get used by others to spin stories for or against.

    For the record, I have been a fan and user of various forms of SSD along with other variations of tiered storage mediums, using them where they fit best, for several decades as a customer in IT, as a vendor, and as an analyst and advisory consultant. Thus my perspective and opinion is that SSDs do in fact have a very bright future. However, I also believe that other storage mediums are not dead yet, although their roles are evolving while their technologies continue to be developed. In other words, use the right technology and tool, packaged and deployed in the best, most effective way for the task at hand.

    Memory and tiered storage hierarchy
    Memory and tiered storage hierarchy

    Consequently, while some SSD vendors, their fans, supporters, pundits, and others might be put off by some recent UCSD research that does not paint SSD, and in particular nand flash, in the best long-term light, it caught my attention and here is why. First, I have already seen in different venues where some are using the research as a tool, club, or weapon against SSD and in particular nand flash, which should be no surprise. Second, I have also seen those who do not agree with the research at best dismiss the findings. Others are using it as a conversation or topic piece for their columns or other venues, such as here.

    The reason the UCSD research caught my eye was that it appeared to be looking at how nand SSD technology will evolve from where it is today to where it will be in ten years or so.

    While ten years may seem like a long time, just look back at how fast things evolved over the past decade. Granted, the UCSD research is open to discussion, debate, and dismissal, as is clear in the comments of this article here. However, the research does give a counterpoint or perspective to some of the hype, which can mean that somewhere between the two extremes exists reality and where things are headed or need to be discussed. While I do not agree with all the observations or opinions of the research, it does give stimulus for discussing things, including best practices around deployment vs. simply talking about adoption.

    It has taken many decades for people to become comfortable or familiar with the pros and cons of HDD or tape for that matter.

    Likewise some are familiar (good or bad) with DRAM-based SSDs of earlier generations. On the other hand, while many people use various forms of nand flash SSD, ranging from what is inside their cell phones or SD cards for cameras to USB thumb drives to SSDs in drive form factors, on PCIe cards, or in storage systems and appliances, there is still an evolving comfort and confidence level for business and enterprise storage use. Some have embraced it, some have dismissed it, and many if not most are intrigued, wanting to know more, using nand flash SSD in some shape or form while gaining confidence.

    Part of gaining confidence is moving beyond the industry hype, looking at and understanding the pros and cons and how to leverage or work around the constraints. A long time ago a wise person told me that it is better to know the good, bad, and ugly about a product, service, or technology so that you can leverage the best; configure, plan, and manage around the bad; and avoid or minimize the ugly. Based on that philosophy, I find many IT customers and even some VARs and vendors wanting to know the good, the bad, and the ugly, not to hang a vendor or their technology and products out to dry, rather so that they can be comfortable knowing when, where, why, and how to use them most effectively.

    Industry Trends and Perspectives

    Granted, to get some of the not so good information you may need an NDA (Non Disclosure Agreement) or other confidentiality discussions, as after all, what vendor or solution provider wants to show or let anything less than favorable out into the blogosphere, twittersphere, Google+, tabloids, news sphere, or other competitive landscape venues?

    Ok, let's bring this back to the UCSD research report titled The Bleak Future of NAND Flash Memory.

    UCSD research report: The Bleak Future of NAND Flash Memory
    Click here or on the above image to read the UCSD research report

    I'm not concerned that the UCSD research was less than favorable, as some others might be; after all, it is looking out into the future, and if there is cause for concern, it provides a glimpse of what to keep an eye on.

    Likewise, looking back, the research report could be taken as simply a barometer of what could happen if no improvements or new technologies evolve. For example, the HDD would have hit the proverbial brick wall, also known as the superparamagnetic barrier, many years ago if new recording methods and materials had not been deployed, including a shift to perpendicular recording, something that was recently added to tape.

    Tomorrow's SSDs and storage mediums will still be based on nand flash, including SLC, MLC, and eMLC along with other variants, not to mention phase change memory (PCM) and other possible contenders.

    Today's SSDs have shifted from being DRAM-based with HDD or even flash-based persistent backing storage to being nand flash-based, both SLC and MLC, with enhanced or enterprise MLC (eMLC) appearing. Likewise, the density of SSDs continues to increase, meaning more data packed into the same die or footprint, and more dies stacked in a chip package to boost capacity while decreasing cost. However, what is also happening behind the scenes is a big differentiator with SSDs, and that is the quality of the firmware and low-level page management at the flash translation layer (FTL). Hence the saying that anybody with a soldering iron and the ability to pull together off-the-shelf FTLs and packaging can create some form of an SSD. How effective a product will be is based on the intelligence and robustness of the combination of the dies, FTL, controller, and associated firmware and device drivers, along with other packaging options, plus the testing, validation, and verification they undergo.
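
    As a back-of-envelope illustration of why FTL quality matters, flash endurance scales inversely with write amplification (WA), which a good FTL keeps low. The figures below are illustrative, not from any vendor datasheet:

```python
def endurance_tbw(capacity_gb: float, pe_cycles: int, write_amp: float) -> float:
    """Terabytes the host can write before wear-out: capacity * P/E cycles / WA."""
    return capacity_gb * pe_cycles / write_amp / 1000.0

def lifetime_years(tbw: float, host_gb_per_day: float) -> float:
    """How many years the endurance budget lasts at a given daily write load."""
    return tbw * 1000.0 / host_gb_per_day / 365.0

# Illustrative 400 GB MLC drive rated for 3,000 P/E cycles, writing 200 GB/day:
for wa in (1.5, 5.0):  # a good FTL keeps WA low; a poor one amplifies every write
    tbw = endurance_tbw(400, 3000, wa)
    print(f"WA {wa}: {tbw:.0f} TBW, ~{lifetime_years(tbw, 200):.1f} years")
```

    The same dies last roughly three times longer behind the better FTL in this example, which is one reason two SSDs built from identical flash can behave very differently.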

    Various packaging options and where SSD can be deployed
    Various SSD locations, types, packaging and usage scenario options

    Good SSD vendors and solution providers, I believe, will be able to discuss your concerns around endurance, duty cycles, data integrity, and other related topics to establish confidence regarding current and future issues, granted you may have to go under NDA to gain that insight. On the other hand, those who feel threatened, or are not able or interested in addressing or demonstrating confidence for the long haul, will be more likely to dismiss studies, research, reports, opinions, or discussions that dig deeper into creating confidence via understanding of how things work so that customers can more fully leverage those technologies.

    Some will view and use reports such as the one from UCSD as a club or weapon against SSD, and in particular against nand flash, to help their cause or campaign, while others will use it to stimulate controversy and page hit views. My reason for bringing up the topic and discussion is to stimulate thinking and help increase awareness of, and confidence in, technologies such as SSD near and long-term. Regardless of whether your view is that SSD will replace HDD, or that they will continue to coexist as tiered storage mediums into the future, gaining confidence in the technologies, along with when, where, and how to use them, are important steps in shifting from industry adoption to customer deployment.

    What say you?

    Is SSD the best thing, and are you dumb or foolish if you do not embrace it totally (the fan, pundit, or cheerleader view)?

    Or is SSD great when and where used in the right place, so embrace it?

    How will SSD continue to evolve including nand and other types of memories?

    Are you comfortable with SSD as a long-term data storage medium, or, for today, is it simply a good way to address performance bottlenecks?

    On the other hand, is SSD interesting, however you are not comfortable with or confident in the technology, yet you want to learn more; in other words, a skeptic's view?

    Or perhaps the true cynic view, which is that SSDs are nothing but the latest buzzword bandwagon fad technology?

    Ok, nuff said for now, other than here is some extra related SSD material:
    SSD options for Virtual (and Physical) Environments: Part I Spinning up to speed on SSD
    SSD options for Virtual (and Physical) Environments, Part II: The call to duty, SSD endurance
    Part I: EMC VFCache respinning SSD and intelligent caching
    Part II: EMC VFCache respinning SSD and intelligent caching
    IT and storage economics 101, supply and demand
    2012 industry trends perspectives and commentary (predictions)
    Speaking of speeding up business with SSD storage
    New Seagate Momentus XT Hybrid drive (SSD and HDD)
    Are Hard Disk Drives (HDDs) getting too big?
    Industry adoption vs. industry deployment, is there a difference?
    Data Center I/O Bottlenecks Performance Issues and Impacts
    EMC VPLEX: Virtual Storage Redefined or Respun?
    EMC interoperability support matrix

    Cheers
    gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio
