Microsoft Azure September 2017 Software Defined Data Infrastructure Updates

September was a busy month for data infrastructure topics, and for Microsoft in particular. Wrapping up September was Microsoft Ignite, where announcements spanned Azure, Azure Stack, Windows, O365, AI, IoT and development tools, adding to others from earlier in the month. As part of the September announcements, Microsoft released a new version of Windows Server (e.g. 1709) with a focus on enhanced container support. Note that if you have deployed Storage Spaces Direct (S2D) and are looking to upgrade to 1709, do your homework, as there are some caveats that will cause you to wait for the next release. Also note that there had been new storage related enhancements slated for the September update; however, at Ignite those were announced as being pushed to the next semi-annual release. Learn more here and also here.

Azure Files and NFS

Microsoft made several Azure file storage announcements and public previews during September, including native NFS based file sharing as a companion to existing Azure Files, along with the public preview of the new Azure File Sync service. The native NFS based file sharing (public preview announced, with the service slated to be available in 2018) is a software defined storage deployment of NetApp ONTAP running on top of Azure data infrastructure, including virtual machines that leverage the underlying Azure storage.

Note that the new native NFS offering is in addition to the earlier native Azure Files, accessed via HTTP REST and SMB3, which enables sharing of files inside the Azure public cloud as well as externally from Windows and Linux platforms, including on premises. Learn more about Azure Storage and Azure Files here.
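
While SMB access needs no code at all (map the share like any other), the REST path is easy to script. Here is a minimal sketch using the azure-storage-file-share Python SDK (a later SDK than existed during this preview); the account, share and file names are placeholders:

```python
# pip install azure-storage-file-share
from azure.storage.fileshare import ShareFileClient

# Placeholder connection string; substitute your storage account's own
conn_str = (
    "DefaultEndpointsProtocol=https;AccountName=myaccount;"
    "AccountKey=...;EndpointSuffix=core.windows.net"
)

# Point at a file on an existing share (share and path are hypothetical)
file_client = ShareFileClient.from_connection_string(
    conn_str, share_name="myshare", file_path="docs/report.txt"
)

# Upload a local file to the share over REST
with open("report.txt", "rb") as src:
    file_client.upload_file(src)

# Download it back
with open("report-copy.txt", "wb") as dst:
    dst.write(file_client.download_file().readall())
```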

Azure File Sync (AFS)

Azure File Sync (AFS) has now entered public preview. While users of Windows-based systems have been able to access and share Azure Files in the past, AFS is something different. I have used AFS for some time now across several private preview iterations, seeing how it has evolved along with how Microsoft listens to and incorporates feedback into the solution.

Let's take a look at what AFS is, what it does, how it works, and where and when to use it, among other considerations. With AFS, different and independent systems can now synchronize file shares through Azure. Currently the AFS preview supports Windows Server 2012 and 2016, whether bare metal, virtual, or cloud based. For example, I have had bare metal, virtual (VMware) and cloud (Azure and AWS) servers participating in file sync activities using AFS.

Not to be confused with other storage related uses of the AFS acronym (the Andrew File System among others), the new Microsoft Azure File Sync service enables files to be synchronized across different servers via Azure. This is different from the previously available Azure File Share service, which enables files stored in Azure cloud storage to be accessed via Windows and Linux systems within Azure, as well as natively by Windows platforms outside of Azure. Likewise, it is different from the recently announced Microsoft Azure native NFS file sharing service in partnership with NetApp (e.g. powered by ONTAP Cloud).

AFS can be used to synchronize across different on-premises as well as cloud servers, which can also function as caches. What this means is that Windows work folders served via different on-premises servers can have their files synchronized across Azure to other locations. Besides providing cache, cloud tiering and enterprise file sync share (EFSS) capabilities, AFS has robust optimization for data movement to and from the cloud and across sites, along with management tools including diagnostics, performance and activity monitoring, among others.

Check out the AFS preview, including planning for an Azure File Sync (preview) deployment (Microsoft Docs), and for those who have Yammer accounts, here is the AFS preview group link.

Microsoft Azure Blob Events via Microsoft

Azure Blob Storage Tiering and Event Triggers

Two other Azure storage features in public preview are blob tiering (for cold archiving) and blob event triggers. As their names imply, blob tiering enables automatic migration of dormant data from active to cold, inactive storage. Event triggers are policy rules (code) that get executed when a blob is stored, carrying out various functions or tasks. Here is an overview of blob events along with a quick start from Microsoft here.
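
To make the tiering idea concrete, here is a minimal sketch using the later azure-storage-blob v12 Python SDK to demote a dormant blob to the Archive tier; the account, container and blob names are placeholders:

```python
# pip install azure-storage-blob
from azure.storage.blob import BlobClient

# Placeholder names for illustration
blob = BlobClient.from_connection_string(
    conn_str="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...",
    container_name="backups",
    blob_name="2017/09/monthly-full.bak",
)

# Move the blob to the cold Archive tier; archived blobs must be
# rehydrated (set back to Hot or Cool) before they can be read again.
blob.set_standard_blob_tier("Archive")
```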

Keep in mind that not all blob and object storage is the same; a good example is Microsoft Azure, which has page, block and append blobs. Append blobs are similar to the objects you may be familiar with from other services. Here is a Microsoft overview of the various Azure blob types, including what to use when.
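
As a quick illustration of how the blob types differ in practice, here is a sketch of an append blob (again using the later v12 Python SDK, with placeholder names), which can only be extended at its end, making it a natural fit for log-style workloads:

```python
# pip install azure-storage-blob
from azure.storage.blob import BlobClient

log_blob = BlobClient.from_connection_string(
    conn_str="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...",
    container_name="logs",
    blob_name="app/activity.log",
)

# An append blob is created once, then grown by appending blocks
log_blob.create_append_blob()
log_blob.append_block(b"event one\n")
log_blob.append_block(b"event two\n")
```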

Project Honolulu and Windows Server Enhancements

Microsoft has evolved from the command prompt (e.g. early MS-DOS) to the GUI with Windows, and back to the command line with PowerShell, which left some thinking there was no longer a need for a GUI. Even though Microsoft has extended its CLI with PowerShell spanning Windows platforms and Azure, along with adding a Linux command shell, there are those who still want or need a GUI. Project Honolulu is the effort to bring GUI based management back to Windows in a simplified way for what had been headless, desktop-less deployments (e.g. Nano, Server Core). Microsoft previously had Server Management Tools (SMT) accessible via the Azure Portal, but that service has been discontinued.


Project Honolulu Image via Microsoft.com

This is where Project Honolulu comes into play for managing Windows Server platforms. What this means is that those who don't want to rely on, or have, a PowerShell dependency now have an alternative option. Learn more about Project Honolulu here and here, and download the public preview here.

Storage Spaces Direct (S2D) Kepler Appliance

Data infrastructure provider DataOn has announced a new turnkey Windows Server 2016 Storage Spaces Direct (S2D) powered hyper-converged infrastructure solution (a productization of project Kepler-47) with two-node small form factor servers (in partnership with MSI). How small? Think suitcase or airplane roller-board carry-on luggage size.

What this means is that you can get into the converged, hyper-converged software defined storage game with Windows-based servers supporting Hyper-V virtual machines (Windows and Linux) including hardware for around $10,000 USD (varies by configuration and other options).

Azure and Microsoft Networking News

Speaking of the Microsoft Azure public cloud, if you ever wonder what the network that enables the service looks like, along with some of its software defined networking (SDN) and network function virtualization (NFV) objectives, have a look at this piece over at Data Center Knowledge.

In related Windows, Azure and other focus areas, Microsoft, Facebook and Telxius have completed the installation of a high-capacity subsea network cable across the Atlantic Ocean. What's so interesting from a data infrastructure, cloud or legacy server storage I/O and data center perspective? The new network was built by the companies themselves, versus the past model where a telco provider consortium built the cable and then sold or leased the bandwidth to others.

This new network is about 4,000 miles long, running at depths of up to 11,000 feet, and with current optics supports 160 terabits (e.g. 20 terabytes) per second, capable of supporting 71 million HD videos streamed simultaneously. To put things into perspective, some residential fiber optic services can operate best case at up to 1 gigabit per second (line speed), and in an asymmetrical fashion (faster downloads than uploads). Granted, there are some 10 Gbit based services out there, more common for commercial than residential use. Simply put, this is a large increase in bandwidth across the Atlantic for Microsoft and Facebook to support growing demands.
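
For those who like to check the math, the stated line rate converts to the capacity and per-stream figures above as follows (a simple sanity-check calculation):

```python
# Back-of-envelope math for the subsea cable figures
terabits_per_sec = 160
bytes_per_sec = terabits_per_sec * 1e12 / 8          # 8 bits per byte
print(bytes_per_sec / 1e12)                          # 20.0 TBytes/sec

streams = 71_000_000                                 # simultaneous HD videos
mbps_per_stream = terabits_per_sec * 1e12 / streams / 1e6
print(round(mbps_per_stream, 2))                     # ~2.25 Mbps per stream
```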

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

What This All Means

Microsoft announced a new release of Windows Server at Ignite as part of its new semi-annual release cycle. This latest version of Windows Server is optimized for containers. In addition to Windows Server enhancements, Microsoft continues to extend Azure and related technologies for public, private and hybrid cloud as well as software defined data infrastructures.

By the way, if you have not heard, it's Blogtober; check out some of the other blogs and posts occurring during October here.

Ok, nuff said, for now.
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

WekaIO Matrix Scale Out Software Defined Storage SDS

Updated 2/11/2018

WekaIO Matrix is a scale out software defined storage (SDS) solution.

This Server StorageIO Industry Trends Perspective report looks at common issues, trends, and how to address different application server storage I/O challenges. In this report, we look at WekaIO Matrix, an elastic, flexible, highly scalable, easy to use (and manage) software defined (e.g. software based) storage solution. WekaIO Matrix enables flexible, elastic scaling with stability and without compromise.

Matrix is a new scale out software defined storage solution that:

  • Installs on bare metal, virtual or cloud servers
  • Has POSIX, NFS, SMB, and HDFS storage access
  • Adaptable performance for little and big data
  • Tiering of flash SSD and cloud object storage
  • Distributed resilience without compromise
  • Removes complexity of traditional storage

Where To Learn More

View additional SDS and related topics via the following links.

Additional learning experiences, along with common questions (and answers) as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Read more about WekaIO Matrix in this (free, no registration required) Server StorageIO Industry Trends Perspective (ITP) Report compliments of WekaIO.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

August Server StorageIO Update Newsletter – NVM and Flash SSD Focus

Volume 15, Issue VIII

Hello and welcome to this August 2015 Server StorageIO update newsletter. Summer is wrapping up here in the northern hemisphere, which means the fall conference season has started, holidays are in progress, and back-to-school time is near. I have been spending my summer working on various things involving server, storage, and I/O networking hardware, software and services, from cloud to containers, virtual and physical. This includes OpenStack, VMware vCloud Air, AWS, Microsoft Azure and GCS among others, as well as new versions of Microsoft Windows and Servers, Non Volatile Memory (NVM) including flash SSD, NVM Express (NVMe), databases, data protection, software defined, cache, micro-tiering and benchmarking using various tools, among other things (some are still under wraps).

Enjoy this edition of the Server StorageIO update newsletter and watch for new tips, articles, StorageIO lab report reviews, blog posts, videos and podcasts, along with in-the-news commentary appearing soon.

Cheers GS

In This Issue

  • Feature Topic
  • Industry Trends News
  • Commentary in the news
  • Tips and Articles
  • StorageIOblog posts
  • Videos and Podcasts
  • Events and Webinars
  • Recommended Reading List
  • Industry Activity Trends
  • Server StorageIO Lab reports
  • New and Old Vendor Update
  • Resources and Links
    Feature Topic – Non Volatile Memory including NAND flash SSD

    Via Intel: Click above image to view the history of memory

    This month's feature topic theme is Non Volatile Memory (NVM), which includes technologies such as NAND flash, commonly used in Solid State Devices (SSDs) storage today, as well as in USB thumb drives, mobile and hand-held devices among many other uses. NVM spans servers, storage and I/O devices, along with mobile and handheld among many other technologies. In addition to NAND flash, other forms of NVM include Non Volatile Random Access Memory (NVRAM) and Read Only Memory (ROM), along with some emerging new technologies including the recently announced Intel and Micron 3D XPoint among others.

    • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
    • What should I consider when using SSD cloud? (Via SearchCloudStorage)
    • MSP CMG, Sept. 2014 Presentation: Flash back to reality – Myths and Realities, Flash and SSD industry trends perspectives plus benchmarking tips (PDF)
    • Selecting Storage: Start With Requirements (Via NetworkComputing)
    • Spot The Newest & Best Server Trends (Via Processor)
    • Market ripe for embedded flash storage as prices drop (Via Powermore (Dell))

    Continue reading more about NVM, NVMe, NAND flash, SSD Server and storage I/O related topics at www.thessdplace.com as well as about I/O performance, monitoring and benchmarking tools at www.storageperformance.us.

     

    StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past blog posts here

    Server Storage I/O Industry Activity Trends (Cloud, Virtual, Physical)

    StorageIO news (image licensed for use from Shutterstock by StorageIO)

    • PMC Announces NVMe SSD Controllers (Via TomsITpro)
    • New SATA SSD powers elastic cloud agility for CSPs (Via Cbronline)
    • Toshiba Solid-State Drive Family Features PCIe Technology (Via Eweek)
    • SanDisk aims CloudSpeed Ultra SSD at cloud providers (Via ITwire)
    • Everspin & Aupera reveal MRAM Module M.2 Form Factor (Via BusinessWire)
    • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
    • Seagate Grows Its Nytro Enterprise Flash Storage Line (Via InfoStor)
    • New SAS Solid State Drive From Seagate Micron Alliance (Via Seagate)
    • Samsung ups the SSD ante with faster, higher capacity drives (Via ITworld)

    View other recent news and industry trends here

    StorageIO Commentary in the news

    StorageIO news (image licensed for use from Shutterstock by StorageIO)
    Recent Server StorageIO commentary and industry trends perspectives about news, activities tips, and announcements.

    • Processor: Comments on Spot The Newest & Best Server Trends
    • Processor: Comments on A Snapshot Strategy For Backups & Data Recovery
    • EnterpriseStorageForum: Comments on Defining the Future of DR Storage
    • EnterpriseStorageForum: Comments on Top Ten Tips for DR as a Service
    • EnterpriseStorageForum: Comments on NVMe: Golden Ticket for Faster Storage

    View more Server, Storage and I/O hardware as well as software trends comments here

    Vendors you may not have heard of

    Various vendors (and service providers) you may not know or may not have heard about recently.

    • Scala – Scale out storage management software tools
    • Reduxio – Enterprise hybrid storage with data services
    • Jam TreeSize Pro – Data discovery and storage resource analysis and reporting

    Check out more vendors you may know, have heard of, or that are perhaps new on the Server StorageIO Industry Links page here (over 1,000 entries and growing).

    StorageIO Tips and Articles

    Recent Server StorageIO articles appearing in different venues include:

    • IronMountain:  Information Lifecycle Management: Which Data Types Have Value?
      It’s important to keep in mind that on a fundamental level there are three types of data: information that has value, information that does not have value, and information that has unknown value. Data value can be measured along performance, availability, capacity and economic attributes, which define how the data gets managed across different tiers of storage. Read more here.
    • EnterpriseStorageForum:  Is Future Storage Converging Around Hyper-Converged?
      Depending on who you talk or listen to, hyper-converged storage is either the future of storage, or it is a hyped niche market that is not for everybody, particularly not larger environments. How converged is the hyper-converged market? There are many environments that can leverage CI along with HCI, CiB or other bundled solutions. Granted, not all of those environments will converge around the same CI, CiB, HCI or pod solution bundles, as everything is not the same in most IT environments and data centers. Not all markets, environments or solutions are the same. Read more here.

    Check out these resources and links on technology, techniques, trends as well as tools. View more tips and articles here

    StorageIO Videos and Podcasts

    StorageIO podcasts are also available at StorageIO.tv

    StorageIO Webinars and Industry Events

    Server Storage I/O Workshop Seminars
    Nijkerk Netherlands October 13-16 2015

    VMworld August 30-September 3 2015

    See additional webinars and other activities on the Server StorageIO Events page here.

    From StorageIO Labs

    Research, Reviews and Reports

    Enmotus FuzeDrive (Server based Micro-Tiering)
    Enmotus FuzeDrive
    • Micro-tiering of reads and writes
    • FuzeDrive for transparent tiering
    • Dynamic tiering with selectable options
    • Monitoring and diagnostics tools
    • Transparent to operating systems
    • Hardware transparent (HDD and SSD)
    • Server I/O interface agnostic
    • Optional RAM cache and file pinning
    • Maximize NVM flash SSD investment
    • Complement other SDS solutions
    • Use for servers or workstations

    Enmotus FuzeDrive provides micro-tiering, boosting performance (reads and writes) of storage attached to physical bare metal servers, virtual and cloud instances, including Windows and Linux operating systems, across various applications. In the simple example above, five separate SQL Server databases (260GB each) were placed on a single 6TB HDD. A TPCC workload was run concurrently against all databases with various numbers of users. One workload used a single 6TB HDD (blue) while the other used a FuzeDrive (green) comprised of a 6TB HDD and a 400GB SSD, showing basic micro-tiering improvements.

    View other StorageIO lab review reports here

    Server StorageIO Recommended Reading List

    The following are various recommended reading including books, blogs and videos. If you have not done so recently, also check out the Intel Recommended Reading List (here) where you will also find a couple of my books.

    Get What's Yours via Amazon.com
    While not a technology book, you do not have to be at or near retirement age to be planning for retirement. Some of you may already be at or near retirement age; for others, it's time to start planning or refining your plans. A friend recommended this book and I'm recommending it to others. It's pretty straightforward, and you might be surprised how much money people may be leaving on the table! Check it out here at Amazon.com.

    Server StorageIO Industry Resources and Links

    Check out these useful links and pages:

    storageio.com/links
    objectstoragecenter.com
    storageioblog.com/data-protection-diaries-main/
    storageperformance.us
    thenvmeplace.com
    thessdplace.com
    storageio.com/raid
    storageio.com/ssd

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Summary, EMC VMAX 10K, high-end storage systems stayin alive

    StorageIO industry trends cloud, virtualization and big data

    This is a follow-up companion post to the larger industry trends and perspectives series from earlier today (Part I, Part II and Part III) pertaining to today's VMAX 10K enhancements and other announcements by EMC, and the industry myth of whether large storage arrays or systems are dead.

    The enhanced VMAX 10K scales from a couple of dozen up to 1,560 HDDs (or a mix of HDDs and SSDs). There can be a mix of 2.5 inch and 3.5 inch devices in different drive enclosures (DAE). There can be 25 SAS based 2.5 inch drives (HDD or SSD) in a 2U enclosure (see figure with cover panels removed), or 15 3.5 inch drives (HDD or SSD) in a 3U enclosure. As mentioned, a system can be all 2.5 inch (including vault drives) for up to 1,200 devices, all 3.5 inch drives for up to 960 devices, or a mix of 2.5 inch (2U DAE) and 3.5 inch (3U DAE) for a total of up to 1,560 drives.

    Image of EMC 2U and 3U DAE for VMAX 10K via EMC
    Image courtesy EMC

    Note carefully in the figure (courtesy of EMC) that the 2U 2.5 inch DAE and 3U 3.5 inch DAE, along with the VMAX 10K, are actually mounted in a third-party cabinet or rack, support for which is part of today's announcement.

    Also note that the DAEs are still EMC; however, as part of today's announcement, certain third-party cabinets or enclosures, such as might be found in a collocation (colo) or other data center environment, can be used instead of EMC cabinets. The VMAX 10K can, however, like the VMAX 20K and 40K, virtualize external storage, similar to what has been available from HDS (VSP/USP) and HP branded Hitachi equivalent storage, or using NetApp V-Series or IBM V7000 in a similar way.

    As mentioned in one of the other posts, there are various software functionality bundles available. Note that SRDF is a separate license from the bundles to give customers options including RecoverPoint.

    Check out the three post industry trends and perspectives posts here, here and here.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Storage comments from the field and customers in the trenches

    StorageIO industry trends cloud, virtualization and big data

    When I was in Europe last month presenting sessions at conferences and doing some seminars, I met and spoke with one of the attendees at the StorageExpo Holland event. The person (Hans Breemer) came up to visit with me after one of my presentations, which included SSD is in your future: When, where, with what and how, and Cloud and Virtual Data Storage Networking industry trends and perspectives. Note you can find additional material from various conferences and events on backup, restore, BC, DR and archiving via the resources menu on the StorageIO web site.

    As I always do, I invite attendees to feel free and follow-up via email, twitter, Linked In, Google+ or other venue with questions, comments, discussions and what they are seeing or running into in their environments.

    Some of the many different items discussed during my StorageExpo presentations included:

    Recently Hans followed up, sent me some comments, and asked if I would be willing to share them with others, such as whoever happens to read this. I also suggested to Hans that he start a blog (here is a link to his new blog), and said I would be happy to post his comments for others to see and join in the conversation; they are shown below.

    Hans Breemer wrote:

    Hi Greg,

    we met each other recently at the Dutch Storage Expo after one of your sessions. We briefly discussed the current trends in the storage market, and the “risks” or “threats” (read: challenges) it means to “us”, the storage guys. Often neglected by the sales guys…

    Please allow me a few lines to elaborate a bit more and share some thoughts from the field. :-)

    1. Bigger is not better?

    Each iteration in the new disk technologies (SATA or SAS) means we get less IOPS for the bucks. Pound for pound, that is. Of course the absolute amount of IOPS we can get from a HDD increases all the time. Where 175 IOPS was top speed a few years ago, we sometimes see figures close to 220 IOPS per physical drive now. This looks good in the brochure, just as the increased capacity does. However, what the brochure doesn't tell us is that if we look at the IOPS/capacity ratio, we're walking backwards. A few years ago we could easily sell over 1000 IOPS/TB. Currently we can't anymore. We're happy to reach 500 IOPS/TB. I know this has always been like that. However with the introduction of SATA in the enterprise storage world, I feel things have gotten even worse.
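
    To put that ratio into numbers, here is a quick sketch with purely illustrative drive figures (the capacities and IOPS below are hypothetical, chosen only to show the trend, not taken from any specific environment):

    ```python
    # Illustrative figures only: how the IOPS/TB ratio falls even as
    # per-drive IOPS rises with each drive generation.
    drives = [
        ("older small-capacity 15k HDD", 0.146, 175),  # (name, capacity TB, IOPS)
        ("newer large-capacity 15k HDD", 0.600, 220),
    ]
    for name, capacity_tb, iops in drives:
        print(f"{name}: {iops / capacity_tb:,.0f} IOPS/TB")
    # ~1,199 IOPS/TB vs ~367 IOPS/TB -- walking backwards on the ratio
    ```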

    2. But how about SSD’s then?

    True and agree. In the world of HDD's growing bigger and bigger, we actually need SSD's, and this technology is the way forward from an IOPS perspective. SSD's have a great future ahead of them (despite being with us already for some time). I do doubt that at the moment SSD's already have the economical ability to fill the gap though. They offer many thousands of IOPS, and for dedicated high-end solutions they offer what we weren't able to deliver for decades. More IOPS than you need! But what about the "1000 IOPS/TB" market? Let's call it the middle market.

    3. SSD’s as a lubricant?

    You must have heard every vendor about Adaptive Storage Tiering, Auto Tiering etc. All based on the theorem that most of our IO's come from a relatively small disk section. Thus we can improve the total performance of our array by only adding a few percent of SSD. Smart technology identifies the hot tracks on our disks and promotes these to SSD's. We can even demote cold tracks to big SATA drives. Think green, think ecological footprint, etc. For many applications this works well. Regular Windows servers, file servers, and VMware ESX servers actually seem to like adaptive storage tiering, and I think I know why: a positive tradeoff of using VMDK's. (I might share a few lines about FAST VP do's and don'ts next time if you don't mind.)

    4. How about the middle market then, you might ask? Or, SSD's as a band-aid?

    For the middle market, the above developments are sort of a disaster. Think SAP running on Sun Solaris, think the average Microsoft SQL Server, think Oracle databases. These are the typical applications that need "middle market" IOPS. Many of these applications have a freakish IO pattern. OLTP during daytime, backup in the evening and batch jobs at night. Not to mention end of month runs, DTA (Dev-Test-Acceptance) streets that sleep for two weeks or are constantly upgraded or restored. These applications hardly benefit from "smart technologies". The IO behavior is too random, too unpredictable, leading to saturated SATA pools and EFD's that are hardly doing more IO's than the FC drives they're supposed to relieve. Add more SSD's we're told. Use less SATA we're told. But it hardly works. Recently we acquired a few new Vmax arrays without EFD or FASTVP, for the sole purpose of hosting these typical middle market applications. Affordable, predictable performance. But then again, our existing Vmax 20k had full size 600GB 15krpm drives; with the Vmax 40k we're "encouraged" to use small form factor 600GB 10krpm drives. Again a small step backwards?

    5. The storage tiering debacle.

    Last but not least, some words I'd like to share with you about storage tiering. We're encouraged (again) to sell storage in different tiers. Makes sense. To some extent it does, yes. Host your most IO eager application on expensive, SSD based storage. And host your DTA or other less business critical application on FC or SATA quality HDD's. But what if the less business critical application needs to be backed up in the evening, and while doing so completely saturates your SATA pool? Or what if the Dev server creates just as many IO's as the Prod environment does? People don't seem to care, it seems. To have people realize how many IO's they actually need and use, we are reporting IO graphs for all servers in our environment. Our tiering model is based on IOPS/TB and IO response time.

    Tier X would be expensive, offering 800 IOPS/TB @ avg 10ms
    Tier Y would be the cheaper option offering 400 IOPS/TB @ avg 15 ms

    The next step will be to implement front end controls and actually limit a host to some ceiling, for instance 2 times the limit described in the tier description, thus allowing for peak loads and backups.

    Do we need to? I think so…

    Greg, this small message is slowly turning into a plea. And that is actually what it is, a plea to our storage vendors, and to our evangelists. If they want us to deliver, I feel they should talk to us, and listen to us (and you!).

    Cheers,

    Hans Breemer 

    ps, I love my job, this world and my role to translate promises and demands into solutions that work for my customers. I do take care though not to create solutions that will not work, despite what the brochure said.

    pps, please feel free to share the above if needed.

    Here is my response to Hans:

    Hello Hans good to hear from you and thanks for the comments.

    Great perspectives and in the course of talking with your peers around the world, you are not alone in your thinking.

    Often I see disconnects between customers and vendors. Vendors (often driven by their market research) believe they know what the customer needs and issues are, and many actually do. However, I often see a reliance on market research data with many degrees of separation, as opposed to direct and candid insight. Likewise, some vendors spend more time talking about how they listen to the customer than actually doing so.

    On the other hand, I routinely see customers fall into the trap of communicating wants (nice to haves) instead of articulating needs (what is required). Then there is confusing industry adoption with customer deployment, not to mention concerns over vendor, technology or services lock-in.

    Hope all else is well.

    Cheers
    gs

    Check out Hans's new blog and feel free to leave your comments and perspectives here or via other venues.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Speaking of speeding up business with SSD storage

    Solid state devices (SSD) are a popular topic gaining both industry adoption and customer deployment to speed up storage performance. Here is a link to a recent conversation that I had with John Hillard to discuss industry trends and perspectives pertaining to using SSD to boost performance and productivity for SMB and other environments.

    I/O consolidation from Cloud and Virtual Data Storage Networking (CRC Press) www.storageio.com/book3.html

    SSDs can be a great way for organizations to do I/O consolidation to reduce costs, in place of using many hard disk drives (HDDs) grouped together to achieve a certain level of performance. By consolidating the I/Os off of many HDDs, which often end up under-utilized on a space capacity basis, organizations can boost performance for applications while reducing, or reusing, HDD based storage capacity for other purposes including growth.
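
    A back-of-envelope sketch of that I/O consolidation math (the IOPS figures below are illustrative assumptions, not from the conversation):

    ```python
    # How many HDDs does it take to match one SSD on random IOPS?
    # Figures are illustrative assumptions.
    hdd_iops = 150        # assumed typical HDD random I/O rate
    ssd_iops = 20_000     # assumed conservative single-SSD rate

    hdds_needed = -(-ssd_iops // hdd_iops)   # ceiling division
    print(f"{hdds_needed} HDDs to match one SSD")   # 134

    # Those HDDs are often bought for IOPS rather than space, leaving
    # capacity under-utilized -- capacity that consolidating onto SSD
    # frees up for growth or other uses.
    ```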

    Here is some related material and comments:
    Has SSD put Hard Disk Drives (HDDs) On Endangered Species List?
    SSD and Storage System Performance
    Are Hard Disk Drives (HDDs) getting too big?
    Solid state devices and the hosting industry
    Achieving Energy Efficiency using FLASH SSD
    Using SSD flash drives to boost performance

    Four ways to use SSD storage
    4 trends that shape how agencies handle storage
    Giving storage its due

    You can read a transcript of the conversation and listen to the podcast here, or download the MP3 audio here.

    Ok, nuff said about SSD (for now)

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

    Dude, is Dell doing a disk deal again with Compellent?

    Over in Eden Prairie (Minneapolis Minnesota suburb) where data storage vendor Compellent (CML) is based, they must be singing in the hallways today that it is beginning to feel a lot like Christmas.

    Sure we had another dusting of snow this morning here in the Minneapolis area and the temp is actually up in the balmy 20F temperature range (was around 0F yesterday) and holiday shopping is in full swing.

    The other reason I think that the Compellent folks are thinking that it feels a lot like Christmas are the reports that Dell is in exclusive talks to buy them at about $29 per share or about $876 million USD.

    Dell is no stranger to holiday or shopping sprees, check these posts out as examples:

    Dell Will Buy Someone, However Not Brocade (At least for now)

    Back to school shopping: Dude, Dell Digests 3PAR Disk storage (we now know Dell was out bid)

    Data footprint reduction (Part 2): Dell, IBM, Ocarina and Storwize

    Data footprint reduction (Part 1): Life beyond dedupe and changing data lifecycles

    Post Holiday IT Shopping Bargains, Dell Buying Exanet?

    Did someone forget to tell Dell that Tape is dead?

    Now some Compellent fans are not going to be happy with only about $29 a share (about $876 million USD) given the recent stock run up into the $30-plus range. Likewise, some of the Compellent fans may be hoping for or expecting a bidding war to drive the stock back up into the $30 range; however, keep in mind that it was earlier this year when the stock adjusted itself down into the mid teens.

    In the case of 3PAR and the HP and Dell bidding war, that was a different product and company focused in a different space than where Compellent has a good fit.

    Sure both 3PAR and Compellent do Fibre Channel (FC) where Dell's EqualLogic only does iSCSI; however, a valuation based just on FC would be like saying Dell has all the storage capabilities it needs with the MD3000 series, which can do SAS, iSCSI and FC.

    In other words, there are different storage products for different markets or price bands and customer application needs. Kind of like winter here in Minnesota: sure, one type of shovel will work for moving snow, or you can leverage different technologies and techniques (tiering) to get the job done effectively; the same holds for storage solutions.

    Compellent has a Cadillac product that is a good fit for some SMB environments. However, the SMB space is also where Dell has several storage products, some of which they own (e.g. EqualLogic), some they OEM (MD3000 series and NX), as well as resell (e.g. EMC CLARiiON).

    Can the Compellent product replace the lower-end CLARiiON business that Dell has itself been shifting to their flagship EqualLogic product?

    Sure, however at the risk of revenue cannibalization or, worse, the introduction of revenue prevention teams.

    Can the Compellent product then be positioned lower down under the EqualLogic product?

    Sure; however, why hold it back, not to mention force a higher priced product down into that market segment?

    Can the Compellent product be taken up market to compete above the EqualLogic head to head with the larger CLARiiON systems from EMC or comparable solutions from other vendors?

    Sure; however, I can hear choruses of "it's sounding a lot like Christmas" from New England, the bay area and Tucson among others.

    Does this mean that Dell is being overly generous and that this is not a good deal?

    No, not at all.

    Sure it is the holiday season and Dell has several billion dollars of cash lying around; however, that in itself does not guarantee a large handout or government sized bailout (excuse me, infusion). At $30 or more, the price would be overly generous simply based on where the technology fits as well as how it aligns to market realities. Consequently, at $29, this is a great deal for Compellent and also for Dell.

    Why is it a good deal for Dell?

    I think that it is as much about Dell getting a good deal (ok, paying a premium) to acquire a competitor that they can use to fill some product gaps where they have common VARs. However I also think that this is very much about the channel and the VAR as much if not more than it is just about a storage product. Servers are part of the game here which in turn supports storage, networking, management tools, backup/recovery, archiving and services.

    Sure Dell can maybe take some cost out of the Compellent solution by replacing the Supermicro PCs that are the hardware platform for its storage controllers with Dell servers. However, the bigger play is around further developing its channel and VAR ecosystems, some of whom were with EqualLogic before Dell bought them. This can also be seen as a means of Dell getting that partner ecosystem to sell, overall, more Dell products and solutions instead of those from Apple, EMC, Fujitsu, HP, IBM, Oracle and many others.

    Likewise, I doubt that Mr. Dell is paying a premium simply to make the Compellent shareholders and fans happy, creating monetary velocity to stimulate holiday shopping and economic stimulus. However, for the fans: sure, while drowning your sorrows in the eggnog of holiday cheer because you are not getting $30 or higher, instead buy a round for your mates and toast Dell for your holiday gift.

    The real reason I think this is a good deal for Dell is that, from a business and financial perspective, assuming they stick to the $29 range, it is a good bargain for both parties. Dell gets a company that has been competing with their EqualLogic product, in some cases through the same VARs or resellers. Sure, it gets a Fibre Channel based product; however, Dell already has that with the MD3000 series, which I realize is less function laden than Compellent or EqualLogic, yet also more affordable for a different market.

    If Dell can close the deal sticking to its offer (where they have the upper hand), execute by rolling out a strategy and product positioning plan, and then educate their own teams as well as VARs and customers on which products fit where and when, in a manner that does not cause revenue prevention (e.g. one product or team blocking the other) or cannibalization but instead expands markets, they can do well.

    While Compellent gets a huge price multiple based on their revenue (about $125M USD), if Dell can get the product revenue up from the $125 to $150 million plateau to around $250 to $300 million without cannibalizing other Dell products, the deal pays for itself in many ways.

    Keep in mind that a large pile of cash sitting in the bank these days is not exactly yielding the best returns on investment.

    For the Compellent fans and shareholders, congratulations!

    You have gotten, or perhaps are about to get, a good holiday gift, so knock off the complaining that you should be getting more. The option is that instead of $28 per share, you could be getting 28 lumps of coal in your Christmas stocking.

    For the Dell folks, assuming the deal is done on their terms, and that they can quickly rationalize the product overlap, convey and then execute on a strategy while keeping the revenue prevention teams on the sidelines, you too have a holiday gift to work with (some assembly required, however). This is also good for Dell outside of storage, which may turn out to be one of the gems of the deal in keeping or expanding VARs selling Dell based servers and associated technologies.

    For EMC, who was slapped in the face earlier this year when Dell took a run at 3PAR, sure, there will be more erosion of the lower end CLARiiON as has been occurring with EqualLogic. However, Dell still needs a solution to effectively compete with EMC and others at the higher end of the SMB or lower end of the enterprise market.

    Sure the EqualLogic or Compellent products could be deployed into such scenarios; however those solutions are then playing on a different field and out of their market sweet spots.

    Let's see what happens, shall we?

    In the meantime, what say you?

    Is this a good deal for Dell, who is the deal good for assuming it goes through and at the terms mentioned, what is your take?

    Who benefits from this proposed deal?

    Note that in the holiday gift giving spirit, Chicago style voting or polling will be enabled.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    End to End (E2E) Systems Resource Analysis (SRA) for Cloud and Virtual Environments

    A new StorageIO Industry Trends and Perspective (ITP) white paper titled “End to End (E2E) Systems Resource Analysis (SRA) for Cloud, Virtual and Abstracted Environments” is now available at www.storageioblog.com/reports compliments of SANpulse technologies.

    End to End (E2E) Systems Resource Analysis (SRA) for Virtual, Cloud and abstracted environments: Importance of Situational Awareness for Virtual and Abstracted Environments

    Abstract:
    Many organizations are in the planning phase or already executing initiatives moving their IT applications and data to abstracted, cloud (public or private), virtualized or other forms of efficient, effective dynamic operating environments. Others are in the process of exploring where, when, why and how to use various forms of abstraction techniques and technologies to address various issues. These include opportunities to leverage virtualization and abstraction techniques that enable IT agility, flexibility, resiliency and scalability in a cost effective yet productive manner.

    An important need when moving to a cloud or virtualized dynamic environment is to have situational awareness of IT resources. This means having insight into how IT resources are being deployed to support business applications and to meet service objectives in a cost effective manner.

    Awareness of IT resource usage provides insight necessary for both tactical and strategic planning as well as decision making. Effective management requires insight into not only what resources are at hand but also how they are being used to decide where different applications and data should be placed to effectively meet business requirements.

    Learn more about the importance and opportunities associated with gaining situational awareness using E2E SRA for virtual, cloud and abstracted environments in this StorageIO Industry Trends and Perspective (ITP) white paper compliments of SANpulse technologies by clicking here.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    ILM = Has It Lost Its Meaning?

    Disclaimer, warning, be advised, heads up, disclosure, this post is partially for fun so take it that way.

    Remember ILM, that is, Information Lifecycle Management among other meanings.

    It was a popular buzzword du jour a few years ago, similar to how cloud is being tossed around lately, or, in the recent past, virtualization, clusters, grids and SOA among others.

    One of the challenges with ILM, besides its overuse and the resulting confusion, was what it meant; after all, was (or is) it a product, process, paradigm or something else?

    That depends of course on who you talk to and their view or definition.

    For some, ILM was a new name for archiving, or storage and data tiering, or data management, or hierarchical storage management (HSM) or system managed storage (SMS) and software managed storage (SMS) among others.

    So where is ILM today?

    Better yet, what does ILM stand for?

    Well here are a few thoughts; some are oldies but goodies, some new, some just for fun.

    ILM = I Like Marketing or It's a Lot of Marketing or It's a Lot of Money
    ILM = It Lost its Meaning or It's a Lot of Meetings
    ILM = Information Loves Magnetic media or I Love Magnetic media
    ILM = IBM Loves Mainframes or Intel Loves Memory
    ILM = Infrastructure Lifecycle Management or iPods/iPhones Like Macintosh

    Then there are many other variations of xLM where I is replaced with X (similar to XaaS) where X is any letter you want or need for a particular purpose or message theme. For example, how about replacing X with an A for Application Lifecycle Management (ALM), or a B for Buzzword or Backup Lifecycle Management (BLM), C for Content Lifecycle Management (CLM) and D for Document or Data Lifecycle Management (DLM). There are many others including Hardware Lifecycle Management (HLM), Product or Program Lifecycle Management (PLM) not to mention Server, Storage or Security Lifecycle Management (SLM).

    While ILM or xLM specific product and marketing buzz has for the most part subsided, perhaps it is about time for it to reappear, giving current buzzwords such as cloud a break or rest. After all, ILM and xLM as buzzwords should be well rested after their break at the Buzzword Rest Spa (BRS), perhaps located on someday isle. You know about someday isle, don't you? It's that place of dreams, a visionary place to be visited in the future.

    There are already signs of the impending rested, rejuvenated and re-branded appearance of ILM in the form of automated tiering, intelligent storage and data management, file virtualization, and policy managed servers and storage, among others.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved


    Shifting from energy avoidance to energy efficiency

    Storage I/O trends

    I'm continually amazed at the number of people in the IT industry, from customers to vendors, VARs to media and even analysts, who associate Green IT only with reducing carbon footprints. I guess I should not be surprised given the amount of rhetoric around green and carbon, both in the IT industry as well as in general, resulting in a Green Gap.

    The reality as I have discussed in the past is that Green IT while addressing carbon footprint topics, is really more about efficiency and optimization for business economic benefits that also help the environment. From a near-term tactical perspective, Green IT is about boosting productivity and enabling business sustainability during tough economic times, doing more with less, or, doing more with what you have. On a strategic basis, Green IT is about continued sustainability while also improving top and bottom line economics and repositioning IT as a competitive advantage resource.

    There is a lot of focus on energy avoidance, as it is relatively easy to understand and it is also easy to implement. Turning off the lights, turning off devices when they are not in use, enabling low-power, energy-savings or Energy Star® (now implemented for servers with storage being a new focus) modes are all means to saving or reducing energy consumption, emissions, and energy bills.

    Ideal candidates for powering down when not in use or inactive include desktop workstations, PCs, laptops, and associated video monitors and printers. Turning lights off or implementing motion detectors to turn lights off automatically, along with powering off or enabling energy-saving modes on general-purpose and consumer products has a significant benefit. New generations of processors such as the Intel Xeon 5xxx or 7xxx series (formerly known as Nehalem) provide the ability to boost performance when needed, or, go into various energy conservation modes when possible to balance performance, availability and energy needs to applicable service requirements, a form of intelligent power management.

    Figure 1 shows four basic approaches (in addition to doing nothing) to energy efficiency. One approach is to avoid energy usage, similar to following a rationing model, but this approach will affect the amount of work that can be accomplished. Another approach is to do more work using the same amount of energy, boosting energy efficiency, or the complement: do the same work using less energy.

    Figure 1: The Many Faces of Energy Efficiency (Source: "The Green and Virtual Data Center" (CRC))

    The energy efficiency gap is the difference between the amount of work accomplished or information stored in a given footprint and the energy consumed. In other words, the bigger the energy efficiency gap, the better, as seen in the fourth scenario, doing more work or storing more information in a smaller footprint using less energy.

    Given the shared nature of their use along with various intersystem dependencies, not all data center resources can be powered off completely. Some forms of storage devices can be powered off when they are not in use, such as offline storage devices or mediums for backups and archiving. Technologies such as magnetic tape or removable hard disk drives that do not need power when they are not in use can be used for storing inactive and dormant data.

    Avoiding energy use can be part of an approach to address power, cooling, floor space and environmental (PCFE) challenges, particularly for servers, storage, and networks that do not need to be used or accessible at all times. However, not all applications, data or workloads can be consolidated, or, powered down due to performance, availability, capacity, security, compatibility, politics, financial and many other reasons. For those applications that cannot be consolidated, the trick is to support them in a more efficient and effective means.

    Simply put, when work needs to be done, information needs to be stored or retrieved, or data needs to be moved, it should be done in the most energy-efficient manner aligned to a given level of service. That can mean leveraging faster, higher performing resources (servers, storage and networks) to get the job done fast, resulting in improved productivity and efficiency.

    Tiering is an approach that applies to servers, storage, and networks as well as data protection. For example, tiered servers include large frame or mainframes, rack mount as well as blades with various amounts of memory, I/O or expansion slots and number of processor cores at different speeds. Tiered storage includes different types of mediums and storage system architectures such as those shown in figure 2. Tiered networking or tiered access includes 10Gb and 1Gb Ethernet, 2/4/8 Gb Fibre Channel, Fibre Channel over Ethernet (FCoE), iSCSI, NAS and shared SAS among others. Tiered data protection includes various technologies to meet various recovery time objectives (RTO) and recovery point objectives (RPO) such as real-time synchronous mirroring with snapshots, to periodic backup to disk or tape among other approaches, techniques and technologies.

    Technology alignment (Figure 2), that is, aligning the applicable type of storage or server resources and devices to the task at hand to meet application service requirements, is essential to achieving an optimized and efficient IT environment. For example, for very I/O intensive active data as shown in Figure 2, leverage ultra fast tier-0 high-performance SSD (flash or RAM) storage, while for high I/O active data, tier-1 fast 15.5K SAS and Fibre Channel storage based systems would be applicable.

    For active and on-line data, that's where energy efficiency in the form of fast disk drives, including RAM SSD or flash SSD (for reads; writes are another story), and in particular fast 15.5K or 10K FC and SAS energy efficient disks and their associated storage systems, comes into play. The focus for active data and storage systems should be on more useful work per unit of energy consumed in a given footprint: for example, more IOPS per watt, more transactions per watt, more bandwidth or video streams per watt, more files or emails processed per watt.
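
    As a simple worked example of those metrics, compare two hypothetical configurations on IOPS per watt (the figures are illustrative assumptions, not measured results):

    ```python
    # Compare storage configurations on useful work per unit of energy.
    # Figures are illustrative assumptions, not measured results.
    configs = {
        "many slower HDDs": (12_000, 1_800),         # (IOPS, watts)
        "fewer fast disks plus SSD": (15_000, 900),
    }
    for name, (iops, watts) in configs.items():
        print(f"{name}: {iops / watts:.1f} IOPS per watt")
    # 6.7 vs 16.7 IOPS/watt -- a bigger energy efficiency gap is better
    ```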

    Figure 2: Tiered Storage: Balancing Performance, Availability, Capacity and Energy to QoS (Source: "The Green and Virtual Data Center" (CRC))

    For low-performance, low activity applications where the focus is on storing as much data as possible at the lowest cost, including disk-to-disk based backup, slower high capacity SATA based storage systems are the fit (lower right in Figure 2). For long-term bulk storage to meet archiving, data retention or other needs, as well as storing large monthly full backups or long term data preservation, tape remains the ticket for large environments, with the best combination of performance, availability, capacity, energy efficiency and cost per footprint.

    General approaches to boost energy efficiency include:

    • Do more work using the same or less amount of power and subsequently cooling
    • Leverage faster processors/controllers that use the same or less power
    • Apply applicable RAID level to application and data QoS requirements
    • Consolidate slower storage or servers to a faster, more energy-efficient solution
    • Use faster disk drives with capacity boost and that draw less power
    • Upgrade to newer, faster, denser, more energy-efficient technologies
    • Look beyond capacity utilization; keep response time and availability in mind
    • Leverage IPM, AVS, and other techniques to vary performance and energy usage
    • Manage data both locally and remote; gain control and insight before moving problems
    • Leverage a data footprint reduction strategy across all data and storage tiers
    • Utilize multiple data footprint techniques including archive, compression and de-dupe
    • Reduce data footprint impact, enabling higher densities of stored on-line data

    Find a balance between energy avoidance and energy efficiency, between consolidation and business enablement for sustainability, across hardware and software, and in best practices including policies and procedures, while also leveraging available financial rebates and incentives. Addressing green and PCFE issues is a process; there is no single solution or magic formula.

    Figure 3: Wheel of Opportunity – Various Techniques and Technologies for Infrastructure Optimization (Source: "The Green and Virtual Data Center" (CRC))

    Instead, what is needed is to leverage a combination of technologies, techniques, and best practices to address the various issues and requirements (Figure 3). Some technologies and techniques include, among others, infrastructure resource management (IRM), data management, archiving (including for non-compliance purposes), and compression (on-line and off-line, primary and secondary), as well as de-dupe for backups, space saving snapshots, and effective use of applicable RAID levels.
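
    To see why a combination matters more than any single technique, here is a simple worked example (the reduction ratios are illustrative assumptions, not measured results):

    ```python
    # Stack-up of data footprint reduction techniques on 100 TB of raw data.
    # The ratios below are example assumptions, not measured results.
    raw_tb = 100.0
    after_archive = raw_tb * (1 - 0.30)    # archive 30% of dormant data off primary
    after_compress = after_archive / 2.0   # assume 2:1 compression on what remains
    after_dedupe = after_compress / 4.0    # assume 4:1 de-dupe of backup copies

    print(f"{raw_tb:.0f} TB raw -> {after_dedupe:.2f} TB effectively stored")
    # 100 TB raw -> 8.75 TB stored when the techniques are combined
    ```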

    Green washing and green hype may fade away; however, power, cooling, floor space and environmental (PCFE) related issues and initiatives that enable IT infrastructure optimization and business sustainability will not fade away. Addressing IT infrastructure optimization and efficiency is thus essential to IT and business sustainability and growth in an environmentally friendly manner, enabling the shift from talking about green to being green and efficient.

    Learn more on the tips, tools, articles, videos and reports page as well as in “Cloud and Virtual Data Storage Networking” (CRC) pages, “The Green and Virtual Data Center” (CRC) pages at StorageIO.com.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved