GDPR (General Data Protection Regulation) Resources: Are You Ready?

The new European General Data Protection Regulation (GDPR) goes into effect a year from now, on May 25, 2018. Are you ready?

What Is GDPR

If your initial response is that you are not in Europe and do not need to be concerned about GDPR, you might want to step back and review that thought. While it is possible that some organizations may not be affected by GDPR directly, there might be indirect considerations. For example, GDPR, while focused on Europe, has ties to other initiatives in place or being planned elsewhere in the world. Likewise, unlike earlier regulatory compliance that tended to focus on specific industries such as healthcare (HIPAA and HITECH) or financial (Sarbanes-Oxley [SOX] and Dodd-Frank among others), these new regulations can be more far-reaching.

Where To Learn More

Acronis GDPR Resources

  • Acronis Outlines GDPR position

Quest GDPR Resources

Microsoft and Azure Cloud GDPR Resources

Do you have or know of relevant GDPR information and resources? Feel free to add them via comments or send us an email; however, watch the spam and sales pitches, as comments will be moderated.

What This All Means

Now is the time to start planning and preparing for GDPR if you have not done so and need to, as well as becoming more generally aware of it and other initiatives. One of the key takeaways is that while the word compliance is involved, there is much more to GDPR than just compliance as we have seen in the past. With GDPR and other initiatives, data protection becomes the focus, spanning privacy along with protecting, preserving, securing and serving data, as well as management, insight and awareness along with associated reporting.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

AWS S3 Storage Gateway Revisited (Part I)

server storage I/O trends

AWS S3 Storage Gateway Revisited (Part I)

This Amazon Web Services (AWS) Storage Gateway Revisited post is a follow-up to the AWS Storage Gateway test drive and review I did a few years ago (thus why it’s called revisited). As part of a two-part series, this first post looks at what AWS Storage Gateway is and how it has improved since my last review, along with deployment options. The second post in the series looks at a sample test drive deployment and use.

If you need an AWS primer and overview of various services such as Elastic Compute Cloud (EC2), Elastic Block Storage (EBS), Elastic File System (EFS), Simple Storage Service (S3), Availability Zones (AZ) and Regions among other items, check out this multi-part series (Cloud conversations: AWS EBS, Glacier and S3 overview (Part I)).

AWS

As a quick refresher, S3 is the AWS bulk, high-capacity unstructured data and object storage service, along with its companion deep cold (e.g. inactive) service Glacier. There are various S3 storage service classes including standard, reduced redundancy storage (RRS) and infrequent access (IA), which have different availability, durability, performance, service level and cost attributes.

Note that S3 IA is not Glacier, as your IA data always remains online accessible, while Glacier data can be offline. AWS S3 can be accessed via its API, via HTTP REST calls, and with AWS tools along with those from third parties. Third-party tools include NAS file access such as s3fs for Linux, which I use on my Ubuntu systems to mount S3 buckets and use them similar to other mount points. Other tools include Cloudberry, S3 Motion and S3 Browser, as well as plug-ins available in most data protection (backup, snapshot, archive) software tools and storage systems today.
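
As a quick hands-on illustration, the following is a minimal sketch using the AWS Tools for PowerShell (one of the AWS tools mentioned above) to list buckets and move an object in and out of S3. The bucket name and file paths are hypothetical placeholders, and the sketch assumes the AWSPowerShell module is installed with credentials already configured.

# Minimal sketch, assuming the AWSPowerShell module and configured credentials
# (bucket name and file paths are placeholders)
Import-Module AWSPowerShell

# List the buckets in the account
Get-S3Bucket

# Upload a local file as an object, then read it back to a different local path
Write-S3Object -BucketName "example-bucket" -File "C:\temp\report.txt" -Key "backups/report.txt"
Read-S3Object -BucketName "example-bucket" -Key "backups/report.txt" -File "C:\temp\report-restored.txt"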

AWS S3 Storage Gateway and What’s New

The Storage Gateway is the AWS tool that you can use to access S3 buckets and objects via your block volume, NAS file or tape based applications. The Storage Gateway is intended to give S3 bucket and object access to on-premises applications and data infrastructure functions including data protection (backup/restore, business continuance (BC), business resiliency (BR), disaster recovery (DR) and archiving), along with storage tiering to cloud.

Some of the things that have evolved with the S3 Storage Gateway include:

  • Easier, streamlined download, installation, deployment
  • Enhanced Virtual Tape Library (VTL) and Virtual Tape support
  • File serving and sharing (not to be confused with Elastic File Services (EFS))
  • Ability to define your own bucket and associated parameters
  • Bucket options including Infrequent Access (IA) or standard
  • Options for AWS EC2 hosted, or on-premises VMware as well as Hyper-V gateways (file only supports VMware and EC2)

AWS Storage Gateway Three Functions

AWS Storage Gateway can be deployed for three basic functions:

    AWS Storage Gateway File Architecture via AWS.com

  • File Gateway (NFS NAS) – Files, folders, objects and other items are stored in AWS S3 with a local cache for low-latency access to the most recently used data. With this option, you can create folders and subdirectories similar to a regular file system or NAS device, as well as configure various security, permissions and access control policies. Data is stored in S3 buckets for which you specify options such as standard or Infrequent Access (IA) among others. The file gateway is available AWS hosted via EC2, as well as a VMware Virtual Machine (VM) for on-premises deployments.

    Also, note that AWS cautions on multiple concurrent writers to S3 buckets with Storage Gateway, so check the AWS FAQs, which may have changed by the time you read this. Current file share limits (subject to change) include 1 file gateway share per S3 bucket (e.g. a one-to-one mapping between file share and bucket). There can be 10 file shares per gateway (e.g. multiple shares, each with its own bucket, per gateway) and a maximum file size of 5TB (same as the maximum S3 object size). Note that you might hear about object storage systems supporting unlimited size objects, which some may do; however, generally there are some constraints either on their API front-end or on what is currently tested. View current AWS Storage Gateway resource and specification limits here. A hedged sketch of mounting a file gateway share follows this list.

  • AWS Storage Gateway Non-Cached Volume Architecture via AWS.com

    AWS Storage Gateway Cached Volume Architecture via AWS.com

  • Volume Gateway (Block iSCSI) – Leverages S3 with point-in-time backups as AWS EBS snapshots. Two options exist: Cached volumes with low-latency access to the most recently used data (e.g. data is stored in AWS, with a local cache copy on disk or SSD), and Stored volumes (e.g. non-cached) where the primary copy is local and periodic snapshot backups are sent to AWS. AWS provides an EC2 hosted gateway, as well as VMs for VMware and Hyper-V on Windows Server.

    Current Storage Gateway volume limits (subject to change) include a maximum cached volume size of 32TB and a maximum stored volume size of 16TB. Note that snapshots of cached volumes larger than 16TB can only be restored to a Storage Gateway volume; they cannot be restored as an EBS volume (via EC2). There is a maximum of 32 volumes per gateway, with a total size of all volumes for a (cached) gateway of 1,024TB (e.g. 1PB). The total size of all volumes for a (stored volume) gateway is 512TB. View current AWS Storage Gateway resource and specification limits here. A sketch of attaching a volume gateway iSCSI volume also follows this list.

  • AWS Storage Gateway VTL Architecture via AWS.com

  • Virtual Tape Library Gateway (VTL) – Supports saving your data for backup/BC/DR/archiving into S3 and Glacier storage tiers. Being a Virtual Tape Library (e.g. VTL) you can specify emulation of tapes for compatibility with your existing backup, archiving and data protection software, management tools and processes.

    Storage Gateway limits for tape (subject to change) include a minimum virtual tape size of 100GB, a maximum virtual tape size of 2.5TB, a maximum of 1,500 virtual tapes per VTL and a total size of all tapes in a VTL of 1PB. Note that the maximum number of virtual tapes in an archive is unlimited, and the total size of all tapes in an archive is also unlimited. View current AWS Storage Gateway resource and specification limits here.
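
Following up on the File Gateway option above, here is a hedged sketch of what mounting a file gateway NFS share can look like from a Windows client via PowerShell. It assumes the Windows Client for NFS feature is enabled; the gateway address and bucket share name are hypothetical placeholders (on Linux you would use a regular NFS mount command instead).

# Mount the file gateway NFS export as drive Z:
# (gateway address and bucket share name are placeholders)
mount.exe -o nolock -o mtype=hard 192.168.1.50:/example-bucket Z:

# Browse the share; files written here become objects in the backing S3 bucket
Get-ChildItem Z:\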

    AWS
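
Likewise, following up on the Volume Gateway option, the following is a minimal sketch of attaching a gateway iSCSI volume from Windows using the built-in iSCSI cmdlets. The portal address and target name filter are hypothetical placeholders, and it assumes the Microsoft iSCSI initiator service is running.

# Register the gateway as an iSCSI target portal (address is a placeholder)
New-IscsiTargetPortal -TargetPortalAddress 192.168.1.50

# Find the gateway volume target and connect, persisting across reboots
Get-IscsiTarget | Where-Object { $_.NodeAddress -like "*example-volume*" } |
    Connect-IscsiTarget -IsPersistent $true

# The connected volume then appears as a local disk to initialize and format
Get-Disk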

Where To Learn More

What This All Means

Which gateway function and mode (cached or non-cached for volumes) to use depends on what it is that you are trying to do. Likewise, choosing between EC2 (cloud hosted) or on-premises Hyper-V and VMware VMs depends on what your data infrastructure support requirements are. Overall I like the progress that AWS has put into evolving the Storage Gateway, granted it might not be applicable for all use cases. Continue reading more and view images from the AWS Storage Gateway Revisited test drive in part two located here.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Dell EMC World 2017 Day One news announcement summary

server storage I/O trends

Dell EMC World 2017 Day One news announcement summary

This is the first day of the first combined Dell EMC World 2017 being held in Las Vegas, Nevada. Last year’s event in Las Vegas was the last EMC World, while this is the first of the combined Dell EMC World events succeeding its predecessors.

What this means is an expanded focus, because the new Dell EMC has added servers among other items to the event. Granted, EMC had been doing servers via its VCE and converged divisions; however, with the Dell EMC integration completed as of last fall, the Dell server group is now part of the Dell EMC organization.

The central theme of this Dell EMC world is REALIZE with a focus on four pillars:

  • Digital Transformation (Pivotal focus) of applications
  • IT Transformation (Dell EMC, Virtustream, VMware) data center modernization
  • Workforce transformation (Dell Client Solutions) devices from mobile to IoT
  • Information Security (RSA and Secureworks)

software defined data infrastructures SDDI and SDDC

What Did Dell EMC Announce Today

Note that while there are focus areas of the different Dell Technologies business units aligned to the pillars, there is also leverage across those areas and groups. For example, VMware NSX spans into security, and PowerEdge servers span into other pillars as a core data infrastructure building block.

Here is what Dell EMC and Dell Technologies announced today:

  • Wave of Innovations to help customers realize digital transformation
  • New 14th generation PowerEdge Servers that are core building blocks for data infrastructures
  • Flexible consumption models (financing and more) from desktop to data center
  • Hyper-Converged Infrastructure (HCI), Converged (CI) and Cloud like systems
  • New All-Flash (ADA) SSD Storage Systems (VMAX, XtremIO X2, Unity, SC, Isilon)
  • Integrated Data Protection Appliance (IDPA) and Cloud Protection solutions
  • Using Gen14 servers, several Software Defined Storage (SDS) enhancements
  • Open Networking and software-defined networks (SDN) with 25G
  • Last week Dell EMC announced Microsoft Azure Stack hybrid cloud solutions

New 14th generation PowerEdge Servers that are core building blocks for data infrastructures

Dell EMC has announced the 14th generation of its Intel-powered Dell EMC PowerEdge server portfolio systems. These include servers that get defined with software for software-defined data centers (SDDC) and software-defined data infrastructures (SDDI) spanning cloud, virtual, container and storage among other applications. Target application workloads and environments range from high-performance compute (HPC) and high-productivity (or profitability) compute (the other HPC), super compute (SC), little data and big data analytics, legacy and emerging business applications, as well as cloud and beyond. Enhancements besides new Intel processor technology include enhanced iDRAC, OpenManage, a REST interface, QuickSync and Secure Boot among other management, automation, security, performance and capacity updates.
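
As a small illustration of the REST interface point, something like the following hedged sketch should work against an iDRAC that exposes the DMTF Redfish API; the hostname is a placeholder, and endpoint details vary by iDRAC version and firmware.

# Query server inventory via the Redfish REST endpoint that iDRAC exposes
# (hostname is a placeholder; a self-signed certificate may need trusting first)
$cred = Get-Credential
Invoke-RestMethod -Uri "https://idrac-hostname.example.com/redfish/v1/Systems" -Credential $cred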

Other Dell EMC enhancements with Gen14 include support for various NVDIMMs to enable persistent memory, also known as storage class memories such as 3D XPoint among others. Note that at this time Dell EMC is not saying much about speeds, feeds and other details; stay tuned for more information on these in the weeks and months to come.

Dell EMC has also been a leader in deploying NVMe, from PCIe flash cards to 8639 (U.2) devices such as 2.5” drives. Thus it makes sense to see continued adoption and deployment of those devices along with SAS and SATA support. Note that Broadcom (formerly known as Avago) recently announced the release of their PCIe SAS, SATA and NVMe based adapters.

The reason this is worth mentioning is that in the past Dell has OEM sourced Avago (formerly known as LSI) based adapters. Given Dell EMC’s use of NVMe drives, it only makes sense to put two and two together.

Let’s wait a few months to see what the speeds, feeds and specifications are to put the rest of the puzzle together. Speaking of NVMe, also look for Dell EMC to support PCIe add-in card (AIC) and U.2 (8639) NVMe devices, as well as leverage M.2 Next Generation Form Factor (NGFF) aka gum stick devices as boot devices.

While these are all Intel focused, I would expect Dell EMC not to sit back; instead, watch for what they do with other processors and servers, including ARM among others.

There is also increased support for more GPUs for VDI and other graphics-intensive workloads such as video rendering and imaging among others. Part of the enhanced GPU support is improvements (multi-vector cooling) to power and cooling, including sensing the type of PCIe card and then adjusting cooling fans and subsequent power draw accordingly. The benefit should be more appropriate cooling along with reduced power, to support more work and productivity.

Flexible consumption models (financing and more) from desktop to data center

Dell Technologies has announced several financing, procurement, and consumption models with cloud-like flexible options for different IT and data center, along with mobile device technologies. These range from licensing to deployment as a service, consumption and other options via Dell Financial Services (DFS).

Highlights include:

  • DFS Flex on Demand is available now in select countries globally.
  • DFS Cloud Flex for HCI is available now for Dell EMC VxRail and Dell EMC XC Series and has planned availability for Q3 2017 in Dell EMC VxRack Systems.
  • PC as a Service is available now in select countries globally.
  • Dell EMC VDI Complete Solutions are available now in select countries globally.
  • Dell Technologies transformation license agreement (TLA) is available now in select countries.

Hyper-Converged Infrastructure (HCI), Converged (CI) and Cloud like systems

Enhancements to VxRail systems, VxRack Systems and the XC Series leverage Dell EMC Gen14 PowerEdge servers along with other improvements. Note that this also includes continued support for VMware, Microsoft as well as Nutanix software-defined solutions.

New All-Flash (ADA) SSD Storage Systems (VMAX, XtremIO X2, Unity, SC, Isilon)

Storage system enhancements span from high-end (VMAX and XtremIO) to mid-range (Unity and SC), along with scale-out NAS (Isilon).

Highlights of the announcements include:

  • New VMAX 950F all flash array (AFA)
  • New XtremIO X2 with enhanced software, more powerful hardware
  • New Unity AFA systems
  • New SC5020 midrange hybrid storage
  • New generation of Isilon storage with improved performance, capacity, density

Integrated Data Protection Appliance (IDPA) and Cloud Protection solutions

Data protection enhancement highlights include:

  • New turnkey Integrated Data Protection Appliance (IDPA) with four models (DP5300, DP5800, DP8300 and DP8800) starting at 34TB usable and scaling up to 1PB usable. Data services include encryption, data footprint reduction such as dedupe, remote monitoring and maintenance service dispatch, along with application integration. Application integration includes MongoDB, Hadoop and MySQL.

  • Enhanced cloud capabilities powered by Data Domain virtual edition (DD VE 3.1), along with the data protection suite, enable data to be protected to, and restored from, Amazon Web Services (AWS) Simple Storage Service (S3) as well as Microsoft Azure.

Open Networking and software-defined networks (SDN) with 25G

Dell EMC Open Networking highlights include:

  • Dell EMC’s first 25GbE open networking top of rack (TOR) switch, the S5100-ON series (with OS10 Enterprise Edition software), complementing new PowerEdge Gen14 servers with native 25GbE support. The switches support 100GbE uplink fabric connectivity for east-west (management) network traffic. Also announced are the S4100-ON series and N1100-ON series, which are in addition to the recently announced N3100-ON and N2100-ON switches.

  • Dell EMC’s first optimized Open Networking platform for unified storage network switching, including support for 16Gb/32Gb Fibre Channel

  • New Network Function Virtualization (NFV) and IoT advisory consulting services

Note that Dell EMC is announcing the availability of these networking solutions in the Dell Technologies 2018 fiscal year, which begins ahead of the corresponding calendar year.

Using Gen14 servers, several Software Defined Storage (SDS) enhancements

Dell EMC announced enhancements to their Software Defined Storage (SDS) portfolio that leverage the PowerEdge 14th generation server portfolio. These improvements include ScaleIO, Elastic Cloud Storage (ECS), IsilonSD Edge and a preview of Project Nautilus.

Where to learn more

What this all means

This is a summary of what has been announced so far on the first morning of the first day of the first new Dell EMC World. Needless to say, there is more detail to look at for the above announcements, from speeds, feeds and functionality to related topics, that will get addressed in subsequent posts. Overall this is a good set of announcements expanding the capabilities of the combined Dell EMC while enhancing existing systems as well as solutions.

Ok, nuff said (for now…)

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Azure Stack Technical Preview 3 (TP3) Overview Preview Review

server storage I/O trends

Azure Stack Technical Preview 3 (TP3) Overview Preview Review

Perhaps you are aware or use Microsoft Azure, how about Azure Stack?

This is part one of a two-part series looking at Microsoft Azure Stack providing an overview, preview and review. Read part two here that looks at my experiences installing Microsoft Azure Stack Technical Preview 3 (TP3).

For those who are not aware, Azure Stack is a private, on-premises extension of the Azure public cloud environment. Azure Stack is now in technical preview three (e.g. TP3), or what you might also refer to as a beta (get the bits here).

In addition to being available via download as a preview, Microsoft is also working with vendors such as Cisco, Dell EMC, HPE, Lenovo and others who have announced Azure Stack support. Vendors such as Dell EMC have also made proof of concept kits available that you can buy, including servers with storage and software. Microsoft has also indicated that once production versions launch, scaling from a few to many nodes, a single node proof of concept or development system will also remain available.

software defined data infrastructure SDDI and SDDC
Software-Defined Data Infrastructures (SDDI) aka Software-defined Data Centers, Cloud, Virtual and Legacy

Besides being an on-premises, private cloud variant, Azure Stack is also hybrid capable, being able to work with the Azure public cloud. In addition to working with public Azure, Azure Stack services, and in particular workloads, can also work with traditional Microsoft, Linux and other environments. You can use pre-built solutions from the Azure marketplace in addition to developing your applications using Azure services and DevOps tools. Azure Stack enables hybrid deployment into public or private cloud to balance flexibility, control and your needs.

Azure Stack Overview

Microsoft Azure Stack is an on-premises (e.g. in your own data center), private (or hybrid when connected to Azure) cloud platform. Currently Azure Stack is in Technical Preview 3 (e.g. TP3) and available as a proof of concept (POC) download from Microsoft. You can use Azure Stack TP3 as a POC for learning, demonstrating and trying features among other activities. Here is a link to a Microsoft video providing an overview of Azure Stack, and here is a good summary of roadmap, licensing and related items.

In summary, Microsoft Azure Stack is:

  • An onsite, on-premises, in-your-data-center extension of the Microsoft Azure public cloud
  • Enabling private and hybrid cloud with strong integration along with common experiences with Azure
  • Adopt, deploy, leverage cloud on your terms and timeline choosing what works best for you
  • Common processes, tools, interfaces, management and user experiences
  • Leverage speed of deployment and configuration with a purpose-built integrated solution
  • Support existing and cloud native Windows, Linux, Container and other services
  • Available as a public preview via software download, as well as vendors offering solutions

What is Azure Stack Technical Preview 3 (TP3)

This version of Azure Stack is a single node running on a lone physical machine (PM) aka bare metal (BM). However, it can also be installed into a virtual machine (VM) using nesting. For example, I have Azure Stack TP3 running nested on a VMware vSphere ESXi 6.5 system with a Windows Server 2016 VM as its base operating system.
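
For what it is worth, here is a hedged PowerCLI sketch of one way to expose hardware-assisted virtualization to the guest VM for nesting; the host and VM names are placeholders, and you can accomplish the same via the vSphere web client VM settings.

# Expose hardware virtualization to the guest VM for nesting
# (requires VMware PowerCLI; host and VM names are placeholders; VM powered off)
Connect-VIServer -Server esxi65-host.example.com
$vm = Get-VM -Name "AzureStackTP3"
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.NestedHVEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)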

Microsoft Azure Stack architecture
Click here or on the above image to view list of VMs and other services (Image via Microsoft.com)

The TP3 POC Azure Stack is not intended for production environments, only for testing, evaluation, learning and demonstrations as part of its terms of use. This version of Azure Stack is associated with a single identity option, either Azure Active Directory (AAD) integrated with Azure, or Active Directory Federation Services (ADFS) for standalone mode. Note that since this is a single server deployment, it is not intended for performance, rather for evaluating functionality, features, APIs and other activities. Learn more about Azure Stack TP3 details here (or click on the image) including names of the various virtual machines (VMs) as well as their roles.

Where to learn more

The following provide more information and insight about Azure, Azure Stack, Microsoft and Windows among related topics.

  • Azure Stack Technical Preview 3 (TP3) Overview Preview Review
  • Azure Stack TP3 Overview Preview Review Part II
  • Azure Stack Technical Preview (get the bits aka software download here)
  • Azure Stack deployment prerequisites (Microsoft)
  • Microsoft Azure Stack troubleshooting (Microsoft Docs)
  • Azure Stack TP3 refresh tips (Azure Stack)
  • Here is a good post with a tip about not applying certain Windows updates to Azure stack TP3 installs.
  • Configure Azure stack TP3 to be available on your own network (Azure Stack)
  • Azure Stack TP3 Marketplace syndication (Azure Stack)
  • Azure Stack TP3 deployment experiences (Azure Stack)
  • Frequently asked questions for Azure Stack (Microsoft)
  • Deploy Azure Stack (Microsoft)
  • Connect to Azure Stack (Microsoft)
  • Azure Active Directory (AAD) and Active Directory Federation Services (ADFS)
  • Azure Stack TP2 deployment experiences by Niklas Akerlund (@vNiklas) useful for tips for TP3
  • Deployment Checker for Azure Stack Technical Preview (Microsoft Technet)
  • Azure stack and other tools (Github)
  • How to enable nested virtualization on Hyper-V Windows Server 2016
  • Dell EMC announce Microsoft Hybrid Cloud Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack Data Sheet (Dell EMC PDF)
  • Dell EMC Cloud Chats (Dell EMC Blog)
  • Microsoft Azure stack forum
  • Dell EMC Microsoft Azure Stack solution
  • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016
  • Overview Review of Microsoft ReFS (Reliable File System) and resource links
  • Via WServerNews.com Cloud (Microsoft Azure) storage considerations
  • Via CloudComputingAdmin.com Cloud Storage Decision Making: Using Microsoft Azure for cloud storage
  • www.thenvmeplace.com, www.thessdplace.com, www.objectstoragecenter.com and www.storageio.com/converge
What this all means

A common question is whether there is demand for private and hybrid cloud. In fact, some industry expert pundits have even said private or hybrid are dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

Given the large number of Microsoft Windows-based servers on VMware, OpenStack and public cloud services as well as other platforms, along with the continued growing popularity of Azure, a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and if it is only for Windows guest operating systems. At this point Windows would indeed be an attractive and comfortable option; however, given the large number of Linux-based guests running on Hyper-V as well as Azure public cloud, those are also primary candidates, as are containers and other services.

    Continue reading more in part two of this two-part series here including installing Microsoft Azure Stack TP3.

    Ok, nuff said (for now…).

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    Azure Stack TP3 Overview Preview Review Part II

    server storage I/O trends

    Azure Stack TP3 Overview Preview (Part II) Install Review

    This is part two of a two-part series looking at Microsoft Azure Stack with a focus on my experiences installing Microsoft Azure Stack Technical Preview 3 (TP3) including into a nested VMware vSphere ESXi environment. Read part one here that provides a general overview of Azure Stack.

    Azure Stack Review and Install

Being familiar with the Microsoft Azure public cloud, having used it for a few years now, I wanted to gain some closer insight and experience, and expand my tradecraft, by installing Azure Stack TP3. This is similar to what I have done in the past with OpenStack, Hadoop, Ceph, VMware, Hyper-V and many others, some of which I need to get around to writing about sometime. As a refresher from part one of this series, the following is an image via Microsoft showing the Azure Stack TP3 architecture; click here or on the image to learn more including the names and functions of the various virtual machines (VMs) that make up Azure Stack.

    Microsoft Azure Stack architecture
    Click here or on the above image to view list of VMs and other services (Image via Microsoft.com)

What’s Involved in Installing Azure Stack TP3?

    The basic steps are as follows:

    • Read this Azure Stack blog post (Azure Stack)
    • Download the bits (e.g. the Azure Stack software) from here, where you access the Azure Stack Downloader tool.
• Plan your deployment, making decisions on Active Directory and other items.
    • Prepare the target server (physical machine aka PM, or virtual machine VM) that will be the Azure Stack destination.
    • Copy Azure Stack software and installer to target server and run pre-install scripts.
    • Modify PowerShell script file if using a VM instead of a PM
    • Run the Azure Stack CloudBuilder setup, configure unattend.xml if needed or answer prompts.
    • Server reboots, select Azure Stack from two boot options.
    • Prepare your Azure Stack base system (time, network NICs in static or DHCP, if running on VMware install VMtools)
    • Determine if you will be running with Azure Active Directory (AAD) or standalone Active Directory Federated Services (ADFS).
    • Update any applicable installation scripts (see notes that follow)
• Run the deployment script, then extend the Azure Stack TP3 PoC as needed

Note that this is a large download of about 16GB (23GB with the optional Windows Server 2016 demo ISO).

Use the AzureStackDownloader tool to download the bits (about 16GB, or 23GB with the optional Windows Server 2016 base image), which will either be in several separate files that you stitch back together with the MicrosoftAzureStackPOC tool, or a large VHDX file and a smaller 6.8GB ISO (Windows Server 2016). Prepare your target server system for installation once you have all the software pieces downloaded (or do the preparations while waiting for the download).

Once you have the software downloaded, if it is a series of eight .bin files (seven about 2GB each, one around 1.5GB), it is a good idea to verify their checksums, then stitch them together on your target system, or on a staging storage device or file share. Note that for the actual deployment first phase, the large resulting cloudbuilder.vhdx file will need to reside in the C:\ root location of the server where you are installing Azure Stack.
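
For the checksum step, a quick sketch from PowerShell follows; the file name pattern is a placeholder for wherever you staged the download, and you would compare the output against the published values.

# Compute SHA256 hashes of the downloaded pieces before stitching them together
# (file name pattern is a placeholder)
Get-ChildItem .\CloudBuilder*.bin | Get-FileHash -Algorithm SHA256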

    server storageio nested azure stack tp3 vmware

    Azure Stack deployment prerequisites (Microsoft) include:

    • At least 12 cores (or more), dual socket processor if possible
    • As much DRAM as possible (I used 100GB)
    • Put the operating system disk on flash SSD (SAS, SATA, NVMe) if possible, allocate at least 200GB (more is better)
    • Four x 140GB or larger (I went with 250GB) drives (HDD or SSD) for data deployment drives
    • A single NIC or adapter (I put mine into static instead of DHCP mode)
    • Verify your physical or virtual server BIOS has VT enabled

The above image helps to set the story of what is being done. On the left is a bare metal (BM) or physical machine (PM) install of Azure Stack TP3; on the right, a nested VMware (vSphere ESXi 6.5) virtual machine (VM) hardware version 11 approach. Note that you could also do a Hyper-V nested install among other approaches. Shown in the image above, common to both BM and VM is a staging area (which could be space on your system drive) where the Azure Stack download occurs. If you use a separate staging area, then simply copy the individual .bin files and stitch them together into the larger .VHDX, or copy the larger .VHDX; which is better is up to your preferences.

    Note that if you use the nested approach, there are a couple of configuration (PowerShell) scripts that need to be updated. These changes are to trick the installer into thinking that it is on a PM when it checks to see if on physical or virtual environments.

Also note that if using nested, make sure you have your VMware vSphere ESXi host along with the specific VM properly configured (e.g. that virtualization and other features are presented to the VM). With vSphere ESXi 6.5 and virtual machine hardware version 11, nesting is night and day easier vs. earlier generations.

Something else to explain here is that you will initially start the Azure Stack install preparation using a standard Windows Server (I used a 2016 version) where the .VHDX is copied into its C:\ root. From there you will execute some PowerShell scripts to set up some configuration files, one of which needs to be modified for nesting.

Once those prep steps are done, there is a CloudBuilder deploy script that gets run, which can be done with an unattend.xml file or manual input. This step will cause a dual-boot option to be added to your server where you can select Azure Stack or your base prep Windows Server instance, followed by a reboot.

After the reboot occurs and you choose to boot into Azure Stack, this is the server instance that will actually run the deployment script, as well as build and launch all the VMs for the Azure Stack TP3 PoC. This is where I recommend having a rough sketch like the above to annotate layers as you go, to remember what layer you are working at. Don’t worry, it becomes much easier once all is said and done.

Speaking of preparing your server, refer to the Microsoft specs; however, in general give the server as many cores and as much RAM as possible. Also if possible place the system disk on a flash SSD (SAS, SATA, NVMe) and make sure that it has at least 200GB, however 250 or even 300GB is better (just in case you need more space).

Additional configuration tips include allocating four data disks for Azure; if possible make these SSDs as well, however it is more important IMHO to have at least the system disk on fast flash SSD. Another tip is to enable only one network card or NIC and put it into static vs. DHCP address mode to make things easier later.
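
Here is a minimal sketch of putting that single NIC into static mode from PowerShell; the adapter alias, addresses and DNS server are placeholders for your own network.

# Assign a static address and DNS server to the one enabled adapter
# (alias and addresses are placeholders; run from an administrator session)
New-NetIPAddress -InterfaceAlias "Ethernet0" -IPAddress 192.168.1.60 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias "Ethernet0" -ServerAddresses 192.168.1.2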

    Tip: If running nested, vSphere 6.5 worked the smoothest as had various issues or inconsistencies with earlier VMware versions, even with VMs that ran nested just fine.

Tip: Why run nested? Simple, I wanted to be able to use VMware tools and snapshots to go back in time, plus share the server with some other activities until ready to give Azure Stack TP3 its own PM.

    Tip: Do not connect the POC machine to the following subnets (192.168.200.0/24, 192.168.100.0/27, 192.168.101.0/26, 192.168.102.0/24, 192.168.103.0/25, 192.168.104.0/25) as Azure Stack TP3 uses those.

    storageio azure stack tp3 vmware configuration

Since I decided to do a nested VM deployment using VMware, there were a few extra steps needed, which I have included as tips and notes. Following is a view via the vSphere client of the ESXi host and VM configuration.

    The following image combines a couple of different things including:

    A: Showing the contents of C:\Azurestack_Supportfiles directory

    B: Modifying the PrepareBootFromVHD.ps1 file if deploying on virtual machine (See tips and notes)

    C: Showing contents of staging area including individual .bin files along with large CloudBuilder.vhdx

    D: Running the PowerShell script commands to prepare the PrepareBootFromVHD.ps1 and related items

preparing azure stack tp3 cloudbuilder for nested vmware deployment

    From PowerShell (administrator):

# Variables
$Uri = 'https://raw.githubusercontent.com/Azure/AzureStack-Tools/master/Deployment/'
$LocalPath = 'c:\AzureStack_SupportFiles'

# Create the local folder
New-Item $LocalPath -ItemType Directory

# Download the support files from the repository
('BootMenuNoKVM.ps1', 'PrepareBootFromVHD.ps1', 'Unattend.xml', 'unattend_NoKVM.xml') |
    ForEach-Object { Invoke-WebRequest ($Uri + $_) -OutFile ($LocalPath + '\' + $_) }

After you do the above, decide if you will be using an Unattend.xml or manual entry of items for building the Azure Stack deployment server (e.g. a Windows Server). Note that the above PowerShell script creates the C:\AzureStack_SupportFiles folder and downloads the script files for building the cloud image using the previously downloaded Azure Stack CloudBuilder.vhdx (which should be in C:\).

A note and tip: if you are doing a VMware or virtual machine based deployment of the TP3 PoC, you will need to change C:\PrepareBootFromVHD.ps1 in the Azure Stack support files folder. Here is a good resource on what gets changed via GitHub, which shows an edit on or about line 87 of PrepareBootFromVHD.ps1. If you run the PrepareBootFromVHD.ps1 script on a virtual machine you will get an error message; the fix is relatively easy (after I found this post).

    Look in PrepareBootFromVHD.ps1 for something like the following around line 87:

if ((Get-Disk | Where-Object {$_.IsBoot -eq $true}).Model -match 'Virtual Disk') {
    Write-Host "The server is currently already booted from a virtual hard disk, to boot the server from the CloudBuilder.vhdx you will need to run this script on an Operating System that is installed on the physical disk of this server."
    Exit
}
    

You can either remove the "Exit" command, or change the test for "Virtual Disk" to something like "X"; for fun I did both (and it worked).
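
For clarity, here is a hedged sketch of what that section of PrepareBootFromVHD.ps1 can look like after making both edits (match string changed and the Exit removed):

# The boot disk model test neutered so the script proceeds on a virtual machine
if ((Get-Disk | Where-Object {$_.IsBoot -eq $true}).Model -match 'X') {
    Write-Host "Booted from a virtual hard disk; continuing anyway for a nested install."
}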

    Note that you only have to make the above and another change in a later step if you are deploying Azure Stack TP3 as a virtual machine.

    Once you are ready, go ahead and launch the PrepareBootFromVHD.ps1 script which will set the BCDBoot entry (more info here).

    azure stack tp3 cloudbuilder nested vmware deployment

You will see a reboot and install; this is installing what will be called the physical instance. Note that this is really being installed on the VM system drive as a secondary boot option (e.g. Azure Stack).

    azure stack tp3 dual boot option

After the reboot, log in to the new Azure Stack base system and complete any configuration, including adding VMware Tools if using VMware nested. Some other things to do include making sure you have your single network adapter set to static (makes things easier), along with any other updates or customizations. Before you run the next steps, you need to decide if you are going to use Azure Active Directory (AAD) or local ADFS.

Note that if you are not running on a virtual machine, simply open a PowerShell (administrator) session and run the deploy script. Refer to here for more guidance on the various options available, including discussion on using AAD or ADFS.

Note that if you run the deployment script on a virtual machine, you will get an error, which is addressed in the next section; otherwise, sit back and watch the progress.

    CloudBuilder Deployment Time

Once you have your Azure Stack deployment system and environment ready, including a snapshot if on a virtual machine, launch the PowerShell deployment script. Note that you will need to have decided whether to deploy with Azure Active Directory (AAD) or Active Directory Federation Services (ADFS) for standalone aka submarine mode. There are also other options you can select as part of the deployment, discussed in the Azure Stack tips here (a must read) and here. I chose to do a submarine mode (e.g. not connected to public Azure and AAD) deployment.

    From PowerShell (administrator):

cd C:\CloudDeployment\Setup
    $adminpass = ConvertTo-SecureString "youradminpass" -AsPlainText -Force
    .\InstallAzureStackPOC.ps1 -AdminPassword $adminpass -UseADFS

    Deploying on VMware Virtual Machines Tips

Here is a good tip via Gareth Jones (@garethjones294) that I found useful for updating one of the deployment script files (BareMetal_Tests.ps1, located in the C:\CloudDeployment\Roles\PhysicalMachines\Tests folder) so that it will skip the bare metal (PM) vs. VM tests. Another good resource, even though it is for TP2 and early versions of VMware, is TP2 deployment experiences by Niklas Akerlund (@vNiklas).

Note that this is a bit of a chicken and egg scenario unless you are proficient at digging into script files, since the BareMetal_Tests.ps1 file does not get unpacked until you run the CloudBuilder deployment script. If you run the script and get an error, then make the changes below and rerun the script as noted. Once you make the modification to the BareMetal_Tests.ps1 file, keep a copy in a safe place for future use.

Here are some more tips for deploying Azure Stack on VMware.

Per the tip mentioned above via Gareth Jones (tip: read Gareth’s post vs. simply cutting and pasting the following, which is more of a guide):

• Open the BareMetal_Tests.ps1 file in PowerShell ISE and navigate to line 376 (or in that area)
• Change $false to $true, which will stop the script failing when checking to see if Azure Stack is running inside a VM
• Next go to line 453
• Change the last part of the line to read “Should Not BeLessThan 0”, which will stop the script checking for the required amount of cores available

After you make the above correction, as with any error (and fix) during Azure Stack TP3 PoC deployment, simply run the following.

    cd C:\CloudDeployment\Setup
    .\InstallAzureStackPOC.ps1 -rerun
    

    Refer to the extra links in the where to learn more section below that offer various tips, tricks and insight that I found useful, particular for deploying on VMware aka nested. Also in the links below are tips on general Azure Stack, TP2, TP3, adding services among other insight.

    starting azure stack tp3 deployment

Tip: If you are deploying the Azure Stack TP3 PoC on a virtual machine, once you start the script above, copy the modified BareMetal_Tests.ps1 file into place (see the chicken and egg note above).

Once the CloudBuilder deployment starts, sit back and wait; if you are using SSDs it will take a while, and if using HDDs it will take a long while (up to hours). However, check in on it now and then to see progress and if there are any errors. Note that some of the common errors will occur very early in the deployment, such as the BareMetal_Tests.ps1 one mentioned above.

    azure stack tp3 deployment finished

Check in periodically to see how the deployment is progressing, as well as what is occurring. If you have the time, watch some of the scripts, as you can see some interesting things such as the software-defined data center (SDDC) aka software-defined data infrastructure (SDDI) aka Azure Stack virtual environment being created. This includes virtual machine creation and population, creating the software-defined storage using Storage Spaces Direct (S2D), and virtual networking and Active Directory along with domain controllers among other activity.

    azure stack tp3 deployment progress

    After Azure Stack Deployment Completes

After you see the deployment completed, you can try accessing the management portal; however, there may be some background processing still running. Here is a good tip post on connecting to Azure Stack from Microsoft using Remote Desktop (RDP) access. Use RDP from the Azure Stack deployment Windows Server and connect to the virtual machine named MAS-CON01, launch Server Manager, and for Local Server disable Internet Explorer Enhanced Security (make sure you are on the right system, see the tip mentioned above). Disconnect from MAS-CON01 (refer to the Azure Stack architecture image above), then reconnect and launch Internet Explorer with the portal URL (note the URL the documentation said to use did not work for me).

Note the username for the Azure Stack system is AzureStack\AzureStackAdmin, with the password being what you set for the administrator during setup. If you get an error, verify the URLs, check your network connectivity, wait a few minutes, and verify which server you are trying to connect from and to. Keep in mind that even if deploying on a PM or BM (e.g. a non-virtual server or VM), the Azure Stack TP3 PoC deployment creates a "virtual" software-defined environment with servers, storage (Azure Stack uses Storage Spaces Direct [S2D]) and software-defined networking.
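
As a small convenience, you can launch that RDP session straight from PowerShell on the deployment host; the VM name matches the architecture image above, and the sign-in uses the credentials just described.

# Open an RDP session to the Azure Stack console VM
# (sign in as AzureStack\AzureStackAdmin with the password set during setup)
mstsc.exe /v:MAS-CON01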

    accessing azure stack tp3 management portal dashboard

Once able to connect to Azure Stack, you can add new services, including virtual machine image instances such as Windows (use the Server 2016 ISO that is part of the Azure Stack downloads), Linux or others. You can also go to these Microsoft resources for some first learning scenarios, using the management portals, configuring PowerShell and troubleshooting.
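
To illustrate the configuring PowerShell piece, here is a hedged sketch using the AzureRM cmdlets of the TP3 era; the environment and resource group names are placeholders, and the ARM endpoint shown is the TP3 default per Microsoft documentation (verify against the current docs).

# Register the local Azure Stack ARM endpoint, sign in, then create a resource group
# (environment and resource group names are placeholders; endpoint per TP3 defaults)
Add-AzureRmEnvironment -Name "AzureStackUser" -ArmEndpoint "https://management.local.azurestack.external"
Login-AzureRmAccount -EnvironmentName "AzureStackUser"
New-AzureRmResourceGroup -Name "demo-rg" -Location "local"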

    Where to learn more

    The following provide more information and insight about Azure, Azure Stack, Microsoft and Windows among related topics.

  • Azure Stack Technical Preview 3 (TP3) Overview Preview Review
  • Azure Stack TP3 Overview Preview Review Part II
  • Azure Stack Technical Preview (get the bits aka software download here)
  • Azure Stack deployment prerequisites (Microsoft)
  • Microsoft Azure Stack troubleshooting (Microsoft Docs)
  • Azure Stack TP3 refresh tips (Azure Stack)
  • Here is a good post with a tip about not applying certain Windows updates to Azure Stack TP3 installs.
  • Configure Azure Stack TP3 to be available on your own network (Azure Stack)
  • Azure Stack TP3 Marketplace syndication (Azure Stack)
  • Azure Stack TP3 deployment experiences (Azure Stack)
  • Frequently asked questions for Azure Stack (Microsoft)
  • Azure Active Directory (AAD) and Active Directory Federation Services (ADFS)
  • Deploy Azure Stack (Microsoft)
  • Connect to Azure Stack (Microsoft)
  • Azure Stack TP2 deployment experiences by Niklas Akerlund (@vNiklas) useful for tips for TP3
  • Deployment Checker for Azure Stack Technical Preview (Microsoft Technet)
  • Azure stack and other tools (Github)
  • How to enable nested virtualization on Hyper-V Windows Server 2016
  • Dell EMC announce Microsoft Hybrid Cloud Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack Data Sheet (Dell EMC PDF)
  • Dell EMC Cloud Chats (Dell EMC Blog)
  • Microsoft Azure stack forum
  • Dell EMC Microsoft Azure Stack solution
  • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016
  • Overview Review of Microsoft ReFS (Reliable File System) and resource links
  • Via WServerNews.com Cloud (Microsoft Azure) storage considerations
  • Via CloudComputingAdmin.com Cloud Storage Decision Making: Using Microsoft Azure for cloud storage
  • www.thenvmeplace.com, www.thessdplace.com, www.objectstoragecenter.com and www.storageio.com/converge
What this all means

A common question is whether there is demand for private and hybrid cloud. In fact, some industry expert pundits have even said private or hybrid are dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

Given the large number of Microsoft Windows-based servers on VMware, OpenStack and public cloud services as well as other platforms, along with the continued growing popularity of Azure, a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and if it is only for Windows guest operating systems. At this point Windows would indeed be an attractive and comfortable option; however, given the large number of Linux-based guests running on Hyper-V as well as Azure public cloud, those are also primary candidates, as are containers and other services.

    software defined data infrastructures SDDI and SDDC

Some will say that if OpenStack, being free open source, is struggling in many organizations, how can Microsoft have success with Azure Stack? The answer could be that some organizations have struggled with OpenStack while others have not, due to a lack of commercial services and turnkey support. Having installed both OpenStack and Azure Stack (as well as VMware among others), Azure Stack, at least the TP3 PoC, is easy to install, granted it is limited to one node, unlike the production versions. Likewise, there are easy-to-use appliance versions of OpenStack that are limited in scale, as well as more involved installs that unlock full functionality.

OpenStack, Azure Stack, VMware and others have their places, alongside, or supporting, containers along with other tools. In some cases, those technologies may exist in the same environment supporting different workloads, as well as accessing various public clouds; after all, hybrid is the home run for many if not most legacy IT environments.

    Ok, nuff said (for now…).

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    Dell EMC Announce Azure Stack Hybrid Cloud Solution

    server storage I/O trends

    Dell EMC Azure Stack Hybrid Cloud Solution

Dell EMC have announced their Microsoft Azure Stack hybrid cloud platform solutions. This announcement builds upon earlier statements of support and intention by Dell EMC to be part of the Microsoft Azure Stack community. For those of you who are not familiar, Azure Stack is an on-premises extension of the Microsoft Azure public cloud.

What this means is that essentially you can have the Microsoft Azure experience (or a subset of it) in your own data center or data infrastructure, enabling cloud experiences and abilities at your own pace, your own way, with control. Learn more about Microsoft Azure Stack, including my experiences with installing Technical Preview 3 (TP3), here.

    software defined data infrastructures SDDI and SDDC

    What Is Azure Stack

    Microsoft Azure Stack is an on-premises (e.g. in your own data center) private (or hybrid when connected to Azure) cloud platform. Currently Azure Stack is in Technical Preview 3 (e.g. TP3) and available as a proof of concept (POC) download from Microsoft. You can use Azure Stack TP3 as a POC for learning, demonstrating and trying features among other activities. Here is link to a Microsoft Video providing an overview of Azure Stack, and here is a good summary of roadmap, licensing and related items.

    In summary, Microsoft Azure Stack and this announcement is about:

• An onsite, on-premises, in-your-data-center extension of the Microsoft Azure public cloud
    • Enabling private and hybrid cloud with good integration along with shared experiences with Azure
    • Adopt, deploy, leverage cloud on your terms and timeline choosing what works best for you
    • Common processes, tools, interfaces, management and user experiences
    • Leverage speed of deployment and configuration with a purpose-built integrated solution
    • Support existing and cloud-native Windows, Linux, Container and other services
    • Available as a public preview via software download, as well as vendors offering solutions

    What Did Dell EMC Announce

Dell EMC announced their initial product, platform solutions, and services for Azure Stack. This includes a proof of concept (PoC) starter kit (PE R630) for doing evaluations, prototyping, training, development, test, DevOps and other initial activities with Azure Stack. Dell EMC also announced a larger turnkey solution for production deployment or large-scale development, test and DevOps activity. The initial production solution scales from 4 to 12 nodes, or from 80 to 336 cores, and includes hardware (server compute, memory, I/O and networking, top of rack (TOR) switches), management, and Azure Stack software along with services. Other aspects of the announcement include initial services in support of Microsoft Azure Stack and Azure cloud offerings.
    server storage I/O trends
    Image via Dell EMC

    The announcement builds on joint Dell EMC Microsoft experience, partnerships, technologies and services spanning hardware, software, on site data center and public cloud.
    server storage I/O trends
    Image via Dell EMC

Dell EMC along with Microsoft have engineered a hybrid cloud platform for organizations to modernize their data infrastructures, enabling faster innovation and accelerated deployment of resources. The solution includes hardware (server compute, memory, I/O networking, storage devices), software, services and support.
    server storage I/O trends
    Image via Dell EMC

The value proposition of Dell EMC hybrid cloud for Microsoft Azure Stack includes a consistent experience for developers and IT data infrastructure professionals: a common experience across Azure public cloud and Azure Stack on-premises in your data center for private or hybrid. This includes a common portal, PowerShell, DevOps tools, Azure Resource Manager (ARM), Azure Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), cloud infrastructure and associated experiences (management, provisioning, services).
    server storage I/O trends
    Image via Dell EMC

Secure, protect, preserve and serve applications and VMs hosted on Azure Stack with Dell EMC services along with Microsoft technologies. Dell EMC data protection includes backup and restore, Encryption as a Service, host guardian and protected VMs, and AD integration among other features.
    server storage I/O trends
    Image via Dell EMC

Dell EMC services for Microsoft Azure Stack include single-contact support for preparation, assessment and planning; deployment with rack integration, delivery and configuration; and extending the platform with applicable migration, integration with Office 365 and other applications, and building new services.
    server storage I/O trends
    Image via Dell EMC

Dell EMC hyper-converged scale-out solutions range from a minimum of 4 x PowerEdge R730XD (total raw specs include 80 cores (4 x 20), 1TB RAM (4 x 256GB), 12.8TB SSD cache and 192TB storage), plus two top of rack network switches (Dell EMC) and a 1U management server node. The initial maximum configuration raw specification includes 12 x R730XD (total 336 cores), 6TB memory, 86TB SSD cache and 900TB storage, along with TOR network switches and a management server.

The above configurations initially enable HCI nodes of small (low) with 20 cores, 256GB memory, 5.7TB SSD cache and 40TB storage; mid-size with 24 cores, 384GB memory, 11.5TB cache and 60TB storage; and high-capacity with 28 cores, 512GB memory, 11.5TB cache and 80TB storage per node.
    server storage I/O trends
    Image via Dell EMC

The Dell EMC Evaluator program for Microsoft Azure Stack includes the PE R630 for PoCs, development, test and training environments. The solution combines Microsoft Azure Stack software and a Dell EMC server with an Intel E5-2630 (10 cores, 20 threads / logical processors or LPs) or Intel E5-2650 (12 cores, 24 threads / LPs). Memory is 128GB or 256GB; storage includes flash SSD (2 x 480GB SAS), HDD (6 x 1TB SAS) and networking.
    server storage I/O trends
    Image via Dell EMC

Collaborative support with a single point of contact between Microsoft and Dell EMC.

    Who Is This For

This announcement is for any organization that is looking for an on-premises (in your data center) private or hybrid cloud turnkey solution stack. This initial set of announcements can be for those looking to do a proof of concept (PoC) or advanced prototype, support development, test and DevOps, or gain the cloud-like elasticity, ease of use, rapid procurement and other experiences of public cloud, on your terms and timeline. Naturally, there is a strong affinity and seamless experience for those already using, or planning to use, Azure public cloud for Windows, Linux, containers and other workloads, applications, and services.

    What Does This Cost

Check with your Dell EMC representative or partner for exact pricing, which varies by size and configuration. There are also various licensing models to take into consideration if you have Microsoft Enterprise License Agreements (ELAs), which your Dell EMC representative or business partner can address for you. Likewise, being cloud based, there are also time and usage-based options to explore.

    Where to learn more

    What this all means

The dust is starting to settle on last fall's Dell EMC integration; both companies have long histories working with, and partnering along with, Microsoft on legacy as well as virtual software-defined data centers (SDDC), software-defined data infrastructures (SDDI), native, and hybrid clouds. Some may view the Dell EMC VMware relationship as the primary focus; however, keep in mind that both Dell and EMC had worked with Microsoft long before VMware came into being. Likewise, Microsoft remains one of the most commonly deployed operating systems in VMware-based environments. Granted, Dell EMC has a significant focus on VMware, however they also sell, service and support many offerings for Microsoft-based solutions.

What about Cisco, HPE, Lenovo among others who have yet to announce, or have only discussed, their Microsoft Azure Stack intentions? Good question; until we hear more about what those and others are doing or planning, there is not much more to do or discuss beyond speculating for now. Another common question is whether there is demand for private and hybrid cloud; in fact, some industry expert pundits have even said private or hybrid are dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is too early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

Given the large number of Microsoft Windows-based servers on VMware, OpenStack, public cloud services as well as other platforms, along with the continued growing popularity of Azure, a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and whether it is only for Windows guest operating systems. At this point, Windows would indeed be an attractive and comfortable option; however, given the large number of Linux-based guests running on Hyper-V as well as Azure public cloud, those are also prime candidates, as are containers and other services.

Overall, this is an excellent and exciting move for Microsoft, extending its public cloud software stack to be deployed within data centers in a hybrid way, something those customers are familiar with doing. This is a good example of hybrid spanning public and private clouds, remote and on-premises, as well as combining the familiarity and control of traditional procurement with the flexibility and elasticity experience of clouds.

    software defined data infrastructures SDDI and SDDC

Some will ask: if OpenStack, being free and open source, is struggling in many organizations, how can Microsoft have success with Azure Stack? The answer could be that some organizations have struggled with OpenStack due to a lack of commercial services and turnkey support, while others have not. Having installed both OpenStack and Azure Stack (as well as VMware among others), Azure Stack, at least the TP3 PoC, is easy to install, granted it is limited to one node, unlike the production versions. Likewise, there are easy-to-use appliance versions of OpenStack that are limited in scale, as well as more involved installs that unlock full functionality.

OpenStack, Azure Stack, VMware and others all have their places, alone or supporting containers along with other tools. In some cases, those technologies may exist in the same environment supporting different workloads, as well as accessing various public clouds; after all, hybrid is the home run for many if not most legacy IT environments.

Overall, this is a good announcement from Dell EMC for those who are interested in, or should become more aware of, Microsoft Azure Stack and Azure cloud along with hybrid clouds. Likewise, look forward to hearing more about solutions from others who will be supporting Azure Stack as well as other hybrid (and virtual private) clouds.

    Ok, nuff said (for now…).

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    VMware vSAN 6.6 hyper-converged (HCI) software defined data infrastructure

    server storage I/O trends

    VMware vSAN 6.6 hyper-converged (HCI) software defined data infrastructure

    In case you missed it, VMware announced vSAN v6.6 hyper-converged infrastructure (HCI) software defined data infrastructure solution. This is the first of a five-part series about VMware vSAN V6.6. Part II (just the speeds feeds please) is located here, part III (reducing cost and complexity) located here, part IV (scaling ROBO and data centers today) found here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

    VMware vSAN 6.6
    Image via VMware

For those who are not aware, vSAN is VMware's virtual storage area network: software-defined storage that is part of a software-defined data infrastructure (SDDI) and software-defined data center (SDDC). Besides being software-defined, vSAN is HCI, combining compute (server), I/O networking and storage (space and I/O) along with hypervisors, management, and other tools.

    Software-defined data infrastructure

Excuse Me, What is vSAN and who is it for

Some might find it odd having to explain what vSAN is; on the other hand, not everybody is dialed into the VMware world ecosystem, so let's give them some help. Everybody else, feel free to jump ahead.

For those not familiar, VMware vSAN is an HCI software-defined storage solution that converges compute (hypervisors and server) with storage space capacity and I/O performance along with networking. Being HCI means that with vSAN, as you scale compute, storage space capacity and I/O performance also increase in an aggregated fashion. Likewise, as you increase storage space capacity and server I/O performance, you also get more compute capabilities (along with memory).

For VMware-centric environments looking to go CI or HCI, vSAN offers a compelling value proposition leveraging known VMware tools and staff skills (knowledge, experience, tradecraft). Another benefit of vSAN is the ability to select your hardware platform from different vendors, a trend that other CI/HCI vendors have started to offer as well.

    CI and HCI data infrastructure

Keep in mind that fast applications need fast servers, I/O and storage, and that server storage I/O needs CPU along with memory to generate I/O operations (IOPs) or move data. What this all means is that HCI solutions such as VMware vSAN combine or converge the server compute, hypervisors, storage file system, storage devices, I/O and networking along with other functionality into an easy to deploy (and manage) turnkey solution.

    Learn more about CI and HCI along with who some other vendors are as well as considerations at www.storageio.com/converge. Also, visit VMware sites to find out more about vSphere ESXi hypervisors, vSAN, NSX (Software Defined Networking), vCenter, vRealize along with other tools for enabling SDDC and SDDI.

    Give Me the Quick Elevator Pitch Summary

VMware has enhanced vSAN with version 6.6 (V6.6), enabling new functionality and supporting new hardware platforms along with partners, while reducing costs and improving scalability and resiliency for SDDC and SDDI environments. This spans small and medium business (SMB), mid-market and small to medium enterprise (SME), as well as workgroup, departmental and Remote Office Branch Office (ROBO) environments.

Being an HCI solution, management functions of the server, storage, I/O, networking, hypervisor, hardware, and software are converged to improve management productivity. Also, vSAN integrates with VMware vSphere among other tools to enable a modern, robust data infrastructure that serves, protects, preserves, secures and stores data along with associated applications.

    Where to Learn More

    The following are additional resources to learn more about vSAN and related technologies.

    What this all means

Overall, a good set of enhancements as vSAN continues its evolution, from just a few years ago to where it is today and where it will be in the future. If you have not looked at vSAN recently, take some time beyond reading this piece to learn some more.

    Continue reading more about VMware vSAN 6.6 in part II (just the speeds feeds please) is located here, part III (reducing cost and complexity) located here, part IV (scaling ROBO and data centers today) located here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

    Ok, nuff said (for now…).

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    VMware vSAN V6.6 Part II (just the speeds feeds features please)

    server storage I/O trends

    VMware vSAN v6.6 Part II (just the speeds feeds features please)

    In case you missed it, VMware announced vSAN v6.6 hyper-converged infrastructure (HCI) software defined data infrastructure solution. This is the second of a five-part series about VMware vSAN V6.6. View Part I here, part III (reducing cost and complexity) located here, part IV (scaling ROBO and data centers today) found here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

    VMware vSAN 6.6
    Image via VMware

For those who are not aware, vSAN is VMware's virtual storage area network: software-defined storage that is part of a software-defined data infrastructure (SDDI) and software-defined data center (SDDC). Besides being software-defined, vSAN is HCI, combining compute (server), I/O networking and storage (space and I/O) along with hypervisors, management, and other tools.

    Just the Speeds and Feeds Please

    For those who just want to see the list of what’s new with vSAN V6.6, here you go:

    • Native encryption for data-at-rest
    • Compliance certifications
    • Resilient management independent of vCenter
    • Degraded Disk Handling v2.0 (DDHv2)
    • Smart repairs and enhanced rebalancing
    • Intelligent rebuilds using partial repairs
    • Certified file service & data protection solutions
    • Stretched clusters with local failure protection
    • Site affinity for stretched clusters
    • 1-click witness change for Stretched Cluster
    • vSAN Management Pack for vRealize
    • Enhanced vSAN SDK and PowerCLI
    • Simple networking with Unicast
    • vSAN Cloud Analytics with real-time support notification and recommendations
    • vSAN ConfigAssist with 1-click hardware lifecycle management
    • Extended vSAN Health Services
    • vSAN Easy Install with 1-click fixes
    • Up to 50% greater IOPS for all-flash with optimized checksum and dedupe
    • Support for new next-gen workloads
    • vSAN for Photon in Photon Platform 1.1
    • Day 0 support for latest flash technologies
    • Expanded caching tier choice
    • Docker Volume Driver 1.1

    What’s New and Value Proposition of vSAN 6.6

Let’s take a closer look beyond the bullet list of what’s new with vSAN 6.6, along with perspectives on how those features address different needs. The VMware vSAN proposition is to evolve and enable modernizing data infrastructures with HCI powered by vSphere along with vSAN.

    Three main themes or characteristics (and benefits) of vSAN 6.6 include addressing (or enabling):

    • Reducing risk while scaling
    • Reducing cost and complexity
    • Scaling for today and tomorrow

    VMware vSAN 6.6 summary
    Image via VMware

    Reducing risk while scaling

Reducing (or removing) risk while evolving your data infrastructure with HCI, including the flexibility of choosing among five supported hardware vendors along with native security. This includes availability and resiliency enhancements (including intelligent rebuilds) without sacrificing storage efficiency (capacity) or effectiveness (performance productivity), management, or choice.

    VMware vSAN DaRE
    Image via VMware

Data at Rest Encryption (DaRE) of all vSAN data objects, enabled at the cluster level. The new functionality supports hybrid along with all-flash SSD as well as stretched clusters. The VMware vSAN DaRE implementation is an alternative to using self-encrypting drives (SEDs), reducing cost, complexity and management activity. All vSAN features including data footprint reduction (DFR) features such as compression and deduplication are supported. For security, vSAN DaRE integrates with compliance key management technologies including those from SafeNet, Hytrust, Thales and Vormetric among others.

    VMware vSAN management
    Image via VMware

There is an ESXi HTML5-based host client, along with a CLI via ESXCLI, for administering vSAN clusters as an alternative in case your vCenter server(s) are offline. Management capabilities include monitoring of critical health and status details along with configuration changes.

    VMware vSAN health management
    Image via VMware

Health monitoring enhancements include intelligent handling of degraded vSAN devices, proactively detecting impending device failures. As part of the functionality, if a replica of the failing (or possibly soon-to-fail) device exists, vSAN can take action to maintain data availability.

    Where to Learn More

    The following are additional resources to find out more about vSAN and related technologies.

    What this all means

With each new release, vSAN increases its features, functionality and resiliency, closing the gap with traditional storage and non-CI or HCI solutions. Continue reading more about VMware vSAN 6.6 in Part I here, part III (reducing cost and complexity) located here, part IV (scaling ROBO and data centers today) found here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

    Ok, nuff said (for now…).

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the Spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    VMware vSAN V6.6 Part III (reducing costs complexity)

    server storage I/O trends

    VMware vSAN V6.6 Part III (Reducing costs complexity)

    In case you missed it, VMware announced vSAN v6.6 hyper-converged infrastructure (HCI) software defined data infrastructure solution. This is the third of a five-part series about VMware vSAN V6.6. View Part I here, Part II (just the speeds feeds please) is located here, part IV (scaling ROBO and data centers today) found here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

    VMware vSAN 6.6
    Image via VMware

For those who are not aware, vSAN is VMware's virtual storage area network: software-defined storage that is part of a software-defined data infrastructure (SDDI) and software-defined data center (SDDC). Besides being software-defined, vSAN is HCI, combining compute (server), I/O networking and storage (space and I/O) along with hypervisors, management, and other tools.

    Reducing cost and complexity

Reducing your total cost of ownership (TCO) includes lowering capital expenditures (CapEx) and operating expenditures (OpEx); VMware is claiming a CapEx and OpEx reduced TCO of 50%. Keep in mind that solutions such as vSAN can also help drive return on investment (ROI) as well as return on innovation (the other ROI) via improved productivity, effectiveness, as well as efficiencies (savings). Another aspect of addressing TCO and ROI is flexibility, leveraging stretched clusters to address high availability (HA), business resiliency (BR), business continuity (BC) and disaster recovery (DR) needs cost-effectively. These enhancements include efficiency (and effectiveness, e.g. productivity) at scale, proactive cloud analytics, and intelligent operations.
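To put TCO claims such as that 50% figure into context for your own environment, it helps to run the numbers yourself. The following minimal Python sketch frames a simple multi-year CapEx plus OpEx comparison; all dollar figures are hypothetical placeholders (not VMware or vendor pricing), so substitute your own.

# Hypothetical multi-year TCO comparison; dollar figures are made up
# for illustration only and are not vendor pricing.
def tco(capex, annual_opex, years=3):
    """Total cost of ownership = acquisition cost + operating cost over time."""
    return capex + annual_opex * years

legacy_tco = tco(capex=500_000, annual_opex=200_000)  # e.g. traditional SAN + servers
hci_tco    = tco(capex=350_000, annual_opex=100_000)  # e.g. an HCI alternative

savings_pct = 100 * (legacy_tco - hci_tco) / legacy_tco
print(f"Legacy: ${legacy_tco:,}  HCI: ${hci_tco:,}  savings: {savings_pct:.0f}%")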

    VMware vSAN stretch cluster
    Image via VMware

Low cost (or cost-effective) local and remote resiliency and data protection with stretched clusters across sites. Upon a site failure, vSAN maintains availability by leveraging surviving site redundancy. For performance and productivity effectiveness, I/O traffic is kept local where possible and practical, reducing cross-site network workload. Bear in mind that the best I/O is the one you do not have to do; the second best is the one with the least impact.

This means if you can address I/Os as close to the application as possible (e.g. locality of reference), that is a better I/O. On the other hand, when data is not local, then the best I/O is the one involving a local or remote site with the least overhead impact to applications, as well as to server storage I/O (including network) resources. Also keep in mind that with vSAN you can fine-tune availability, resiliency and data protection to meet various needs by adjusting the fault tolerance method (FTM) and the number of failures to tolerate (FTT).

    server storage I/O locality of reference
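To see what those FTM and FTT choices mean for capacity, consider the commonly cited raw-versus-usable multipliers: mirroring keeps FTT+1 full copies, while erasure coding adds parity instead. The Python sketch below is illustrative only; verify the multipliers against current VMware sizing guidance for your vSAN version and configuration.

# Approximate raw capacity needed per TB of usable data under different
# vSAN protection settings (commonly cited multipliers; verify against
# current VMware sizing guidance for your version and configuration).
protection_multiplier = {
    "FTT=1 mirror (RAID-1)":       2.0,   # two full copies
    "FTT=2 mirror (RAID-1)":       3.0,   # three full copies
    "FTT=1 erasure code (RAID-5)": 1.33,  # 3+1 parity stripe, all-flash only
    "FTT=2 erasure code (RAID-6)": 1.5,   # 4+2 parity stripe, all-flash only
}

usable_tb = 100
for scheme, multiplier in protection_multiplier.items():
    print(f"{scheme}: ~{usable_tb * multiplier:.0f}TB raw for {usable_tb}TB usable")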

Network and cloud-friendly unicast communication enhancements: to improve performance, availability, and capacity (reducing CPU demand), multicast communications are no longer used, making for easier, simplified single-site and stretched cluster configurations. When vSAN clusters are upgraded to V6.6, unicast is enabled.

    VMware vSAN unicast
    Image via VMware

Gaining insight and awareness, adding intelligence to avoid flying blind: introducing vSAN Cloud Analytics and Proactive Guidance. Part of the VMware customer experience improvement program, it leverages cloud-based health checks for easy online detection of known issues, along with relevant knowledge base pieces as well as other support notices. Whether you choose to refer to this feature as advanced analytics, artificial intelligence (AI), or proactive rules-enabled management problem isolation and resolution, I will leave that up to you.

    VMware vSAN cloud analytics
    Image via VMware

As part of the new tool's analytics capabilities and prescriptive problem resolution (hmm, some might call that AI or advanced analytics, just saying), health check issues are identified and notifications raised along with suggested remediation. Another feature is the ability to leverage continuous proactive updates for advance remediation vs. waiting for subsequent vSAN releases. The net result and benefit is reduced time and complexity troubleshooting converged data infrastructure issues spanning servers, storage, I/O networking, hardware, software, cloud, and configuration. In other words, more time to be productive vs. finding and fixing problems, leveraging informed awareness for smart decision-making.

    Where to Learn More

    The following are additional resources to find out more about vSAN and related technologies.

    What this all means

    Continue reading more about VMware vSAN 6.6 in part I here, part II (just the speeds feeds please) located here, part IV (scaling ROBO and data centers today) found here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

    Ok, nuff said (for now…).

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    VMware vSAN V6.6 Part IV (HCI scaling ROBO and data centers today)

    server storage I/O trends

    VMware vSAN V6.6 Part IV (HCI scaling ROBO and data centers today)

    In case you missed it, VMware announced vSAN v6.6 hyper-converged infrastructure (HCI) software defined data infrastructure solution. This is the fourth of a five-part series about VMware vSAN V6.6. View Part I here, Part II (just the speeds feeds please) is located here, part III (reducing cost and complexity) located here, as well as part V here (VMware vSAN evolution, where to learn more and summary).

    VMware vSAN 6.6
    Image via VMware

For those who are not aware, vSAN is VMware's virtual storage area network: software-defined storage that is part of a software-defined data infrastructure (SDDI) and software-defined data center (SDDC). Besides being software-defined, vSAN is HCI, combining compute (server), I/O networking and storage (space and I/O) along with hypervisors, management, and other tools.

    Scaling HCI for ROBO and data centers today and for tomorrow

Scaling with stability for today and tomorrow. This includes addressing your applications' Performance, Availability, Capacity and Economics (PACE) workload requirements today and for the future. Scaling with stability means boosting performance, availability (data protection, security, resiliency, durability, FTT) and effective capacity without one of those attributes compromising another.

    VMware vSAN data center scaling
    Image via VMware

Scaling today for tomorrow also means adapting to today's needs while remaining flexible to evolve with new application workloads, hardware, as well as clouds (public, private, hybrid, inter- and intra-cloud). As part of continued performance improvements, there are enhancements optimizing for higher-performance flash SSDs, including NVMe-based devices.

    VMware vSAN cloud analytics
    Image via VMware

Part of scaling with stability means enhancing performance (as well as productivity), or the effectiveness of a solution. Keep in mind that efficiency is often associated with storage (or server or network) space capacity savings or reductions. In that context, effectiveness means performance and productivity, or how much work can be done with the least overhead impact. vSAN V6.6 performance enhancements include reduced checksum overhead, enhanced compression and deduplication, along with destaging optimizations.

Other enhancements that collectively contribute to vSAN performance improvements include VMware object handling (not to be confused with cloud or object storage S3 or Swift objects) as well as faster iSCSI for vSAN. Also improved are more accurate, refined cache sizing guidelines. Keep in mind that a little bit of NAND flash SSD or SCM in the right place can have a significant benefit, while a lot of flash cache costs much cash.
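On cache sizing, a long-standing rule of thumb has been a cache tier of roughly 10% of anticipated consumed capacity; the refined guidelines mentioned above adjust this for workload and configuration, so treat the Python sketch below as an illustration of the rule of thumb only, and check current VMware design and sizing guidance for your release.

# Rough vSAN cache tier sizing using the commonly cited 10% rule of thumb;
# verify against current VMware design and sizing guidance.
def cache_tier_size_tb(consumed_capacity_tb, cache_ratio=0.10):
    """Suggested aggregate cache tier size for anticipated consumed capacity."""
    return consumed_capacity_tb * cache_ratio

for consumed in (20, 60, 120):
    print(f"{consumed}TB consumed -> ~{cache_tier_size_tb(consumed):.1f}TB cache tier")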

Part of enabling and leveraging new technology today includes support for larger-capacity 1.6TB flash SSD drives for cache, as well as lower read latency with 3D XPoint and NVMe drives such as those from Intel among others. Refer to the VMware vSAN HCL for currently supported devices, which continue to evolve along with the partner ecosystem. Future-proofing is also enabled, in that you can grow from today to tomorrow as new storage class memories (SCM), other flash SSDs and NVMe-enhanced storage among other technologies are introduced into the market as well as onto the VMware vSAN HCL.

    VMware vSAN and data center class applications
    Image via VMware

Traditional CI, and in particular many HCI solutions, have been optimized or focused on smaller application workloads including VDI, resulting in the perception that HCI in general is only for smaller environments, or for larger environments' non-mission-critical workloads. With vSAN V6.6, VMware is addressing and enabling larger-environment mission-critical applications, including InterSystems Caché based medical health management software among others. Other application workload extensions include support for higher-performance, demanding Hadoop big data analytics, as well as extending virtual desktop infrastructure (VDI) workspaces with XenDesktop/XenApp, along with Photon 1.1 container support.

What about VMware vSAN 6.6 Packaging and License Options

As part of vSAN 6.6, VMware offers several packaged solution bundle options for the data center as well as smaller ROBO environments. Contact your VMware representative or partner to learn more about specific details.

    VMware vSAN cloud analytics
    Image via VMware

    VMware vSAN cloud analytics
    Image via VMware

    Where to Learn More

    The following are additional resources to find out more about vSAN and related technologies.

    What this all means

    Continue reading more about VMware vSAN 6.6 in part I here, part II (just the speeds feeds please) is located here, part III (reducing cost and complexity) located here as well as part V here (VMware vSAN evolution, where to learn more and summary).

    Ok, nuff said (for now…).

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the Spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    March 2017 Server StorageIO Data Infrastructure Update Newsletter

    Volume 17, Issue III

    Hello and welcome to the March 2017 issue of the Server StorageIO update newsletter.

First, a reminder that World Backup (and recovery) Day is March 31. Following up on the February Server StorageIO update newsletter that had a focus on data protection, this edition includes some additional posts, articles, tips and commentary below.

    Other data infrastructure (and tradecraft) topics in this edition include cloud, virtual, server, storage and I/O including NVMe as well as networks. Industry trends include new technology and services announcements, cloud services, HPE buying Nimble among other activity. Check out the Converged Infrastructure (CI), Hyper-Converged (HCI) and Cluster in Box (or Cloud in Box) coverage including a recent SNIA webinar I was invited to be the guest presenter for, along with companion post below.

    In This Issue

    Enjoy this edition of the Server StorageIO update newsletter.

    Cheers GS

    Data Infrastructure and IT Industry Activity Trends

    Some recent Industry Activities, Trends, News and Announcements include:

Dell EMC has discontinued the DSSD D5, its NVMe direct-attached shared all-flash array. At about the same time as it is shutting down the DSSD D5 product, Dell EMC has also signaled it will leverage the various technologies including NVMe across its broad server storage portfolio in different ways moving forward. While Dell EMC is shutting down DSSD D5, it is also bringing additional NVMe solutions to the market, including those it has been shipping for years (e.g. on the server side). Learn more about DSSD D5 here and here, including perspectives on how it could have been used (plays for playbooks).

Meanwhile, NVMe industry activity continues to expand with different solutions from startups such as E8 and Excelero, along with Everspin, Intel, Mellanox, Micron, Samsung and WD SanDisk among others. Also keep in mind, if the answer is NVMe, then what were and are the questions to ask, as well as what are some easy-to-use benchmark scripts (using fio, diskspd, vdbench, iometer).

Speaking of NVMe, flash and SSDs, Amazon Web Services (AWS) has added new Elastic Compute Cloud (EC2) storage and I/O optimized i3 instances. These new instances are available in various configurations with different amounts of vCPU (cores or logical processors), memory and NVMe SSD capacities (and quantities) along with prices.

Note that the price per i3 instance varies not only by its configuration, but also by the image and region it is deployed in. The flash SSD capacities range from entry-level (i3.large) with 2 vCPU (logical processors), 15.25GB of RAM and a single 475GB NVMe SSD, which for example in the US East Region was recently priced at $0.156 per hour. At the high end there is the i3.16xlarge with 64 vCPU (logical processors), 488GB RAM and 8 x 1900GB NVMe SSDs, with a recent US East Region price of $4.992 per hour. Note that vCPU refers to the number of logical processors available, not necessarily cores or sockets.

Also note that your performance will vary, and while the NVMe protocol tends to use less CPU per I/O, generating a large number of I/Os still needs some CPU. What this means is that if you find your performance limited compared to expectations with the lower-end i3 instances, move up to a larger instance and see what happens. If you have a Windows-based environment, you can use a tool such as Diskspd to see what happens with I/O performance as you vary the number of CPUs used.
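Since those prices are quoted per hour, a quick bit of Python arithmetic shows what always-on usage looks like per month. The sketch below uses the example US East prices quoted above; actual AWS pricing changes over time and varies by region, so check current rates.

# Monthly cost estimate for always-on EC2 i3 instances, using the example
# US East hourly prices quoted above; actual AWS prices vary.
HOURS_PER_MONTH = 730  # common approximation (24 x 365 / 12)

i3_examples = {
    "i3.large (2 vCPU, 15.25GB, 1 x 475GB NVMe)":    0.156,
    "i3.16xlarge (64 vCPU, 488GB, 8 x 1900GB NVMe)": 4.992,
}

for instance, hourly in i3_examples.items():
    print(f"{instance}: ${hourly:.3f}/hr -> ~${hourly * HOURS_PER_MONTH:,.0f}/month")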

    Chelsio has announced they are now Microsoft Azure Stack Certified with their iWARP RDMA host adapter solutions, as well as for converged infrastructure (CI), hyper-converged (HCI) and legacy server storage deployments. As part of the announcement, Chelsio is also offering a 30 day no cost trial of their adapters for Microsoft Azure Stack, Windows Server 2016 and Windows 10 client environments. Learn more about the Chelsio trial offer here.

Everspin (the MRAM spin-torque persistent RAM folks) has announced a new Storage Class Memory (SCM) NVMe-accessible family (nvNITRO) of storage accelerator devices (PCIe AiC, U.2). What's interesting about Everspin is that they are using NVMe for accessing their persistent RAM (e.g. MRAM), making it easily plug-compatible with existing operating systems or hypervisors. This means using standard out-of-the-box NVMe drivers, where the Everspin SCM appears as a block device (for compatibility) functioning as a low-latency, high-performance persistent write cache.

Something else interesting, besides making the new memory compatible with existing servers' CPU complexes via PCIe, is how Everspin is demonstrating that NVMe as a general access protocol is not exclusive to NAND flash-based SSDs. What this means is that instead of using non-persistent DRAM, or slower NAND flash (or 3D XPoint SCM), Everspin nvNITRO enables a high-endurance persistent write cache to complement existing NAND flash as well as emerging 3D XPoint based storage. Keep an eye on Everspin as they are doing some interesting things for future discussions.

    Google Cloud Services has added additional regions (cloud locations) and other enhancements.

HPE continued buying into server storage I/O data infrastructure technologies, announcing an all-cash (e.g. no stock) acquisition of Nimble Storage (NMBL). The cash acquisition, at a little over $1B USD, amounts to $12.50 USD per Nimble share, double what it had traded at. As a refresher or overview, Nimble is an all-flash shared storage system vendor leveraging NAND flash solid state device (SSD) performance. Note that Nimble also partners with Cisco and Lenovo, whose platforms compete with HPE servers for converged systems. View additional perspectives here.

Riverbed has announced the release of SteelFusion 5 which, while its name implies physical hardware (metal), is available as tin-wrapped software (e.g. a hardware appliance). However, the solution is also available for deployment as a VMware virtual appliance for remote office/branch office (ROBO) among other scenarios. Enhancements include converged functionality such as NAS support, along with network latency and bandwidth improvements among other features.

    Check out other industry news, comments, trends perspectives here.

    Server StorageIOblog Posts

    Recent and popular Server StorageIOblog posts include:

    View other recent as well as past StorageIOblog posts here

    Server StorageIO Commentary in the news

    Recent Server StorageIO industry trends perspectives commentary in the news.

    Via InfoStor: 8 Big Enterprise SSD Trends to Expect in 2017
Watch for increased capacities at lower cost, greater awareness of the differentiation between high-capacity, low-cost, lower-performing SSDs versus those with improved durability and performance, along with cost and capacity enhancements for active SSDs (read and write optimized). You can also expect increased support for NVMe, both as a back-end storage device with different form factors (e.g., M.2 gum sticks, U.2 8639 drives, PCIe cards) as well as front-end (e.g., storage systems that are NVMe-attached) including local direct-attached and fabric-attached. This means more awareness around NVMe both as front-end and back-end deployment options.

    Via SearchITOperations: Storage performance bottlenecks
    Sometimes it takes more than an aspirin to cure a headache. There may be a bottleneck somewhere else, in hardware, software, storage system architecture or something else.

    Via SearchDNS: Parsing through the software-defined storage hype
    Beyond scalability, SDS technology aims for freedom from the limits of proprietary hardware.

    Via InfoStor: Data Storage Industry Braces for AI and Machine Learning
    AI could also lead to untapped hidden or unknown value in existing data that has no or little perceived value

    Via SearchDataCenter: New options to evolve data backup recovery

    View more Server, Storage and I/O trends and perspectives comments here

    Various Tips, Tools, Technology and Tradecraft Topics

    Recent Data Infrastructure Tradecraft Articles, Tips, Tools, Tricks and related topics.

    Via ComputerWeekly: Time to restore from backup: Do you know where your data is?
    Via IDG/NetworkWorld: Ensure your data infrastructure remains available and resilient
Via IDG/NetworkWorld: What's a data infrastructure?

Check out Scott Lowe (@Scott_Lowe) of VMware fame who, while having a virtual networking focus, has a nice roundup of related data infrastructure topics spanning cloud and open source among others.

    Want to take a break from reading or listening to tech talk, check out some of the fun videos including aerial drone (and some technology topics) at www.storageio.tv.

    View more tips and articles here

    Events and Activities

    Recent and upcoming event activities.

    May 8-10, 2017 – Dell EMCworld – Las Vegas

    April 3-7, 2017 – Seminars – Dutch workshop seminar series – Nijkerk Netherlands

March 15, 2017 – Webinar – SNIA/BrightTalk HyperConverged and Storage – 10AM PT

    January 26 2017 – Seminar – Presenting at Wipro SDx Summit London UK

    See more webinars and activities on the Server StorageIO Events page here.


    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book Software-Defined Data Infrastructure Essentials(CRC Press).

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    Preparing For World Backup Day 2017 Are You Prepared

    Preparing For World Backup Day 2017 Are You Prepared

In case you have forgotten, or were not aware, this coming Friday March 31 is World Backup Day 2017 (and recovery day). The annual day is a reminder to make sure you are protecting your applications, data, information, configuration settings as well as data infrastructures. While the emphasis is on backup, that also means recovery, as well as testing to make sure everything is working properly as part of on-prem and cloud data protection.

    What the Vendors Have To Say

    Today I received the following from Kylle over at TOUCHDOWNPR on behalf of their clients providing their perspectives on what World Backup Day means, or how to be prepared. Keep in mind these are not Server StorageIO clients (granted some have been in the past, or I know them, that is a disclosure btw), and this is in no way an endorsement of what they are saying, or advocating. Instead, this is simply passing along to you what was given to me.

    Not included in this list? No worries, add your perspectives (politely) to the comments, or, drop me a note, and perhaps I will do a follow-up or addition to this.

    Kylle O’Sullivan
    TOUCHDOWNPR
    Email: Kosullivan@touchdownpr.com
    Mobile: 508-826-4482
    Skype: Kylle.OSullivan

    “Data loss and disruption happens far too often in the enterprise. Research by Ponemon in 2016 estimates the average cost of an unplanned outage has spiralled to nearly $9,000 a minute, causing crippling downtime as well as financial and reputational damage. Legacy backups simply aren’t equipped to provide seamless operations, with zero Recovery Point Objectives (RPO) should a disaster strike. In order to guarantee the availability of applications, synchronous replication with real-time analytics needs to be simple to setup, monitor and manage for application owners and economical to the organization. That way, making zero data loss attainable suddenly becomes a reality.” – Chuck Dubuque, VP Product Marketing, Tintri

    “With today’s “always-on” business environment, data loss can destroy a company’s brand and customer trust. A multiple software-based strategy with software-defined and hyperconverged storage infrastructure is the most effective route for a flexible backup plan.  With this tactic, snapshots, replication and stretched clusters can help protect data, whether in a local data center cluster, across data centers or across the cloud. IT teams rely on these software-based policies as the backbone of their disaster recovery implementations as the human element is removed. This is possible as the software-based strategy dictates that all virtual machines are accurately, automatically and consistently replicated to the DR sites. Through this automatic and transparent approach, no administrator action is required, saving employees time, money and providing peace of mind that business can carry on despite any outage.” – Patrick Brennan, Senior Product Marketing Manager, Atlantis Computing

    “It’s only a matter of time before your datacenter experiences a significant outage, if it hasn’t already, due to a wide range of causes, from something as simple as human error or power failure to criminal activity like ransomware and cyberattacks, or even more catastrophic events like hurricanes. Shifting thinking to ‘when’ as opposed to ‘if’ something like this happens is crucial; crucial to building a more flexible and resilient IT infrastructure that can withstand any kind of disruption resulting in negative impact on business performance. World Backup Day reminds us of the importance of both having a backup plan in place and as well as conducting regular reviews of current and new technology to do everything possible to keep business running without interruption. Organizations today are highly aware that they are heavily dependent on data and critical applications, and that losing even just an hour of data can greatly harm revenues and brand reputation, sometimes beyond repair. Savvy businesses are taking an all-inclusive approach to this problem that incorporates cloud-based technologies into their disaster recovery plans. And with consistent testing and automation, they are ensuring that those plans are extremely simple to execute against in even the most challenging of situations, a key element of successfully avoiding damaging downtime.” Rob Strechay, VP Product, Zerto

    “Data is one of the most valuable business assets and when it comes to data protection chief among its IT challenges is the ever-growing rate of data and the associated vulnerability. Backup needs to be reliable, fast and cost efficient. Organizations are on the defensive after a disaster and being able to recover critical data within minutes is crucial. Breakthroughs in disk technologies and pricing have led to very dense arrays that are power, cost and performance efficient. Backup has been revolutionized and organizations need to ensure they are safeguarding their most valuable commodity – not just now but for the long term. Secure archive platforms are complementary and create a complete recovery strategy.”  – Geoff Barrall, COO, Nexsan

    Consider the DR Options that Object Storage Adds
    “Data backup and disaster recovery used to be treated as separate processes, which added complexity. But with object storage as a backup target you now have multiple options to bring backup and DR together in a single flow. You can configure a hybrid cloud and tier a portion of your data to the public cloud, or you can locate object storage nodes at different locations and use replication to provide geographic separation. So, this World Backup Day, consider how object storage has increased your options for meeting this critical need.” – Jon Toor, Cloudian CMO

What's In Your Data Protection Toolbox

What tools and technologies do you have in your data protection toolbox? Do you only have a hammer, and thus the answer to every situation is that it looks like a nail? Or do you have multiple tools and technologies, combined with your various tradecraft experiences, to apply different techniques?

    storageio data protection toolbox

    Where To Learn More

Follow these links to additional related material about backup, restore, availability, data protection, BC, BR, DR along with associated topics, trends, tools, technologies as well as techniques.

    Time to restore from backup: Do you know where your data is?
    February 2017 Server StorageIO Update Newsletter
    Data Infrastructure Server Storage I/O Tradecraft Trends
    Data Infrastructure Server Storage I/O related Tradecraft Overview
    Data Infrastructure Primer and Overview (Its Whats Inside The Data Center)
    What’s a data infrastructure?
    Ensure your data infrastructure remains available and resilient
    Part III Until the focus expands to data protection – Taking action
    Welcome to the Data Protection Diaries
    Backup, Big data, Big Data Protection, CMG & More with Tom Becchetti Podcast
    Six plus data center software defined management dashboards
    Cloud Storage Concerns, Considerations and Trends
    Software Defined, Cloud, Bulk and Object Storage Fundamentals (www.objectstoragecenter.com)

    Data Infrastructure Overview, Its Whats Inside of Data Centers
    All You Need To Know about Remote Office/Branch Office Data Protection Backup (free webinar with registration)
    Software Defined, Converged Infrastructure (CI), Hyper-Converged Infrastructure (HCI) resources
    The SSD Place (SSD, NVM, PM, SCM, Flash, NVMe, 3D XPoint, MRAM and related topics)
    The NVMe Place (NVMe related topics, trends, tools, technologies, tip resources)
    Data Protection Diaries (Archive, Backup/Restore, BC, BR, DR, HA, RAID/EC/LRC, Replication, Security)
    Software Defined Data Infrastructure Essentials (CRC Press 2017) including SDDC, Cloud, Container and more
    Various Data Infrastructure related events, webinars and other activities

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

Backup of data is important; so too is recovery, which also means testing. Testing means more than just whether you can read the tape, disk, SSD, USB, cloud or other medium (or location). Go a step further and verify that not only can you read the data from the medium, but also that your applications or software are able to use it. Have you protected your applications (e.g. not just the data), security keys, encryption, access, dedupe and other certificates along with metadata as well as other settings? Do you have a backup or protection copy of your protection, including recovery tools? What granularity of protection and recovery do you have in place, and when did you last test or try it? In other words, what this all means is be prepared, find and fix issues, and in the course of testing, don't cause a disaster.
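One simple way to go beyond "the backup job succeeded" is to verify that what you restored matches what you protected. The following minimal Python sketch (a starting point, not a substitute for full application-level restore testing; the file paths shown are hypothetical) compares checksums of a source file and its restored copy.

# Minimal restore verification: compare a source file with its restored copy.
# This checks bit-level integrity only; applications should still be tested
# against restored data (databases opened, VMs booted, keys usable, etc.).
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(original_path, restored_path):
    ok = sha256_of(original_path) == sha256_of(restored_path)
    print(f"{restored_path}: {'OK' if ok else 'MISMATCH'}")
    return ok

# Example (hypothetical paths):
# verify_restore("/data/app.db", "/restore-test/app.db")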

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Backup, Big data, Big Data Protection, CMG & More with Tom Becchetti Podcast

    server storage I/O trends

    In this Server StorageIO podcast episode, I am joined by Tom Becchetti (@tbecchetti) for a Friday afternoon conversation recorded live at Meisters in Scandia Minnesota (thanks to the Meisters crew!).

    Tom Becchetti

For those of you who may not know Tom, he has been in the IT, data center, data infrastructure, server and storage (as well as data protection) industry for many years (ok, decades) as a customer and vendor in various roles. Not surprisingly, our data infrastructure discussion involves server, software, storage, big data, backup, data protection, big data protection, CMG (Computer Measurement Group @mspcmg), copy data management, cloud, containers, and fundamental tradecraft skills among other related topics.

    Check out Tom on twitter @tbecchetti and @mspcmg as well as his new website www.storagegodfather.com. Listen to the podcast discussion here (42 minutes) as well as on iTunes.


    Ok, nuff said for now…

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book Software-Defined Data Infrastructure Essentials (CRC Press).

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.


    HPE Continues Buying Into Server Storage I/O Data Infrastructures

    Storage I/O Data Infrastructures trends
    Updated 1/16/2018

HPE expanded its storage I/O data infrastructure portfolio, buying into server storage I/O data infrastructure technologies with the announcement of an all-cash (e.g. no stock) acquisition of Nimble Storage (NMBL). The cash acquisition, at a little over $1B USD, amounts to $12.50 USD per Nimble share, double what it had traded at. As a refresher or overview, Nimble is an all-flash shared storage system vendor leveraging NAND flash solid state device (SSD) performance. Note that Nimble also partners with Cisco and Lenovo, whose platforms compete with HPE servers for converged systems.

Earlier this year (keep in mind it's only mid-March) HPE also announced the acquisition of server storage hyper-converged infrastructure (HCI) vendor Simplivity (about $650M USD cash). In another investment this year, HPE joined other investors as part of scale-out and software-defined storage startup Hedvig's latest funding round (more on that later). These acquisitions are in addition to smaller ones such as last year's acquisition of SGI, not to mention various divestitures.

    Data Infrastructures

    What Are Server Storage I/O Data Infrastructures Resources

Data infrastructures exist to support business, cloud and information technology (IT) among other applications that transform data into information or services. The fundamental role of data infrastructures is to provide a platform environment for applications and data that is resilient, flexible, scalable, agile, efficient as well as cost-effective.

    Technologies that make up data infrastructures include hardware, software, cloud or managed services, servers, storage, I/O and networking along with people, processes, policies along with various tools spanning legacy, software-defined virtual, containers and cloud.

    HPE and Server Storage Acquisitions

HPE and its predecessor HP (e.g. before the split that resulted in HPE) are familiar with expanding their data infrastructure portfolio spanning servers, storage, I/O networking, hardware, software and services. Examples range from Compaq (who had acquired DEC), which gave them the StorageWorks brand and product lineup (e.g. recall EVA and its predecessors), to Lefthand, 3PAR, IBRIX, Polyserve, Autonomy and EDS among others that I'm guessing some at HPE (along with customers and partners) might not want to remember.

In addition to their own in-house development, including via technology acquisition, HPE also partners for its entry-level, high-volume MSA (Modular Storage Array) series with DotHill, which was acquired by Seagate a year or so ago. Beyond the MSA, other HPE OEM arrangements for storage include Hitachi Ltd. (e.g. parent of Hitachi Data Systems aka HDS), whose high-end enterprise-class storage system HPE resells as the XP7, as well as various other partner arrangements.

Keep in mind that HPE has a large server business from low to high-end, spanning towers to dense blades to dual, quad and cluster-in-box (CiB) configurations with various processor architectures. Some of these servers are used as platforms not only for HPE, but also for other vendors' software-defined storage, as well as tin-wrapped software solutions, appliances and systems. HPE is also one of a handful of partners working with Microsoft to bring the software-defined private (and hybrid) Azure Stack cloud as an appliance to market.

    HPE acquisitions Dejavu or Something New?

For some people there may be a sense of deja vu given what HPE and its predecessors have previously acquired, developed, sold and supported in the market over years (and decades in some cases). What will be interesting to see is how the 3PAR (StoreServ) and Lefthand-based (StoreVirtual) as well as ConvergedSystem 250-HC product lines are realigned to make way for Nimble and Simplivity.

Likewise, what will HPE do with the MSA at the low-end: continue to leverage it for low-end, high-volume basic storage, similar to Dell with the NetApp (Engenio) powered MD series? Or will HPE try to move Nimble down market and displace the MSA? What about the mid-market: will Nimble be unleashed to replace StoreVirtual (e.g. Lefthand), or will they fence it in (e.g. restrict it to certain scenarios)?
Will the Nimble solution be allowed to move up market into the low-end of where 3PAR has been positioned, perhaps even higher given its all-flash capabilities? Or will there be a 3PAR-everywhere approach?

Then there is Simplivity, whose solution is effectively software running on an HPE server (or, with other partners, Cisco and Lenovo servers) along with a PCIe offload card (providing Simplivity data services acceleration). Note that Simplivity leverages PCIe offload cards for some of their functionality; this too is familiar ground for HPE given 3PAR's use of ASICs.

Simplivity has the potential to disrupt some low to mid-range, perhaps even larger opportunities that are looking to go to a converged infrastructure (CI) or HCI deployment as part of their data infrastructure needs. One can speculate that Simplivity, after repackaging, will be positioned alongside current HPE CI and HCI solutions.

This will be interesting to watch, to see if the HPE server and storage groups can converge not only from a technology point of view, but also from sales, marketing, service, and support perspectives. With the Simplivity solution, HPE has an opportunity to move industry thinking away from the perception that HCI is only for small environments defined by what some products can do.

What I mean by this is that HPE, with its enterprise, SMB, SME and cloud managed service provider experience as well as its servers, can bring hyper-scale-out (and up) converged solutions to the market. In other words, it can start addressing the concern I hear from larger organizations that most CI or HCI solutions (or packaging) are just for smaller environments. HPE has the servers; it has the storage from MSAs to other modules and core data infrastructure building blocks, along with the robustness of the Simplivity software, to enable hyper-scale-out CI.

    What about bulk, object, scale-out storage

    HPE has a robust tape business. Yes, I know tape is dead; however, tell that to the customers who keep buying those products, providing revenue along with margin to HPE (and others). Likewise HPE has VTLs as well as other solutions, such as StoreOnce, for addressing bulk data (e.g. big data, backups, protection copies, archives, and other high-volume, large-quantity data that goes on tape or object storage).

    However, where is the HPE object storage story?

    On the one hand, does HPE develop its own object storage software, or simply partner with others? HPE can continue to provide servers along with underlying storage for other vendors' bulk, cloud and object storage systems, and where needed, meet in the channel among other arrangements.

    On the other hand, this is where, similar to how PolyServe and IBRIX among others came into play in the past, Hedvig enters the picture: HPE, via its Pathfinder investment group, has joined others in putting some money into Hedvig. HPE gets access to Hedvig's scale-out storage software, which can be used for bulk as well as other deployments including CI, HCI and CiB (e.g. something to sell HPE servers and storage with).

    HPE can also continue to partner with other software providers and software-defined storage stacks. Keep in mind that Milan Shetti (CTO, Data Center Infrastructure Group, HPE) is no stranger to these waters, given his past at IBRIX among others.

    What About Hedvig

    Time to get back to Hedvig, a storage startup whose software can run on various server storage platforms and in different topologies, including CI or HCI, cloud, and scale-out, with various access methods including block, file and object. In addition to block, file and object access, Hedvig has interesting management tools and data services, along with support for VMware, Docker and OpenStack among others.
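    To make the multi-protocol point concrete, here is a minimal, hypothetical Python sketch of what talking to a scale-out storage cluster through an S3-compatible object interface looks like. The endpoint URL, credentials and bucket name below are made-up placeholders for illustration, not actual Hedvig specifics; the real endpoint details would come from the vendor's documentation and your cluster configuration.

        # Minimal sketch: write and read an object via an S3-compatible
        # endpoint, of the kind scale-out storage clusters commonly expose.
        # Endpoint, credentials and bucket are hypothetical placeholders.
        import boto3

        s3 = boto3.client(
            "s3",
            endpoint_url="http://storage-cluster.example.com:9000",  # hypothetical endpoint
            aws_access_key_id="DEMOACCESSKEY",      # placeholder credential
            aws_secret_access_key="DEMOSECRETKEY",  # placeholder credential
        )

        s3.create_bucket(Bucket="demo-bucket")
        s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello object storage")
        print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())

    The takeaway is that object access is programmatic and bucket/key based rather than LUN or file-share based, which is part of why it fits bulk, cloud and scale-out deployments.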

    Recently Hedvig landed another $21.5M USD in funding, bringing its total to about $52M USD. HPE, via its investment arm, joins other investors including Vertex, Atlantic Bridge, Redpoint, EDBI and True Ventures (note that HPE was part of the $21.5M round; that was not the amount HPE itself invested).

    What does this mean for HPE and Hedvig among others? Tough to say; however, it is easy to imagine Hedvig being leveraged as a partner running on HPE servers, as well as giving HPE an addition to its bulk, scale-out, cloud and object storage portfolio.

    Where to Learn More

    View more material on HPE, data infrastructure and related topics with the following links.

  • Cloud and Object storage are in your future, what are some questions?
  • PCIe Server Storage I/O Network Fundamentals
  • If NVMe is the answer, what are the questions?
  • Fixing the Microsoft Windows 10 1709 post upgrade restart loop
  • Data Infrastructure server storage I/O network Recommended Reading
  • Introducing Windows Subsystem for Linux WSL Overview
  • IT transformation Serverless Life Beyond DevOps with New York Times CTO Nick Rockwell Podcast
  • HPE Announces AMD Powered Gen 10 ProLiant DL385 For Software Defined Workloads
  • AWS Announces New S3 Cloud Storage Security Encryption Features
  • NVM Non Volatile Memory Express NVMe Place
  • Data Infrastructure Primer and Overview (Its Whats Inside The Data Center)
  • January 2017 Server StorageIO Update Newsletter
  • September and October 2016 Server StorageIO Update Newsletter
  • HP Buys one of the seven networking dwarfs and gets a bargain
  • Did HP respond to EMC and Cisco VCE with Microsoft Hyper-V bundle?
  • Give HP storage some love and short strokin
  • While HP and Dell make counter bids, exclusive interview with 3PAR CEO David Scott
  • Data Protection Fundamental Topics Tools Techniques Technologies Tips
  • Hewlett-Packard beats Dell, pays $2.35 billion for 3PAR
  • HP Moonshot 1500 software defined capable compute servers
  • What Does Converged (CI) and Hyper converged (HCI) Mean to Storage I/O?
  • What’s a data infrastructure?
  • Ensure your data infrastructure remains available and resilient
  • Object Storage Center, The SSD place and The NVMe place
  • Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.


    What this all means

    Generally speaking, I think this is a good series of moves for HPE (and its customers), as long as they can execute in all dimensions.

    Let's see how they execute, and by this I mean more than simply executing (or terminating) staff from recent or earlier acquisitions. How will HPE craft a go-to-market message that leverages the portfolio to compete and hold or take share from other vendors, vs. cannibalizing across its own lines (e.g. revenue prevention)? With that strategy and message, how will HPE assure existing customers that they will be taken care of and given a definite upgrade and migration path, vs. giving them a reason to go elsewhere?

    Hopefully HPE unleashes the full potential of SimpliVity and Nimble, along with 3PAR and the XP7 where needed, as well as the MSA at the low-end (or as part of volume scale-out with servers for software-defined deployments), not to mention its server portfolio. For now, this tells me that HPE is still interested in maintaining and expanding its data infrastructure business vs. simply retrenching and selling off assets. Thus it looks like HPE intends to continue investing in data infrastructure technologies, including buying into server, storage I/O networking, hardware and software solutions, rather than simply clinging to what it already has or previously bought.

    Everything is not the same in data centers and across data infrastructures, so why have a one-size-fits-all approach for an organization as large and diverse as HPE?

    Congratulations and best wishes to the folks at Hedvig, Nimble and SimpliVity.

    Now, let's see how this all plays out.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com. Any reproduction in whole, in part, with changes to content, without source attribution under title, or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.