Microsoft Azure September 2017 Software Defined Data Infrastructure Updates

server storage I/O data infrastructure trends

Microsoft and Azure September 2017 Software Defined Data Infrastructure Updates

September was a busy month for data infrastructure topics as well as for Microsoft in terms of new and enhanced technologies. Wrapping up September was Microsoft Ignite, where announcements spanning Azure, Azure Stack, Windows, O365, AI, IoT and development tools occurred, along with others from earlier in the month. As part of the September announcements, Microsoft released a new version of Windows Server (e.g. 1709) with a focus on enhanced container support. Note that if you have deployed Storage Spaces Direct (S2D) and are looking to upgrade to 1709, do your homework, as there are some caveats that may cause you to wait for the next release. Also note that new storage related enhancements had been slated for the September update; however, it was announced at Ignite that those are being pushed to the next semi-annual release. Learn more here and also here.

Azure Files and NFS

Microsoft made several Azure file storage related announcements and public previews during September, including native NFS based file sharing as a companion to existing Azure Files, along with the public preview of the new Azure File Sync service. Native NFS based file sharing (public preview announced; the service is slated to be available in 2018) is a software defined storage deployment of NetApp ONTAP running on top of Azure data infrastructure, including virtual machines, that leverages Azure underlying storage.

Note that the new native NFS is in addition to the earlier native Azure Files accessed via HTTP REST and SMB3, enabling sharing of files inside the Azure public cloud, as well as being accessible externally from Windows-based and Linux platforms, including on-premises. Learn more about Azure Storage and Azure Files here.
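To illustrate, here is a minimal sketch of mounting an Azure Files share via SMB 3.0 from a Windows system; the storage account, share name, and key shown are placeholders, and outbound TCP port 445 must be allowed.

$acct = "mystorageacct"              # hypothetical storage account name
$key  = "<storage-account-key>"      # storage account access key
# Persist the credential, then map the share over SMB 3.0
cmdkey /add:"$acct.file.core.windows.net" /user:"AZURE\$acct" /pass:$key
net use Z: "\\$acct.file.core.windows.net\myshare"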

Azure File Sync (AFS)

Azure File Sync AFS

Azure File Sync (AFS) has now entered public preview. While users of Windows-based systems have been able to access and share Azure Files in the past, AFS is something different. I have used AFS for some time now through several private preview iterations, having seen how it has evolved, along with how Microsoft listens and incorporates feedback into the solution.

Let's take a look at what AFS is, what it does, how it works, and where and when to use it, among other considerations. With AFS, different and independent systems can now synchronize file shares through Azure. Currently in the AFS preview, Windows Server 2012 and 2016 are supported, including bare metal, virtual, and cloud based. For example, I have had bare metal, virtual (VMware), and cloud (Azure and AWS) systems participating in file sync activities using AFS.

Not to be confused with other storage related AFS acronyms, including the Andrew File System among others, the new Microsoft Azure File Sync service enables files to be synchronized across different servers via Azure. This is different from the previously available Azure File Share service that enables files stored in Azure cloud storage to be accessed via Windows and Linux systems within Azure, as well as natively by Windows platforms outside of Azure. Likewise, this is different from the recently announced Microsoft Azure native NFS file sharing service in partnership with NetApp (e.g. powered by ONTAP cloud).

AFS can be used to synchronize across different on-premises as well as cloud servers that can also function as a cache. What this means is that for Windows work folders served via different on-premises servers, those files can be synchronized across Azure to other locations. Besides providing cache, cloud tiering and enterprise file sync share (EFSS) capabilities, AFS also has robust optimization for data movement to and from the cloud and across sites, along with management tools. Management tools include diagnostics, performance and activity monitoring among others.

Check out the AFS preview including planning for an Azure File Sync (preview) deployment (Docs Microsoft), and for those who have Yammer accounts, here is the AFS preview group link.

Microsoft Azure Blob Events via Microsoft

Azure Blob Storage Tiering and Event Triggers

Two other Azure storage features in public preview are blob tiering (for cold archiving) and blob event triggers. As its name implies, blob tiering enables automatic migration of dormant data from active to cold, inactive storage. Event triggers are policy rules (code) that get executed when a blob is stored, to perform various functions or tasks. Here is an overview of blob events and a quick start from Microsoft here.
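As a quick illustration, here is a hedged sketch of setting a block blob to the Archive tier with the Azure PowerShell storage cmdlets; the account, container, and blob names are placeholders, and a storage client library recent enough to expose SetStandardBlobTier is assumed.

# Build a storage context, then move a dormant block blob to the Archive tier
$ctx  = New-AzureStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<key>"
$blob = Get-AzureStorageBlob -Container "backups" -Blob "dormant.bak" -Context $ctx
$blob.ICloudBlob.SetStandardBlobTier("Archive")   # tiering applies to block blobs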

Keep in mind that not all blob and object storage are the same; a good example is Microsoft Azure, which has page, block and append blobs. Append blobs are similar to the objects you may be familiar with from other services. Here is a Microsoft overview of the various Azure blobs, including what to use when.

Project Honolulu and Windows Server Enhancements

Microsoft has evolved from the command prompt (e.g. early MS-DOS) to GUI with Windows, to command line extending into PowerShell, which left some thinking there is no longer a need for a GUI. Even though Microsoft has extended its CLI with PowerShell spanning Windows platforms and Azure, along with adding a Linux command shell, there are those who still want or need a GUI. Project Honolulu is the effort to bring GUI based management back to Windows in a simplified way for what had been headless, desktop-less deployments (e.g. Nano, Server Core). Microsoft had Server Management Tools (SMT) accessible via the Azure Portal, which has been discontinued.


Project Honolulu Image via Microsoft.com

This is where Project Honolulu comes into play for managing Windows Server platforms. What this means is that those who don't want to rely on PowerShell, or have a dependency on it, have an alternative option. Learn more about Project Honolulu here and here, including downloading the public preview here.

Storage Spaces Direct (S2D) Kepler Appliance

Data infrastructure provider DataOn has announced a new turnkey Windows Server 2016 Storage Spaces Direct (S2D) powered hyper-converged infrastructure solution (e.g. a productization of project Kepler-47) with two node small form factor servers (in partnership with MSI). How small? Think suitcase or airplane carry-on roller bag size.

What this means is that you can get into the converged, hyper-converged software defined storage game with Windows-based servers supporting Hyper-V virtual machines (Windows and Linux), including hardware, for around $10,000 USD (varies by configuration and other options).
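For a sense of what is involved under the covers, here is a minimal hedged sketch of standing up a two-node S2D cluster with PowerShell; node, cluster, and volume names are hypothetical, and a quorum witness (file share or cloud) still needs to be configured separately for two-node clusters.

# Validate, create the cluster without shared storage, then enable S2D
Test-Cluster -Node "Node1","Node2" -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name "S2D-Kepler" -Node "Node1","Node2" -NoStorage
Enable-ClusterStorageSpacesDirect
# Carve a resilient CSV volume out of the auto-created S2D pool
New-Volume -FriendlyName "VMStore" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName "S2D*" -Size 1TB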

Azure and Microsoft Networking News

Speaking of the Microsoft Azure public cloud, if you ever wonder what the network that enables the service looks like, and what some of its software defined networking (SDN) along with network function virtualization (NFV) objectives are, have a look at this piece over at Data Center Knowledge.

In related Windows, Azure and other focus areas, Microsoft, Facebook and Telxius have completed the installation of a high-capacity subsea cable (network) crossing the Atlantic Ocean. What's so interesting from a data infrastructure, cloud or legacy server storage I/O and data center focus perspective? The new network was built by the companies themselves vs. in the past by a telco provider consortium with the subsequent bandwidth sold or leased to others.

This new network is also 4,000 miles long, reaching depths of 11,000 feet, and with current optics supports 160 terabits (e.g. 20 terabytes) per second, capable of supporting 71 million HD videos streamed simultaneously. To put things into perspective, some residential fiber optic services can operate best case at up to 1 gigabit per second (line speed), and in an asymmetrical fashion (faster downloads than uploads). Granted, there are some 10 Gbit based services out there, more common with commercial than residential. Simply put, there is a large amount of added bandwidth across the Atlantic for Microsoft and Facebook to support growing demands.

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

What This All Means

Microsoft announced a new release of Windows Server at Ignite as part of its new semi-annual release cycle. This latest version of Windows Server is optimized for containers. In addition to Windows Server enhancements, Microsoft continues to extend Azure and related technologies for public, private and hybrid cloud as well as software defined data infrastructures.

By the way, if you have not heard, it's Blogtober; check out some of the other blogs and posts occurring during October here.

Ok, nuff said, for now.
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Azure Stack Technical Preview 3 (TP3) Overview Preview Review

server storage I/O trends

Perhaps you are aware or use Microsoft Azure, how about Azure Stack?

This is part one of a two-part series looking at Microsoft Azure Stack providing an overview, preview and review. Read part two here that looks at my experiences installing Microsoft Azure Stack Technical Preview 3 (TP3).

For those who are not aware, Azure Stack is a private on-premises extension of the Azure public cloud environment. Azure Stack is now in technical preview three (e.g. TP3), or what you might also refer to as a beta (get the bits here).

In addition to being available via download as a preview, Microsoft is also working with vendors such as Cisco, Dell EMC, HPE, Lenovo and others who have announced Azure Stack support. Vendors such as Dell EMC have also made proof of concept kits available that you can buy, including server with storage and software. Microsoft has also indicated that once production versions launch, scaling from a few to many nodes, a single node proof of concept or development system will also remain available.

software defined data infrastructure SDDI and SDDC
Software-Defined Data Infrastructures (SDDI) aka Software-defined Data Centers, Cloud, Virtual and Legacy

Besides being an on-premises, private cloud variant, Azure Stack is also hybrid capable, being able to work with the Azure public cloud. In addition to working with public Azure, Azure Stack services, and in particular workloads, can also work with traditional Microsoft, Linux and other environments. You can use pre-built solutions from the Azure Marketplace, in addition to developing your applications using Azure services and DevOps tools. Azure Stack enables hybrid deployment into public or private cloud to balance flexibility, control and your needs.

Azure Stack Overview

Microsoft Azure Stack is an on-premises (e.g. in your own data center), private (or hybrid when connected to Azure) cloud platform. Currently Azure Stack is in Technical Preview 3 (e.g. TP3) and available as a proof of concept (POC) download from Microsoft. You can use Azure Stack TP3 as a POC for learning, demonstrating and trying features, among other activities. Here is a link to a Microsoft video providing an overview of Azure Stack, and here is a good summary of roadmap, licensing and related items.

In summary, Microsoft Azure Stack is:

  • An onsite, on-premises, in your data center extension of Microsoft Azure public cloud
  • Enabling private and hybrid cloud with strong integration along with common experiences with Azure
  • Adopt, deploy, leverage cloud on your terms and timeline choosing what works best for you
  • Common processes, tools, interfaces, management and user experiences
  • Leverage speed of deployment and configuration with a purpose-built integrated solution
  • Support existing and cloud native Windows, Linux, Container and other services
  • Available as a public preview via software download, as well as vendors offering solutions

What is Azure Stack Technical Preview 3 (TP3)

This version of Azure Stack is a single node running on a lone physical machine (PM) aka bare metal (BM). However, it can also be installed into a virtual machine (VM) using nesting. For example, I have Azure Stack TP3 running nested on a VMware vSphere ESXi 6.5 system with a Windows Server 2016 VM as its base operating system.
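For those curious about the vSphere side of that, a hedged PowerCLI sketch follows; it assumes VMware PowerCLI is installed, the host and VM names are placeholders, and it is the scripted equivalent of checking "Expose hardware assisted virtualization to the guest OS" in the vSphere client.

# Expose hardware virtualization extensions to the Azure Stack guest VM
Connect-VIServer -Server "esxi-host.local"
New-AdvancedSetting -Entity (Get-VM -Name "AzureStackTP3") -Name "vhv.enable" -Value "TRUE" -Confirm:$false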

Microsoft Azure Stack architecture
Click here or on the above image to view a list of VMs and other services (Image via Microsoft.com)

The TP3 POC Azure Stack is not intended for production environments, only for testing, evaluation, learning and demonstrations as part of its terms of use. This version of Azure Stack is a single node deployment, with identity provided either by Azure Active Directory (AAD) integrated with Azure, or by Active Directory Federation Services (ADFS) for standalone mode. Note that since this is a single server deployment, it is not intended for performance, rather for evaluating functionality, features, APIs and other activities. Learn more about Azure Stack TP3 details here (or click on the image), including the names of the various virtual machines (VMs) as well as their roles.

Where to learn more

The following provide more information and insight about Azure, Azure Stack, Microsoft and Windows among related topics.

  • Azure Stack Technical Preview 3 (TP3) Overview Preview Review
  • Azure Stack TP3 Overview Preview Review Part II
  • Azure Stack Technical Preview (get the bits aka software download here)
  • Azure Stack deployment prerequisites (Microsoft)
  • Microsoft Azure Stack troubleshooting (Microsoft Docs)
  • Azure Stack TP3 refresh tips (Azure Stack)
  • Here is a good post with a tip about not applying certain Windows updates to Azure Stack TP3 installs.
  • Configure Azure Stack TP3 to be available on your own network (Azure Stack)
  • Azure Stack TP3 Marketplace syndication (Azure Stack)
  • Azure Stack TP3 deployment experiences (Azure Stack)
  • Frequently asked questions for Azure Stack (Microsoft)
  • Deploy Azure Stack (Microsoft)
  • Connect to Azure Stack (Microsoft)
  • Azure Active Directory (AAD) and Active Directory Federation Services (ADFS)
  • Azure Stack TP2 deployment experiences by Niklas Akerlund (@vNiklas) useful for tips for TP3
  • Deployment Checker for Azure Stack Technical Preview (Microsoft Technet)
  • Azure Stack and other tools (Github)
  • How to enable nested virtualization on Hyper-V Windows Server 2016
  • Dell EMC announce Microsoft Hybrid Cloud Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack Data Sheet (Dell EMC PDF)
  • Dell EMC Cloud Chats (Dell EMC Blog)
  • Microsoft Azure Stack forum
  • Dell EMC Microsoft Azure Stack solution
  • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016
  • Overview Review of Microsoft ReFS (Reliable File System) and resource links
  • Via WServerNews.com Cloud (Microsoft Azure) storage considerations
  • Via CloudComputingAdmin.com Cloud Storage Decision Making: Using Microsoft Azure for cloud storage
  • www.thenvmeplace.com, www.thessdplace.com, www.objectstoragecenter.com and www.storageio.com/converge
What this all means

A common question is whether there is demand for private and hybrid cloud. In fact, some industry expert pundits have even said private or hybrid cloud is dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

Given the large number of Microsoft Windows-based servers on VMware, OpenStack and public cloud services as well as other platforms, along with the continued growing popularity of Azure, having a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and whether it is only for Windows guest operating systems. At this point, indeed, Windows would be an attractive and comfortable option; however, given the large number of Linux-based guests running on Hyper-V as well as Azure public cloud, those are also primary candidates, as are containers and other services.

    Continue reading more in part two of this two-part series here including installing Microsoft Azure Stack TP3.

    Ok, nuff said (for now…).

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    Azure Stack TP3 Overview Preview Review Part II

    server storage I/O trends

    Azure Stack TP3 Overview Preview (Part II) Install Review

    This is part two of a two-part series looking at Microsoft Azure Stack with a focus on my experiences installing Microsoft Azure Stack Technical Preview 3 (TP3) including into a nested VMware vSphere ESXi environment. Read part one here that provides a general overview of Azure Stack.

    Azure Stack Review and Install

Being familiar with the Microsoft Azure public cloud, having used it for a few years now, I wanted to gain some closer insight and experience, and expand my trade craft on Azure Stack, by installing TP3. This is similar to what I have done in the past with OpenStack, Hadoop, Ceph, VMware, Hyper-V and many others, some of which I need to get around to writing about sometime. As a refresher from part one of this series, the following is an image via Microsoft showing the Azure Stack TP3 architecture; click here or on the image to learn more, including the names and functions of the various virtual machines (VMs) that make up Azure Stack.

    Microsoft Azure Stack architecture
Click here or on the above image to view a list of VMs and other services (Image via Microsoft.com)

What's Involved in Installing Azure Stack TP3?

    The basic steps are as follows:

• Read this Azure Stack blog post (Azure Stack)
• Download the bits (e.g. the Azure Stack software) from here, where you access the Azure Stack Downloader tool.
• Plan your deployment, making decisions on Active Directory and other items.
• Prepare the target server (physical machine aka PM, or virtual machine VM) that will be the Azure Stack destination.
• Copy the Azure Stack software and installer to the target server and run the pre-install scripts.
• Modify the PowerShell script file if using a VM instead of a PM.
• Run the Azure Stack CloudBuilder setup, configuring unattend.xml if needed or answering prompts.
• Server reboots; select Azure Stack from the two boot options.
• Prepare your Azure Stack base system (time, network NICs in static or DHCP mode; if running on VMware, install VMware Tools).
• Determine if you will be running with Azure Active Directory (AAD) or standalone Active Directory Federation Services (ADFS).
• Update any applicable installation scripts (see notes that follow).
• Run the deployment script, then extend the Azure Stack TP3 PoC as needed.

Note that this is a large download of about 16GB (23GB with the optional Windows Server 2016 demo ISO).

Use the Azure Stack Downloader tool to download the bits (about 16GB, or 23GB with the optional Windows Server 2016 base image), which will either be several separate files that you stitch back together with the MicrosoftAzureStackPOC tool, or a large VHDX file and a smaller 6.8GB ISO (Windows Server 2016). Prepare your target server system for installation once you have all the software pieces downloaded (or do the preparations while waiting for the download).

Once you have the software downloaded, if it is a series of eight .bin files (seven about 2GB each, one around 1.5GB), it is a good idea to verify their checksums, then stitch them together on your target system, or on a staging storage device or file share. Note that for the actual deployment first phase, the large resulting CloudBuilder.vhdx file will need to reside in the C:\ root location of the server where you are installing Azure Stack.
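As a sanity check before stitching, something along these lines computes the hashes for comparison against the published checksums; the staging path and file pattern are placeholders.

# Hash each downloaded .bin piece for comparison with the published checksums
Get-ChildItem "D:\Staging\AzureStackPOC*.bin" |
    Get-FileHash -Algorithm SHA256 |
    Format-Table Hash, Path -AutoSize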

    server storageio nested azure stack tp3 vmware

    Azure Stack deployment prerequisites (Microsoft) include:

    • At least 12 cores (or more), dual socket processor if possible
    • As much DRAM as possible (I used 100GB)
    • Put the operating system disk on flash SSD (SAS, SATA, NVMe) if possible, allocate at least 200GB (more is better)
    • Four x 140GB or larger (I went with 250GB) drives (HDD or SSD) for data deployment drives
    • A single NIC or adapter (I put mine into static instead of DHCP mode)
    • Verify your physical or virtual server BIOS has VT enabled

The above image helps to set the stage for what is being done. On the left is a bare metal (BM) or physical machine (PM) install of Azure Stack TP3; on the right, a nested VMware (vSphere ESXi 6.5) virtual machine (VM) hardware version 11 approach. Note that you could also do Hyper-V nesting among other approaches. Shown in the image above, common to both a BM or VM deployment is a staging area (which could be space on your system drive) where the Azure Stack download occurs. If you use a separate staging area, then simply copy the individual .bin files and stitch them together into the larger .VHDX, or copy the larger .VHDX; which is better depends on your preferences.

Note that if you use the nested approach, there are a couple of configuration (PowerShell) scripts that need to be updated. These changes trick the installer into thinking that it is on a PM when it checks to see if it is in a physical or virtual environment.

Also note that if using the nested approach, make sure you have your VMware vSphere ESXi host along with the specific VM properly configured (e.g. that virtualization and other features are presented to the VM). With vSphere ESXi 6.5 and virtual machine hardware version 11, nesting is night and day easier vs. earlier generations.
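If you go the Hyper-V nesting route instead, a minimal hedged sketch of the commonly documented host-side settings looks like the following; the VM name is a placeholder.

# Expose virtualization extensions to the guest, disable dynamic memory, allow MAC spoofing
Set-VMProcessor -VMName "AzureStackTP3" -ExposeVirtualizationExtensions $true
Set-VMMemory -VMName "AzureStackTP3" -DynamicMemoryEnabled $false
Get-VMNetworkAdapter -VMName "AzureStackTP3" | Set-VMNetworkAdapter -MacAddressSpoofing On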

Something else to explain here is that you will initially start the Azure Stack install preparation using a standard Windows Server (I used a 2016 version) where the .VHDX is copied into its C:\ root. From there you will execute some PowerShell scripts to set up some configuration files, one of which needs to be modified for nesting.

Once those prep steps are done, a CloudBuilder deploy script gets run, which can be done with an unattend.xml file or manual input. This step will cause a dual-boot option to be added to your server where you can select Azure Stack or your base prep Windows Server instance, followed by a reboot.

After the reboot occurs and you choose to boot into Azure Stack, this is the server instance that will actually run the deployment script, as well as build and launch all the VMs for the Azure Stack TP3 PoC. This is where I recommend having a rough sketch like the one above to annotate layers as you go, to remember what layer you are working at. Don't worry, it becomes much easier once all is said and done.

Speaking of preparing your server, refer to the Microsoft specs; however, in general give the server as much RAM and as many cores as possible. Also if possible place the system disk on a flash SSD (SAS, SATA, NVMe) and make sure that it has at least 200GB; however 250 or even 300GB is better (just in case you need more space).

Additional configuration tips include allocating four data disks for Azure Stack; if possible make these SSDs as well. However, it is more important IMHO to have at least the system disk on fast flash SSD. Another tip is to enable only one network card or NIC and put it into static vs. DHCP address mode to make things easier later (see the sketch below).
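A quick sketch of setting that single adapter static from PowerShell follows; the interface alias, addresses, and DNS server are placeholders for your environment (and remember to stay off the subnets listed in the tip further below).

# Assign a static address and DNS server to the lone NIC
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.1.50 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 192.168.1.2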

Tip: If running nested, vSphere 6.5 worked the smoothest, as I had various issues or inconsistencies with earlier VMware versions, even with VMs that otherwise ran nested just fine.

Tip: Why run nested? Simple, I wanted to be able to use VMware tools and do snapshots to go back in time, plus share the server with some other activities until ready to give Azure Stack TP3 its own PM.

    Tip: Do not connect the POC machine to the following subnets (192.168.200.0/24, 192.168.100.0/27, 192.168.101.0/26, 192.168.102.0/24, 192.168.103.0/25, 192.168.104.0/25) as Azure Stack TP3 uses those.

    storageio azure stack tp3 vmware configuration

Since I decided to deploy using a nested VM on VMware, there were a few extra steps needed, which I have included as tips and notes. Following is a view via the vSphere client of the ESXi host and VM configuration.

    The following image combines a couple of different things including:

    A: Showing the contents of C:\Azurestack_Supportfiles directory

    B: Modifying the PrepareBootFromVHD.ps1 file if deploying on virtual machine (See tips and notes)

    C: Showing contents of staging area including individual .bin files along with large CloudBuilder.vhdx

    D: Running the PowerShell script commands to prepare the PrepareBootFromVHD.ps1 and related items

preparing azure stack tp3 cloudbuilder for nested vmware deployment

    From PowerShell (administrator):

# Variables
$Uri = 'https://raw.githubusercontent.com/Azure/AzureStack-Tools/master/Deployment/'
$LocalPath = 'c:\AzureStack_SupportFiles'

# Create the local folder for the support files
New-Item $LocalPath -Type Directory

# Download each support file from the repository into the local folder
('BootMenuNoKVM.ps1', 'PrepareBootFromVHD.ps1', 'Unattend.xml', 'unattend_NoKVM.xml') |
    ForEach-Object { Invoke-WebRequest ($Uri + $_) -OutFile ($LocalPath + '\' + $_) }

After you do the above, decide if you will be using an Unattend.xml or manual entry of items for building the Azure Stack deployment server (e.g. a Windows Server). Note that the above PowerShell script creates the C:\AzureStack_SupportFiles folder and downloads the script files for building the cloud image using the previously downloaded Azure Stack CloudBuilder.vhdx (which should be in C:\).

A note and tip: if you are doing a VMware or virtual machine based deployment of the TP3 PoC, you will need to change the PrepareBootFromVHD.ps1 script in the Azure Stack support files folder. Here is a good resource on what gets changed via GitHub, showing an edit on or about line 87 of PrepareBootFromVHD.ps1. If you run the PrepareBootFromVHD.ps1 script on a virtual machine you will get an error message; the fix is relatively easy (after I found this post).

    Look in PrepareBootFromVHD.ps1 for something like the following around line 87:

if ((Get-Disk | Where-Object {$_.IsBoot -eq $true}).Model -match 'Virtual Disk') {
    Write-Host "The server is currently already booted from a virtual hard disk, to boot the server from the CloudBuilder.vhdx you will need to run this script on an Operating System that is installed on the physical disk of this server."
    Exit
}

You can either remove the "Exit" command, or change the test for "Virtual Disk" to something like "X"; for fun I did both (and it worked).
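For illustration, here is a hedged sketch of what the neutered check might look like after the edit (either change on its own is sufficient):

# Match a model string that will never occur, so the virtual disk check never fires
if ((Get-Disk | Where-Object {$_.IsBoot -eq $true}).Model -match 'X') {
    Write-Host "The server is currently already booted from a virtual hard disk..."
    # Exit   # alternatively (or additionally) remove or comment out the Exit
}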

Note that you only have to make the above change (and another change in a later step) if you are deploying Azure Stack TP3 as a virtual machine.

Once you are ready, go ahead and launch the PrepareBootFromVHD.ps1 script, which will set the BCDBoot entry (more info here).
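For reference, the invocation looks something like the following; the parameter names are as I recall from the TP3-era documentation, so verify them against the script itself before running.

# Point the prep script at the CloudBuilder.vhdx sitting in C:\
cd C:\AzureStack_SupportFiles
.\PrepareBootFromVHD.ps1 -CloudBuilderDiskPath C:\CloudBuilder.vhdx -ApplyUnattend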

    azure stack tp3 cloudbuilder nested vmware deployment

You will see a reboot and install; this is installing what will be called the physical instance. Note that this is really being installed on the VM system drive as a secondary boot option (e.g. Azure Stack).

    azure stack tp3 dual boot option

After the reboot, log in to the new Azure Stack base system and complete any configuration, including adding VMware Tools if using VMware nesting. Some other things to do include making sure you have your single network adapter set to a static address (makes things easier), along with any other updates or customizations. Before you run the next steps, you need to decide if you are going to use Azure Active Directory (AAD) or local ADFS.

Note that if you are not running on a virtual machine, simply open a PowerShell (administrator) session and run the deploy script. Refer to here for more guidance on the various options available, including discussion on using AAD or ADFS.

Note that if you run the deployment script on a virtual machine, you will get an error, which is addressed in the next section; otherwise, sit back and watch the progress.

    CloudBuilder Deployment Time

Once you have your Azure Stack deployment system and environment ready, including a snapshot if on a virtual machine, launch the PowerShell deployment script. Note that you will need to have decided if deploying with Azure Active Directory (AAD) or Active Directory Federation Services (ADFS) for standalone aka submarine mode. There are also other options you can select as part of the deployment, discussed in the Azure Stack tips here (a must read) and here. I chose to do a submarine mode (e.g. not connected to public Azure and AAD) deployment.

    From PowerShell (administrator):

cd C:\CloudDeployment\Setup
    $adminpass = ConvertTo-SecureString "youradminpass" -AsPlainText -Force
    .\InstallAzureStackPOC.ps1 -AdminPassword $adminpass -UseADFS
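For reference, if deploying connected to Azure with AAD instead of ADFS, the TP3-era guidance showed a tenant name parameter along these lines (hedged sketch; the tenant is a placeholder, so check the documentation for the exact switches):

cd C:\CloudDeployment\Setup
$adminpass = ConvertTo-SecureString "youradminpass" -AsPlainText -Force
.\InstallAzureStackPOC.ps1 -AdminPassword $adminpass -InfraAzureDirectoryTenantName "yourtenant.onmicrosoft.com"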

    Deploying on VMware Virtual Machines Tips

Here is a good tip via Gareth Jones (@garethjones294) that I found useful for updating one of the deployment script files (BareMetal_Tests.ps1, located in the C:\CloudDeployment\Roles\PhysicalMachines\Tests folder) so that it will skip the bare metal (PM) vs. VM tests. Another good resource, even though it is for TP2 and earlier versions of VMware, is TP2 deployment experiences by Niklas Akerlund (@vNiklas).

Note that this is a bit of a chicken and egg scenario, unless you are proficient at digging into script files, since the BareMetal_Tests.ps1 file does not get unpacked until you run the CloudBuilder deployment script. If you run the script and get an error, then make the changes below and rerun the script as noted. Once you make the modification to the BareMetal_Tests.ps1 file, keep a copy in a safe place for future use.

Here are some more tips for deploying Azure Stack on VMware.

Per the tip mentioned above via Gareth Jones (tip: read Gareth's post vs. simply cutting and pasting the following, which is more of a guide):

• Open the BareMetal_Tests.ps1 file in PowerShell ISE and navigate to line 376 (or in that area).
• Change $false to $true, which will stop the script from failing when checking to see if Azure Stack is running inside a VM.
• Next go to line 453.
• Change the last part of the line to read "Should Not BeLessThan 0". This will stop the script from checking for the required number of cores available.

After you make the above corrections, as with any error (and fix) during Azure Stack TP3 PoC deployment, simply run the following.

    cd C:\CloudDeployment\Setup
    .\InstallAzureStackPOC.ps1 -rerun
    

Refer to the extra links in the where to learn more section below, which offer various tips, tricks and insight that I found useful, particularly for deploying on VMware aka nested. Also in the links below are tips on general Azure Stack, TP2, TP3, and adding services, among other insight.

    starting azure stack tp3 deployment

Tip: If you are deploying Azure Stack TP3 PoC on a virtual machine, once you start the script above, copy the modified BareMetal_Tests.ps1 file back into the C:\CloudDeployment\Roles\PhysicalMachines\Tests folder mentioned earlier.

Once the CloudBuilder deployment starts, sit back and wait; if you are using SSDs it will take a while, and if using HDDs it will take a long while (up to hours). However, check in on it now and then to see progress or if any errors have occurred. Note that some of the common errors will occur very early in the deployment, such as the BareMetal_Tests.ps1 one mentioned above.
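One way to keep an eye on things between check-ins is to tail the newest deployment log; the log location shown is an assumption based on the TP-era layout, so adjust as needed.

# Follow the most recently written deployment log
Get-ChildItem C:\CloudDeployment\Logs -Recurse -Filter *.log |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1 |
    Get-Content -Tail 20 -Wait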

    azure stack tp3 deployment finished

Check in periodically to see how the deployment is progressing, as well as what is occurring. If you have the time, watch some of the scripts, as you can see some interesting things such as the software defined data center (SDDC) aka software-defined data infrastructure (SDDI) aka Azure Stack virtual environment being created. This includes virtual machine creation and population, creating the software defined storage using Storage Spaces Direct (S2D), and virtual networking and Active Directory along with domain controllers, among other activity.

    azure stack tp3 deployment progress

    After Azure Stack Deployment Completes

After you see the deployment complete, you can try accessing the management portal; however, there may be some background processing still running. Here is a good tip post on connecting to Azure Stack from Microsoft using Remote Desktop (RDP) access. Use RDP from the Azure Stack deployment Windows Server and connect to the virtual machine named MAS-CON01, launch Server Manager, and for Local Server disable Internet Explorer Enhanced Security (make sure you are on the right system, see the tip mentioned above). Disconnect from MAS-CON01 (refer to the Azure Stack architecture image above), then reconnect and launch Internet Explorer with the portal URL (note: the URL the documentation said to use did not work for me).

Note the username for the Azure Stack system is AzureStack\AzureStackAdmin, with the password being what you set for the administrator during setup. If you get an error, verify the URLs, check your network connectivity, wait a few minutes, and verify what server you are trying to connect from and to. Keep in mind that even if deploying on a PM or BM (e.g. a non-virtual server or VM), the Azure Stack TP3 PoC deployment creates a "virtual" software-defined environment with servers, storage (Azure Stack uses Storage Spaces Direct [S2D]) and software defined networking.

    accessing azure stack tp3 management portal dashboard

Once able to connect to Azure Stack, you can add new services, including virtual machine image instances such as Windows (use the Server 2016 ISO that is part of the Azure Stack downloads), Linux or others. You can also go to these Microsoft resources for some first learning scenarios, using the management portals, configuring PowerShell and troubleshooting.

    Where to learn more

    The following provide more information and insight about Azure, Azure Stack, Microsoft and Windows among related topics.

  • Azure Stack Technical Preview 3 (TP3) Overview Preview Review
  • Azure Stack TP3 Overview Preview Review Part II
  • Azure Stack Technical Preview (get the bits aka software download here)
  • Azure Stack deployment prerequisites (Microsoft)
  • Microsoft Azure Stack troubleshooting (Microsoft Docs)
  • Azure Stack TP3 refresh tips (Azure Stack)
  • Here is a good post with a tip about not applying certain Windows updates to Azure Stack TP3 installs.
  • Configure Azure Stack TP3 to be available on your own network (Azure Stack)
  • Azure Stack TP3 Marketplace syndication (Azure Stack)
  • Azure Stack TP3 deployment experiences (Azure Stack)
  • Frequently asked questions for Azure Stack (Microsoft)
  • Azure Active Directory (AAD) and Active Directory Federation Services (ADFS)
  • Deploy Azure Stack (Microsoft)
  • Connect to Azure Stack (Microsoft)
  • Azure Stack TP2 deployment experiences by Niklas Akerlund (@vNiklas) useful for tips for TP3
  • Deployment Checker for Azure Stack Technical Preview (Microsoft Technet)
  • Azure Stack and other tools (Github)
  • How to enable nested virtualization on Hyper-V Windows Server 2016
  • Dell EMC announce Microsoft Hybrid Cloud Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack Data Sheet (Dell EMC PDF)
  • Dell EMC Cloud Chats (Dell EMC Blog)
  • Microsoft Azure Stack forum
  • Dell EMC Microsoft Azure Stack solution
  • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016
  • Overview Review of Microsoft ReFS (Reliable File System) and resource links
  • Via WServerNews.com Cloud (Microsoft Azure) storage considerations
  • Via CloudComputingAdmin.com Cloud Storage Decision Making: Using Microsoft Azure for cloud storage
  • www.thenvmeplace.com, www.thessdplace.com, www.objectstoragecenter.com and www.storageio.com/converge
What this all means

A common question is whether there is demand for private and hybrid cloud. In fact, some industry expert pundits have even said private or hybrid cloud is dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

Given the large number of Microsoft Windows-based servers on VMware, OpenStack and public cloud services as well as other platforms, along with the continued growing popularity of Azure, having a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and whether it is only for Windows guest operating systems. At this point, indeed, Windows would be an attractive and comfortable option; however, given the large number of Linux-based guests running on Hyper-V as well as Azure public cloud, those are also primary candidates, as are containers and other services.

    software defined data infrastructures SDDI and SDDC

Some will say that if OpenStack, being free open source, is struggling in many organizations, how can Microsoft have success with Azure Stack? The answer could be that some organizations have struggled with OpenStack due to a lack of commercial services and turnkey support, while others have not. Having installed both OpenStack and Azure Stack (as well as VMware among others), Azure Stack, at least the TP3 PoC, is easy to install, granted it is limited to one node, unlike the production versions. Likewise, there are easy to use appliance versions of OpenStack that are limited in scale, as well as more involved installs that unlock full functionality.

OpenStack, Azure Stack, VMware and others have their places, alongside or supporting containers along with other tools. In some cases, those technologies may exist in the same environment supporting different workloads, as well as accessing various public clouds; after all, hybrid is the home run for many if not most legacy IT environments.

    Ok, nuff said (for now…).

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    Cloud conversations: Has Nirvanix shutdown caused cloud confidence concerns?

    Storage I/O trends

Recently, seven-plus year old cloud storage startup Nirvanix announced that they were finally shutting down and that customers should move their data.

    nirvanix customer message

Nirvanix has also posted an announcement that they have established an agreement with IBM Softlayer (read about that acquisition here) to help customers migrate to those services, as well as to those of Amazon Web Services (AWS) (read more about AWS in this primer here), Google and Microsoft Azure.

    Cloud customer concerns?

With Nirvanix shutting down, there have been plenty of articles, blog posts, twitter tweets and other conversations asking if clouds are safe.

    Btw, here is a link to my ongoing poll where you can cast your vote on what you think about clouds.

    IMHO clouds can be safe if used in safe ways which includes knowing and addressing your concerns, not to mention following best practices, some of which pre-date the cloud era, sometimes by a few decades.

    Nirvanix Storm Clouds

More on this in a moment; however, let's touch base on Nirvanix and why I said they were finally shutting down.

    The reason I say finally shutting down is that there were plenty of early warning signs and storm clouds circling Nirvanix for a few years now.

    What I mean by this is that in their seven plus years of being in business, there have been more than a few CEO changes, something that is not unheard of.

Likewise there have been some changes to their business model, ranging from selling their software as a service, to a solution, to hosting, among others; again, smart startups and established organizations will adapt over time.

Nirvanix also invested heavily in marketing, public relations (PR) and analyst relations (AR) to generate buzz and gain endorsements, as most startups do, to get recognition, followings and investors, if not real customers, on board.

In the case of Nirvanix, the indicator signs mentioned above also included what seemed like a semi-annual if not annual changing of CEOs, marketing and others, tying into business model adjustments.

    cloud storage

It was only a year or so ago that if you gauged a company's health by the PR and AR news or activity and endorsements, you would have believed Nirvanix was about to crush Amazon, Rackspace or many others; perhaps some actually did believe that, followed shortly thereafter by the abrupt departure of their then CEO and marketing team. Thus, just as fast as Nirvanix seemed to be the phoenix rising to stardom, their aura started to dim again, which could or should have been a warning sign.

This is not to single out Nirvanix; however, given their penchant for marketing and now what appears to some as a sudden collapse or shutdown, they have also become a lightning rod of sorts for clouds in general. Given all the hype and FUD around clouds, when something does happen the detractors will be quick to jump or pile on to say things like "See, I told you, clouds are bad".

Meanwhile the cloud cheerleaders may go into denial, saying there are no problems or issues with clouds, or they may go back into a committee meeting to create a new stack, standard, API set or marketing consortium alliance. ;) On the other hand, there are valid concerns with any technology, including clouds: in general there are good implementations that can be used the wrong way, or questionable implementations and selections used in what seem like good ways that can go bad.

This is not to say that clouds in general, whether as a service, solution or product on a public, private or hybrid basis, are any riskier than traditional hardware, software and services. Instead, what this should be is a wake-up call for people and organizations to review clouds, citing their concerns along with revisiting what to do or what can be done about them.

    Clouds: Being prepared

Ben Woo of Neuralytix posted this question/comment to one of the LinkedIn groups, Collateral Considerations If You Were/Are A Nirvanix Customer, to which I posted some tips and recommendations including:

1) If you have another copy of your data somewhere else (which you should, btw), how will your data at Nirvanix be securely erased, and the storage it resides on be safely (and securely) decommissioned?

2) If you do have another copy of your data elsewhere, how current is it, and can you bring it up to date from various sources (including updates from Nirvanix while they stay online)?

3) Where will you move your data to, short or near term, as well as long-term?

    4) What changes will you make to your procurement process for cloud services in the future to protect against situations like this happening to you?

    5) As part of your plan for putting data into the cloud, refine your strategy for getting it out, moving it to another service or place as well as having an alternate copy somewhere.

FWIW, for any data I put into a cloud service, there is also another copy somewhere else. Even though that carries a cost, there is a benefit: the ability to decide which copy to use if needed, as well as having a backup/spare copy.

    Storage I/O trends

    Cloud Concerns and Confidence

As part of cloud procurement, whether for services or products, the same proper due diligence should occur as if you were buying traditional hardware, software, networking or services. That includes checking out not only the technology, but also the company's financials, business records, and customer references (both good and not so good or bad ones) to gain confidence. Part of gaining that confidence also involves addressing ahead of time how you will get your data out of or back from that service if needed.

Keep in mind that if your data is very important, are you going to keep it in just one place? For example, I have data backed up as well as archived to cloud providers; however, I also have local copies either on-site or off.

Likewise there is data I have locally that is also kept at alternate locations, including the cloud. Sure that is costly; however, by not treating all of my data and applications the same, I'm able to balance those costs out, plus use the cost advantages of different services as well as on-site to be effective. I may be spending no less on data protection, in fact I'm actually spending a bit more; however, I also have more copies and versions of important data in multiple locations. Data that is not changing often does not get protected as often; however, there are multiple copies to meet different needs or threat risks.

    Storage I/O trends

    Don’t be scared of clouds, be prepared

While some of the other smaller cloud storage vendors will see some new customers, I suspect that near to mid-term it will be the larger, more established and well funded providers that gain the most from this current situation. Granted, some customers are looking for alternatives to the mega cloud providers such as Amazon, Google, HP, IBM, Microsoft and Rackspace among others; however, there is a long list of others, some of whom are not so well-known but should be, such as Centurylink/Savvis, Verizon/Terremark, SunGard, Dimension Data, Peak, Bluehost, Carbonite, Mozy (owned by EMC), Xerox ACS, and Evault (owned by Seagate), not to mention many others.

Something to be aware of as part of doing your due diligence is determining who or what actually powers a particular cloud service. The larger providers such as Rackspace, Amazon, Microsoft and HP among others have their own infrastructure, while some of the smaller service providers may in fact use one of the larger (or even smaller) providers as their real back-end. Hence, understanding who is behind a particular cloud service is important to help decide the viability and stability of whoever it is you are subscribed to or working with.

Something that I have said for the past couple of years, and a theme of my book Cloud and Virtual Data Storage Networking (CRC Taylor & Francis), is do not be scared of clouds; however, be ready and do your homework.

This also means having cloud concerns is a good thing; again, don't be scared, however find out what those concerns are along with whether they are major or minor. From that list you can start to decide how or if they can be worked around, as well as be prepared ahead of time should you either need all of your cloud data back quickly, or should that service become unavailable.

Also when it comes to clouds, look beyond lowest cost or free; likewise, if something sounds too good to be true, perhaps it is. Instead look for value, or how much you get per what you spend, including confidence in the service, service level agreements (SLA), security, and other items.

    Keep in mind, only you can prevent data loss either on-site or in the cloud, granted it is a shared responsibility (With a poll).

    Additional related cloud conversation items:
    Cloud conversations: AWS EBS Optimized Instances
    Poll: What Do You Think of IT Clouds?
    Cloud conversations: Gaining cloud confidence from insights into AWS outages
    Cloud conversations: confidence, certainty and confidentiality
    Cloud conversation, Thanks Gartner for saying what has been said
    Cloud conversations: AWS EBS, Glacier and S3 overview (Part III)
    Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)
    Don’t Let Clouds Scare You – Be Prepared
    Everything Is Not Equal in the Datacenter, Part 3
    Amazon cloud storage options enhanced with Glacier
    What do VARs and Clouds as well as MSPs have in common?
    How many degrees separate you and your information?

    Ok, nuff said.

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Cloud conversation, Thanks Gartner for saying what has been said

    StorageIO industry trends cloud, virtualization and big data

Thank you Gartner for your statements concurring with and endorsing the notion that clouds can be viable, however do your homework; welcome to the club.

    Why am I thanking Gartner?

Simple, I appreciate Gartner now saying what has been said for a couple of years, hoping it will help to amplify the theme to the Gartner followers and faithful.

    Gartner: Cloud storage viable option, but proceed carefully


    Images licensed for use by StorageIO via Atomazul / Shutterstock.com

It sounds like Gartner has come to the same conclusion as what has been said for several years now in posts, articles, keynotes, presentations, webinars and other venues, which is: when it comes to IT clouds, don't be scared. However, do your homework, be prepared, do your due diligence and proof of concepts.

    Image of clouds, cloud and virtual data storage networking book

    Here are some related materials to prepare and plan for IT clouds (public and private):

    What is your take on IT clouds? Click here to cast your vote and see what others are thinking about clouds.

Now for those who feel that free information or content is not worth its price, feel free to go to Amazon and buy some book copies here, or subscribe to the Kindle version of the StorageIOblog, or contact us for an advisory consultation or other project. For everybody else, enjoy, and remember, don't be scared of clouds; do your homework, be prepared, and keep in mind that clouds are a shared responsibility.

Disclosure: I was a Gartner client when I was working in an IT organization and then later as a vendor, however not anymore ;).

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Cloud conversations: Gaining cloud confidence from insights into AWS outages

    StorageIO industry trends cloud, virtualization and big data

    This is the first of a two-part industry trends and perspectives series looking at how to learn from cloud outages (read part II here).

In case you missed it, there were some public cloud outages during the recent Christmas 2012 holiday season. One incident involved Microsoft Xbox users being impacted (view the Microsoft Azure status dashboard here), and the other was another Amazon Web Services (AWS) incident. Microsoft and AWS are not alone; most if not all cloud services have had some type of incident and have gone on to improve from those outages. Google has had issues with different applications and services, including some in December 2012, along with a Gmail incident that received coverage back in 2011.

For those interested, here is a link to the AWS status dashboard and a link to the AWS December 24 2012 incident postmortem. In the case of the recent AWS incident, which affected users such as Netflix, the incident (read the AWS postmortem and Netflix postmortem) was tied to human error. This is not to say AWS has more outages or incidents vs. others including Microsoft; it just seems that we hear more about AWS when things happen compared to others. That could be due to AWS size and arguably market leading status, the diversity of its services, and the scale at which some of their clients are using them.

Btw, if you were not aware, Microsoft Azure is more than just about supporting SQL Server, Exchange, SharePoint or Office; it is also an IaaS layer for running virtual machines such as Hyper-V, as well as a storage target for storing data. You can use Microsoft Azure storage services as a target for backing up or archiving, or as general storage, similar to using AWS S3, Rackspace Cloud Files or other services. Some backup and archiving AaaS and SaaS providers, including Evault, partner with Microsoft Azure as a storage repository target.

When reading some of the coverage of these recent cloud incidents, I am not sure if I am more amazed by some of the marketing cloud washing, or by the cloud bashing and uninformed reporting or lack of research and insight. Then again, if someone repeats a myth often enough for others to hear and repeat, as it gets amplified, the myth may assume the status of reality. After all, you may know the expression that if it is on the internet then it must be true?

Images licensed for use by StorageIO via Atomazul / Shutterstock.com

    Have AWS and public cloud services become a lightning rod for when things go wrong?

    Here is some coverage of various cloud incidents:

The above are a small sampling of different stories, articles, columns, blogs and perspectives about cloud services outages or other incidents. Assuming the services are available, you can Google or Bing many others, along with reading postmortems, to gain insight into what happened, the cause, the effect, and how to prevent it in the future.

    Do these recent incidents show a trend of increased cloud outages? Alternatively, do they say that the cloud services are being used more and on a larger basis, thus the impacts become more known?

Perhaps it is a mix of the above; like when a magnetic storage tape gets lost or stolen, it makes for good news or copy, something to write about. Granted there are fewer tapes actually lost than in the past, and far fewer vs. lost or stolen laptops and other devices with data on them. There are probably other reasons, such as the lightning rod effect: given how much industry hype there is around clouds, when something does happen, the cynics or foes come out in force, sometimes with FUD.

Similar to traditional hardware or software based product vendors, some service providers have even tried to convince me that they have never had an incident, never lost, corrupted or compromised any data; yeah, right. Candidly, I put more credibility and confidence in a vendor or solution provider who tells me that they have had incidents and taken steps to prevent them from recurring. Granted some of those steps might be made public while others might be under NDA; at least they are learning and implementing improvements.

    As part of gaining insights, here are some links to AWS, Google, Microsoft Azure and other service status dashboards where you can view current and past situations.

    What is your take on IT clouds? Click here to cast your vote and see what others are thinking about clouds.

Ok, nuff said for now (check out part II here)

    Disclosure: I am a customer of AWS for EC2, EBS, S3 and Glacier as well as a customer of Bluehost for hosting and Rackspace for backups. Other than Amazon being a seller of my books (and my blog via Kindle) along with running ads on my sites and being an Amazon Associates member (Google also has ads), none of those mentioned are or have been StorageIO clients.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Cloud, virtualization, storage and networking in an election year

My how time flies; it seems like just yesterday (back in 2008) that I did a piece titled Politics and Storage, or, storage in an election year V2.008. If you are not aware, it is 2012 and thus an election year in the U.S. as well as in many other parts of the world. Being an election year, it's not just about politicians, their supporters, pundits, surrogates, donors and voters; it's also a technology decision-making and acquisition year (as are most years) for many environments.

Similar to politics, some technology decisions will be major while others will be minor or renewals, so to speak. Major decisions will revolve around strategies, architectures, visions, implementation plans and technology selections, including products, protocols, processes, people, vendors or suppliers and services for traditional, virtual and cloud data infrastructure environments.

Vendors, suppliers, service providers and their associated industry forums or alliances and trade groups are in various sales and marketing awareness campaigns. These campaigns aim to influence which of them will be chosen by customers or prospects for technology acquisitions ranging from hardware, software and services including servers, storage, IO and networking, desktops, power, cooling, facilities, management tools, virtualization and cloud products and services, along with related items.

The politics of data infrastructures, including servers, storage, networking, hardware, software and services spanning physical, cloud and virtual environments, have similarities to other political races. These include, in many organizations, interdepartmental rivalry over budgets or funding, service levels, decision-making, turf wars and technology ownership, not to mention the usual vendor vs. vendor, VAR vs. VAR, service provider vs. service provider or other match-ups.

    On the other hand, data and storage are also being used to support political campaigns in many ways across physical, virtual and cloud deployment scenarios.

    StorageIO industry trends cloud, virtualization and big data

Let us not forget about the conventions, or what are more commonly known as shows, conferences and user group events in the IT world. For example, EMCworld earlier this year, Dell Storage Forum, or the recent VMworld (or click here to view video from a past VMworld party with INXS), Oracle Open World, along with many vendor analyst, partner, press and media or blogger days.

    Here are some 2012 politics of data infrastructure and storage campaign match-ups:

Speaking of networks vs. server and storage or software and convergence, how about Brocade vs. Cisco, QLogic vs. Emulex, Broadcom vs. Mellanox, Juniper vs. HP and Dell (Force10), or Arista vs. others in the race for SAN, LAN, MAN, WAN, POTS and PANs.

Then there are the claims and counter claims, pundits, media, bloggers, trade groups or lobbyists, marketing alliances or PACs, paid-for ads, posts, tweets and videos, along with supporting metrics for traditional and social media.

Let's also not forget about polls, and more polls.

Certainly, there are vendors vs. vendors relying on their campaign teams (sales, marketing, engineering, financing and external surrogates), similar to what you would find with a politician; of course, scope, size and complexity vary.

Surrogates include analysts, bloggers, consultants, business partners, community organizers, editors, VARs, influencers, press, public relations and publications, among others. Some claim to be objective and free of vendor influence while leveraging simple to complex schemes for remuneration (e.g. getting paid), while others simply state what they are doing and with whom.

Likewise, some point fingers at others who are misbehaving while deflecting away from what they are actually doing. Hmm, that sounds like the pundit or surrogate two-step (as opposed to the Potomac two-step) and prompts the question of who is checking the fact checkers and making disclosures (disclosure: this piece is being sponsored by StorageIO ;) )?

    StorageIO industry trends cloud, virtualization and big data

What does this all mean?

Use your brain, use your eyes and ears, and use your nose, all of which have dual paths to your senses.

In other words, if something sounds or looks too good to be true, it probably is not true.

Likewise, if something smells funny or does not feel right to your senses or common sense, it probably is not right, or at least requires a closer look or analysis.

Be an informed decision maker, balancing needs vs. wants to make effective selections, regardless of whether it is for a major or minor item, technology, trend, product, process, protocol or service. Informed decisions also mean looking at both current and evolving or future trends, challenges and needs, which for data infrastructures including servers, storage, networking, IO fabrics, cloud and virtualization means factoring in changing data and information life cycles and access or usage patterns. After all, while there are tough economic times on a global basis, there is no such thing as a data or information recession.

    StorageIO and uncle sam want you for cloud virtualization and data storage networking

    This also means gaining insight and awareness of issues and challenges, plus balancing awareness and knowledge (G2) vs. looks, appearances and campaign sales pitches (GQ) for your particular environment, priorities and preferences.

    Keep in mind and in the spirit of legendary Chicago style voting, when it comes to storage and data infrastructure topics, technologies and decisions, spend early, spend often and spend for those who cannot to keep the vendors and their ecosystem of partners happy.

Note that this post is neither supported, influenced, endorsed nor paid for by any vendors, VARs, service providers, trade groups, political action committees or picture archive communication systems (e.g. PACs, both of which deal with and in big data), industry consortiums, their partners, customers or surrogates, and they probably would not approve of it anyway.

    With that being said, I am Greg Schulz of StorageIO and am not running for or from anything this year and I do endorse the above post ;).

    Ok, nuff said for now

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Cloud conversations: AWS Government Cloud (GovCloud)

    StorageIO industry trends clouds, virtualization, data and storage networking image

Following earlier cloud conversations posts, cloud computing means many things, from products to services and functionality, positioned for different layers of service delivery or capabilities (e.g. SaaS, AaaS, PaaS, IaaS and XaaS).

Consequently, it is no surprise when I hear from different people their opinions, beliefs or perceptions of what is or is not a cloud, their confidence or concerns, or how to use and abuse clouds, among other related themes.

A common theme I hear talking with IT professionals on a global basis centers on confidence in clouds, including reliability, security, privacy, compliance and confidentiality for where data is protected and preserved. This includes data being stored in different geographic locations, ranging from states or regions to countries and continents. What I also often hear are discussions about concerns over data from countries outside of the US being stored in the US, or vice versa, given differing information privacy laws.

    StorageIO cloud travel image

Cost is also coming up in many conversations, which is interesting in that many early value propositions presented cloud as being cheaper. As with many things, it depends: some services and usage models can be cheaper on a relative basis, just as some can be more expensive. Think of it this way: for some people, a lease of an automobile can be cheaper on monthly cash flow vs. buying or making loan payments. On the other hand, a buy or loan payment can have a lower overall cost than a lease, depending on different factors.
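As a back-of-the-envelope illustration of the lease vs. buy analogy, here is a toy sketch; every number below is invented purely for illustration, not actual cloud or vehicle pricing.

```python
# Toy arithmetic for the lease vs. buy analogy. Every figure is hypothetical.
lease_monthly = 400.0    # "pay as you go" style: lower monthly cash flow
loan_monthly = 550.0     # "buy" style: higher monthly payment...
residual_value = 8000.0  # ...but you own an asset worth something at the end
months = 36

lease_total = lease_monthly * months                # 14,400
buy_total = loan_monthly * months - residual_value  # 19,800 - 8,000 = 11,800

print(f"Lease total over {months} months: ${lease_total:,.0f}")
print(f"Buy total net of residual value:  ${buy_total:,.0f}")
# With these numbers, buying wins on total cost while leasing wins on monthly
# cash flow; change the assumptions and the outcome flips, which is the point.
```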

    As with many cloud conversations, cost and return on investment (ROI) will vary, just as how the cloud is used to impact your return on innovation (the new ROI) will also vary.

This brings me to something else I hear during my travels and in other conversations with IT practitioners (e.g. customers and users as well as industry pundits): a belief that governments cannot use clouds. Again, it depends on the type of government, the applications, and the sensitivity of the data, among other factors.

Some FUD (fear, uncertainty and doubt) I hear includes blanket statements such as governments cannot use cloud services, or cloud services do not exist for governments. Again, it comes down to digging deeper into the conversation: what type of cloud, applications, government function, security and sensitivity, among other factors.

Keep in mind that there are services, including those from Amazon Web Services (AWS), such as their Government Cloud (GovCloud) region. Granted, GovCloud is not applicable to all government cloud needs, types of applications, data or security clearances, among other concerns.

Needless to say, AWS GovCloud is not the only solution out there on a public (government focused), private or hybrid basis; there are probably even some super double secret ultra-private or hybrid fortified government clouds that most in the government, including experts, are not aware of. However, if those do exist, talking about them is probably off-limits, even for the experts.

    Amazon Web Services logo

Speaking of AWS, here is a link to an analysis of their cloud storage for archiving and inactive big data called Glacier, along with an analysis of the AWS Cloud Storage Gateway. Also, keep in mind that protecting data in the cloud is a shared responsibility, meaning there are things both you as the user or consumer and the provider need to do.

    Btw, what is your take on clouds? Click here to cast your vote and see what others are thinking about clouds.

    Ok, nuff said for now.

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Open Data Center Alliance (ODCA) publishes two new cloud usage models

The Open Data Center Alliance (ODCA) has announced and published more documents for data center customers on cloud usage. These new cloud usage models address customer demands for interoperability across various clouds and services. Earlier documents covered Infrastructure as a Service (IaaS) among other topics; they are now joined by new Software as a Service (SaaS) and Platform as a Service (PaaS) usage models, along with a foundational document for cloud interoperability.

Unlike most industry trade groups or alliances that are vendor driven or centric, ODCA is a vendor-independent consortium of global IT leaders (e.g. customers) with a 12-member steering committee drawn from member companies; learn more about ODCA here.

Disclosure note: StorageIO is an ODCA member; visit here to become an ODCA member as well.

    From the ODCA announcement of the new documents:

    The documents detail expectations for market delivery to the organizations mission of open, industry standard cloud solution adoption, and discussions have already begun with providers to help accelerate delivery of solutions based on these new requirements. This suite of requirements was joined by a Best Practices document from National Australia Bank (NAB) outlining carbon footprint reductions in cloud computing. NAB’s paper illustrates their leadership in innovative methods to report carbon emissions in the cloud and aligns their best practices to underlying Alliance requirements. All of these documents are available in the ODCA Documents Library.

    The PaaS interoperability usage model outlines requirements for rapid application deployment, application scalability, application migration and business continuity. The SaaS interoperability usage model makes applications available on demand, and encourages consistent mechanisms, enabling cloud subscribers to efficiently consume SaaS via standard interactions. In concert with these usage models, the Alliance published the ODCA Guide to Interoperability, which describes proposed requirements for interoperability, portability and interconnectivity. The documents are designed to ensure that companies are able to move workloads across clouds.

It is great to see IT customer driven or centric groups step up and actually deliver content and material to help their peers, or in some cases competitors, that complements information provided by vendors and vendor-driven trade groups.

As with technologies, tools and services that often are seen as competitive, a mistake would be viewing ODCA as being in competition with other industry trade groups and organizations, or vice versa. Rather, IT organizations and vendors can and should leverage the different content from the various sources. This is an opportunity, for example, for vendors to learn more about what customers are thinking or concerned about, as opposed to telling IT organizations what to be looking at, and vice versa.

Granted, some marketing organizations or even trade groups may not like that, viewing groups such as ODCA as taking away control of who decides what is best for customers. Smart vendors, VARs, business partners, consultants and advisors are and will be leveraging material and resources such as ODCA's, and likewise, groups like ODCA are open to a diverse membership, unlike some pay-to-play, vendor-centric industry trade groups. If you are a vendor, VAR or business partner, don't look at ODCA as a threat; instead, explore how your customers or prospects may be involved with or using ODCA material, and leverage that as a differentiator between you and your competitors.

Likewise, don't be scared of vendor-centric industry trade groups, alliances or consortiums; even the pay-to-play ones can have some value, although some have more value than others. For example, from a storage and storage networking perspective, there are the Storage Networking Industry Association (SNIA) along with its various groups focused on green and energy topics and the Cloud Data Management Interface (CDMI), among others. There is also the SCSI Trade Association (STA), the Open Virtualization Alliance (OVA), the OpenFabrics Alliance (OFA), the Open Networking Foundation (ONF) and the Computer Measurement Group (CMG), among many others that do good work and offer value with diverse content and offerings, some of which are free, including to non-members.

    Learn more about the ODCA here, along with access various documents including usage models in the ODCA document library here.

While you are at it, why not join StorageIO and other members by signing up to become a part of the ODCA here.

    Ok, nuff said for now.

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Data protection modernization, more than swapping out media

    backup, restore, BC, DR and archiving

    Have you modernized your data protection strategy and environment?

    If not, are you thinking about updating your strategy and environment?

Why modernize your data protection, including backup and restore, business continuance (BC), high availability (HA) and disaster recovery (DR), strategy and environment?

    backup, restore, BC, DR and archiving

    Is it to leverage new technology such as disk to disk (D2D) backups, cloud, virtualization, data footprint reduction (DFR) including compression or dedupe?

Perhaps you have modernized or are considering data protection modernization because somebody told you to, or you read about it or watched a video or web cast? Or perhaps your backup and restore are broken, so it is time to change media or try something different.

Let's take a step back for a moment and ask the question: what is your view of data protection modernization?

    Perhaps it is modernizing backup by replacing tape with disk, or disk with clouds?

    Maybe it is leveraging data footprint reduction (DFR) techniques including compression and dedupe?

    Data protection, data footprint reduction, dfr, dedupe, compress

    How about instead of swapping out media, changing backup software?

    Or what about virtualizing servers moving from physical machines to virtual machines?

    On the other hand maybe your view of modernizing data protection is around using a different product ranging from backup software to a data protection appliance, or snapshots and replication.

The above and others certainly fall under the broad group of backup, restore, BC, DR and archiving; however, there is another area that is less about technology and more about techniques, best practices, processes and procedures. That is, revisiting why data and applications are being protected, against which applicable threats, and the associated business risks.

    backup, restore, BC, DR and archiving

This means reviewing service needs and wants, including backup, restore, BC, DR and archiving, which in turn drive what data and applications to protect, how often, how many copies to keep and where they are located, along with how long they will be retained.

    backup, restore, BC, DR and archiving

    Modernizing data protection is more than simply swapping out old or broken media like flat tires on a vehicle.

To be effective, data protection modernization involves taking a step back from the technology, tools and buzzword bingo topics to review what is being protected and why. It also means revisiting service level expectations and clarifying wants vs. needs: what would be wanted if it were free vs. what is actually required when there is a cost attached.

    backup, restore, BC, DR and archiving

    Certainly technologies and tools play a role, however simply using new tools and techniques without revisiting data protection challenges at the source will result in new problems that resemble old problems.

    backup, restore, BC, DR and archiving

    Hence to support growth with a constrained or shrinking budget while maintaining or enhancing service levels, the trick is to remove complexity and costs.

    backup, restore, BC, DR and archiving

This means not treating all data and applications the same; stretching your available resources to be more effective without compromising on service is the mantra of modernizing data protection.
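One way to picture not treating everything the same is a simple tiered protection policy. The sketch below is illustrative only: the tier names, recovery point and time objectives (RPO/RTO), copy counts, locations and retention values are all made up to show the kind of parameters such a review drives.

```python
# Illustrative sketch of tiered data protection policies. All tier names,
# RPO/RTO targets, copy counts, locations and retention periods are invented
# examples, not recommendations.
protection_tiers = {
    "mission_critical": {"rpo_hours": 0.25, "rto_hours": 1, "copies": 3,
                         "locations": ["onsite", "offsite", "cloud"],
                         "retention_days": 2555},   # roughly seven years
    "business_important": {"rpo_hours": 4, "rto_hours": 8, "copies": 2,
                           "locations": ["onsite", "cloud"],
                           "retention_days": 365},
    "low_priority": {"rpo_hours": 24, "rto_hours": 72, "copies": 1,
                     "locations": ["cloud"],
                     "retention_days": 90},
}

def policy_for(tier: str) -> dict:
    """Return the protection parameters for an application's assigned tier."""
    return protection_tiers[tier]

# Example: an application classified as business important.
print(policy_for("business_important"))
```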

    Ok, nuff said for now, plenty more to discuss later.

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Amazon Web Services (AWS) and the NetFlix Fix?

Amazon Web Services (AWS)

I received the following note from Amazon Web Services (AWS) about an enhancement to their Elastic Compute Cloud (EC2) service that can be seen by some as a service enhancement, or perhaps by others, after last week's outages, as a fix addressing a gap in their services. Note for those not aware, you can view the current AWS service status portal here.

    The following is the note I received from AWS.

     

    Announcing Multiple IP Addresses for Amazon EC2 Instances in Amazon VPC

Amazon Web Services (AWS)
    Dear Amazon EC2 Customer,

    We are excited to introduce multiple IP addresses for Amazon EC2 instances in Amazon VPC. Instances in a VPC can be assigned one or more private IP addresses, each of which can be associated with its own Elastic IP address. With this feature you can host multiple websites, including SSL websites and certificates, on a single instance where each site has its own IP address. Private IP addresses and their associated Elastic IP addresses can be moved to other network interfaces or instances, assisting with application portability across instances.

The number of IP addresses that you can assign varies by instance type. Small instances can accommodate up to 8 IP addresses (across 2 elastic network interfaces) whereas High-Memory Quadruple Extra Large and Cluster Compute Eight Extra Large instances can be assigned up to 240 IP addresses (across 8 elastic network interfaces). For more information about IP address and elastic network interface limits, go to Instance Families and Types in the Amazon EC2 User Guide.

    You can have one Elastic IP (EIP) address associated with a running instance at no charge. If you associate additional EIPs with that instance, you will be charged $0.005/hour for each additional EIP associated with that instance per hour on a pro rata basis.

    With this release we are also lowering the charge for EIP addresses not associated with running instances, from $0.01 per hour to $0.005 per hour on a pro rata basis. This price reduction is applicable to EIP addresses in both Amazon EC2 and Amazon VPC and will be applied to EIP charges incurred since July 1, 2012.
    To learn more about multiple IP addresses, visit the Amazon VPC User Guide. For more information about pricing for additional Elastic IP addresses on an instance, please see Amazon EC2 Pricing.
    Sincerely,

    The Amazon EC2 Team

    We hope you enjoyed receiving this message. If you wish to remove yourself from receiving future product announcements and the monthly AWS Newsletter, please update your communication preferences.

    Amazon Web Services LLC is a subsidiary of Amazon.com, Inc. Amazon.com is a registered trademark of Amazon.com, Inc. This message produced and distributed by Amazon Web Services, LLC, 410 Terry Ave. North, Seattle, WA 98109-5210.

    End of AWS message
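For a sense of what using the feature looks like, here is a hedged sketch with today's boto3 SDK (which postdates this 2012 announcement; the underlying EC2 API actions, AssignPrivateIpAddresses and AssociateAddress, are the ones the note describes). The network interface ID and addresses are placeholders, and the closing arithmetic simply applies the announced $0.005/hour price.

```python
# Hypothetical sketch of assigning a secondary private IP to an EC2 instance
# in a VPC and binding an Elastic IP to it, per the announcement above.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ENI_ID = "eni-0123456789abcdef0"  # placeholder elastic network interface

# Add one secondary private IP address to the existing network interface.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId=ENI_ID,
    SecondaryPrivateIpAddressCount=1,
)

# Allocate an Elastic IP and associate it with a specific private IP on the
# interface, e.g. to host a second SSL website on the same instance.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    NetworkInterfaceId=ENI_ID,
    PrivateIpAddress="10.0.0.5",  # placeholder secondary private IP
)

# Announced pricing: each additional associated EIP is $0.005/hour.
additional_eips = 1
monthly_cost = additional_eips * 0.005 * 24 * 30
print(f"Approximate monthly cost: ${monthly_cost:.2f}")  # $3.60 for 30 days
```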

     

    Server and StorageIO industry trends and perspective DAS

Either way you look at it, AWS (disclosure: I'm a paying EC2 and S3 customer) is taking responsibility on their part to do what is needed to enable a resilient, flexible, scalable data infrastructure. What I mean by that is that protecting data and access to it in cloud environments is a shared responsibility, including discussing what went wrong, how to fix and prevent it, as well as communicating best practices. That is, both the provider of the service and those who use its capabilities have to take some ownership and responsibility for how it gets used.

For example, last week major thunderstorms rolled across the U.S., causing large-scale power outages along the eastern seaboard and in particular in the Virginia area, where Amazon's US East-1 region has data centers located. Keep in mind that an AWS region such as US East-1 is made up of multiple availability zones, each a collection of different physical data centers, to decrease the chance of a single point of failure. However, on June 30, 2012, during the major storms on the East coast of the U.S., something did go wrong, and as is usually the case, a chain of events resulted in a near disaster (you can read the AWS post-mortem here).

The result is that AWS services based in the Virginia region were knocked offline for a period, which impacted EC2, Elastic Block Storage (EBS), Relational Database Service (RDS) and Elastic Load Balancer (ELB) capabilities there. This is not the first time the Virginia region has been affected, having had a disruption about a year ago. What was different about this most recent outage is that a year ago, one of the marquee AWS customers, Netflix, was not affected, due to how it uses multiple availability zones for HA. In last week's AWS outage, Netflix customers and services were affected, however not due to loss of data or systems; rather, loss of access (which to a user or consumer is the same thing). The loss of access was due to a failure of elastic load balancing that prevented users from being routed to other availability zones.

    Server and StorageIO industry trends and perspective DAS

Consequently, if you choose to read between the lines of the above email note I received from AWS, you can look at the new service capabilities either as an enhancement or as AWS learning and improving its capabilities. Reading further between the lines, you can also see how some environments such as Netflix take responsibility for how they use cloud services, designing for availability, resiliency and scale with stability, as opposed to simply using the cloud as a cost-cutting tool.
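As a toy illustration of that design-for-availability mindset (not how Netflix actually does it), a consumer can avoid depending on a single path by probing service endpoints in more than one zone or region; the endpoint URLs below are hypothetical.

```python
# Hedged sketch: client-side failover across zone/region endpoints, so an
# outage in one availability zone does not equal loss of access. URLs are
# invented for illustration.
import urllib.request

ENDPOINTS = [
    "https://service-us-east-1.example.com/health",
    "https://service-us-west-2.example.com/health",
]

def first_healthy(endpoints):
    """Return the first endpoint that responds OK, skipping failed zones."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue  # that zone or path is unavailable; try the next one
    raise RuntimeError("no endpoint reachable in any zone")

# Usage: route requests to whichever zone is currently answering.
# active = first_healthy(ENDPOINTS)
```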

Thus, when both the provider and the consumer take some responsibility for ensuring data protection and accessibility to services, there is less chance of service disruptions. Likewise, when both parties learn from incidents or mistakes and leverage experience, it makes for a more robust solution on a go-forward basis. For those who have been around the block (or file) a few times and think clouds are not reliable or are still immature, you may have a point; however, think back to when your favorite or preferred platform (e.g. mainframe, mini, PC, client-server, iProduct, web or other) initially appeared and its teething problems or associated headaches.

IMHO, AWS along with other vendors or service providers who take responsibility to publish post-mortems of incidents, find and fix issues, and address and enhance capabilities are part of the solution, laying the groundwork for the future vs. simply playing to a near-term trend theme. Likewise, vendors and service providers who reach out and help educate their customers to take some responsibility for how they use services, removing complexity (and cost) to enhance services as opposed to simply cutting cost and introducing risk, will do better over the long run.

As I discuss in my book Cloud and Virtual Data Storage Networking (CRC Press), do not be scared of clouds; however, be ready, do your homework, and learn and understand what needs to be done or done differently. This means taking shared responsibility, one the service provider should also take with you, not to mention identifying new best practices and tools to be used, along with conducting proofs of concept (POCs) to learn what to do and what not to do.

    Some related information:
    Only you can prevent cloud data loss
    The blame game: Does cloud storage result in data loss?
    Cloud conversations: Loss of data access vs. data loss
    Clouds are like Electricity: Dont be Scared
    AWS (Amazon) storage gateway, first, second and third impressions
    Poll: What Do You Think of IT Clouds? (Cast your vote and see results)

    Ok, nuff said for now.

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    EMCworld 2012: Trust and marketing, can they coexist?

    Storage I/O Industry Trends and Perspectives

Recently while at EMCworld in Las Vegas (thanks btw to EMC, who covered coach airfare and three nights of hotel), I had the opportunity, along with a group of other industry analysts and advisors, to have a series of small group meeting sessions with key EMC leadership.

    EMC world

These sessions included time with Chairman of the Board of Directors and Chief Executive Officer Joe Tucci; Chairman of VCE Michael Capellas (who is also on the Cisco Board of Directors); President and Chief Operating Officer, EMC Information Infrastructure and Cloud Services Howard Elias; President and Chief Operating Officer, EMC Information Infrastructure Products Pat Gelsinger; and Executive Vice President and Chief Marketing Officer (CMO) Jeremy Burton.

Joe Tucci is always fun to listen to and engage with in small groups, conveying a cordial confidence when you meet face to face. Howard Elias, who is now heading up the services business, talked about walking the talk with services and public and private cloud, including what EMC is doing internally. Michael Capellas had some good insight into what he is doing with VCE, along with his role on the Cisco BOD. Pat Gelsinger had some interesting points, however seemed a bit more reserved than in earlier sessions. Jeremy Burton, who is normally associated with EMC's effective marketing and movie-themed campaigns, did not use any backdrops, visual aids, theatrics or Vegas-style entertainment during his session.

Of the above-mentioned executives, the one who impressed me the most, and in talking with other analysts/advisors who shared similar perspectives, was Jeremy Burton. I have seen and heard him talk before in live and virtual venues, along with what he is doing to focus EMC messaging and themes.

A common comment and theme in talking with other analysts and advisors was that in five minutes, Jeremy did more to advance, clarify, articulate and explain who EMC is, what they are doing now, and where they are headed in the future.

    Image courtesy of EMC.com

Trust was one of the themes of the EMCworld event as it pertains to collaborating with vendors and service providers as well as consultants, advisors and others. Trust is also important for going to the cloud on a public or private basis. It is easy to talk about trust; however, it is also something that is earned and is important to maintain and protect. Given some of the stigma associated with marketing and sales, trust too often becomes a punch line or a term tossed around with skepticism, cynicism or empty promises. The reason I bring trust up in this discussion is that in Jeremy's interaction with those in the room, whether others realized it or not, he was planting the seeds and establishing the basis for trust.

Does that mean there is now automatic trust in anything that EMC or their marketing organization says, or more so than what is heard from other organizations? Perhaps some will automatically take what is heard and go with that as gospel; however, they may be doing that already. Others who are skeptical by default and do their homework, analysis, research and other related tasks may be more likely to give the benefit of the doubt vs. automatically questioning everything, looking for multiple confirmations and added fact checking.

As for me, I generally take what any vendor or their pundits say with a grain of salt, giving the benefit of the doubt where applicable unless trust has been previously impacted. In the case of EMC, I generally take what they say with a grain of salt. However, a level of trust and confidence can make validating what they say easier than with others. This is in part due to knowing where to go internally for details and information, including NDA-based material, and the good job their analyst relations team and other groups do in building and keeping up relationships.

Does this mean I like EMC more or less than other vendors? It means there is a level of trust, communication, relationship, contact, interaction and access to resources with EMC that might be more or less than with other vendors. Disclosure: EMC, along with some companies they have acquired, have been past clients.

    Now back to Jeremy.

What impressed me the most was that while other executives were engaging to different degrees, when I asked Jeremy how he and EMC balance entertainment (videos and movies, theatrics), education (expanding knowledge of EMC solutions, technology advancement) and being engaging (not just sales calls, social media, golfing or other in-person activities) to drive business economics, his response included all three of those aspects.

    Storage I/O Industry Trends and Perspectives

Ok, I know, some of you will be saying that it is the job and role of a marketing person to be an effective communicator, with which I would agree; however, why don't more marketers do a more effective job of what they do?

In other words, Jeremy educated by sharing what they are doing and why; he engaged with the entire audience while answering my question rather than responding to me alone; and he entertained with some of his answers while keeping them to the point, not rambling on. Afterwards I had a few minutes to talk one on one with Jeremy without the handlers or others, and I can say it was refreshing; unlike what is too often the case with marketers, there is trust.

    That does not mean I will take anything verbatim or follow the scripts or other things the truth squads want preached or handed out from EMC, Jeremy or any other vendor for that matter.

I can say that in those few minutes up close and in a smaller setting, EMC showed it has a secret weapon who can do more to build and convey trust, and that is Jeremy Burton; hope I am not wrong ;).

    Ok, nuff said for now

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Only you can prevent cloud data loss

    Storage I/O trends

Some of you might remember the saying from Smokey the Bear, only you can prevent forest fires; for those who do not know about it, click on the image below.

The reason I bring this up is that while cloud providers are responsible (see the cloud blame game), it is also up to the user or consumer to take some ownership and responsibility.

It is similar to vendor lock-in: the only one who can allow vendor lock-in is the customer, granted a vendor can help influence that decision.

    The same theme applies to public clouds and cloud storage providers in that there is responsibility of providers along with government and industry regulations to help protect consumers or users. However, there is also the shared responsibility of the user and consumer to make informed decisions.

    What is your perspective on who is responsible for cloud data protection?

    Ok, nuff said for now

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Part IV: PureSystems, something old, something new, something from big blue

    This is the fourth in a five-part series around the recent IBM PureSystems announcements. You can view the earlier post here, and the next post here.

    So what does this mean for IBM Business Partners (BPs) and ISVs?
What could very well differentiate IBM PureSystems from those of competitors is to take what their partner NetApp has done with FlexPod, combining third-party applications from Microsoft and SAP among others, and take it to the next level. Similar to what helped make EMC Centera a success (or at least sell a lot of them) was including and leveraging third-party ISVs and BPs to add value. Compared to other vendors with object-based or content addressable storage (CAS) or online archive platforms that focused on technology features, functions, speeds and feeds, EMC realized the key was getting ISVs on board so that BPs and their own direct sales force could sell the solution.

With PureSystems, IBM is revisiting what they have done in the past, which is to offer bundled solutions providing incentives for ISVs to support and BPs to sell the IBM brand solution. EMC took an early step by including VMware with their Vblock, combining server, storage, networking and software, with NetApp taking the next step by adding SAP, Microsoft and other applications. Dell, HP, Oracle and others are following suit, so it only makes sense that IBM returns to its roots, leveraging its DNA to reach out and get on board the ISVs who are now partners, have been in the past, or are new opportunities.

IBM is throwing its resources at this, including their innovation centers for training around the world, where business partners can get the knowledge and technical support they need. In other words, workshops or seminars on how to sell, deploy and set up these systems, application and customer testing or proofs of concept, and the things one would expect from IBM for such an initiative. In addition to technology and sales training along with marketing support, IBM is making its financing capabilities available to help customers, as well as offering incentives to business partners to simplify acquisitions.

So what buzzword bingo topics and themes did IBM address with this announcement?
IBM did a fantastic job in terms of knocking the ball out of the park with this announcement pertaining to buzzword bingo, and deserves an atta boy or atta girl!

So what about how this will affect sales of BladeCenter or other systems?
If all IBM and their BPs do is encroach on existing system sales, circling the wagons to protect the installed base, that would be one thing. However, if IBM and their BPs can use the new packaging and model approach to reestablish customers and partnerships, or open and expand into new adjacent markets, then the net difference should be more BladeCenters (excuse me, PureFlex) being sold.

    So what will this cost?
IBM is citing entry PureSystems Express models starting at around $100,000 USD for base systems, with others starting at around $200,000 and $300,000, expandable into larger configurations and budgets. Note that like airlines that advertise a low airfare and then charge extra for peanuts, drinks, extra bag space and changes to reservations, look at these and related systems not just for the starting price but also for expansion costs over different time periods. Contact IBM, your BP or ISV to find out what one of these systems will do for, and cost, you.
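To make the airline-fare point concrete, here is an illustrative-only sketch of entry price vs. multi-year cost with expansion; every figure is invented and the 15% maintenance model is a simplifying assumption, not IBM pricing.

```python
# Hypothetical multi-year cost sketch: entry price plus yearly expansion and
# maintenance. All numbers and the maintenance model are assumptions.
entry_price = 100_000.0
annual_expansion = 40_000.0      # added nodes, storage, licenses per year
maintenance_rate = 0.15          # yearly maintenance on installed spend
years = 3

installed = entry_price          # cumulative hardware/software spend
total = entry_price              # cumulative total cost
for _ in range(years):
    total += installed * maintenance_rate   # maintain what is installed
    installed += annual_expansion           # then grow the configuration
    total += annual_expansion

print(f"Advertised entry price: ${entry_price:,.0f}")
print(f"Total after {years} years: ${total:,.0f}")   # about $283,000 here
```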

    So what about VARs and IBM business partners (BPs)?
This could be a boon for those BPs and ISVs that had previously sold their software solutions bundled with IBM hardware platforms and were being challenged by other converged solution stacks or being forced to unbundle. It will also allow those business partners to compete on par with other converged solutions, or continue selling the pieces they are familiar with, however under a new umbrella. Of course, pricing will be a focus and concern for some, who will want to see what added value exists vs. acquiring the various components separately. This also means that IBM will have to make incentives available for their partners to make a living while also allowing their customers to afford solutions and maximize their return on innovation (the new ROI) and enablement.

    Click here to view the next post in this series, ok nuff said for now.

    Here are some links to learn more:
    Various IBM Redbooks and related content
    The blame game: Does cloud storage result in data loss?
    What do you need when its time to buy a new server?
    2012 industry trends perspectives and commentary (predictions)
    Convergence: People, Processes, Policies and Products
    Buzzword Bingo and Acronym Update V2.011
    The function of XaaS(X) Pick a letter
    Hard product vs. soft product
    Part I: PureSystems, something old, something new, something from big blue
    Part II: PureSystems, something old, something new, something from big blue
    Part III: PureSystems, something old, something new, something from big blue
    Part IV: PureSystems, something old, something new, something from big blue
    Part V: PureSystems, something old, something new, something from big blue
    Cloud and Virtual Data Storage Networking

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved