Azure Stack Technical Preview 3 (TP3) Overview Preview Review


Perhaps you are aware of or use Microsoft Azure, but how about Azure Stack?

This is part one of a two-part series looking at Microsoft Azure Stack providing an overview, preview and review. Read part two here that looks at my experiences installing Microsoft Azure Stack Technical Preview 3 (TP3).

For those who are not aware, Azure Stack is a private on-premises extension of the Azure public cloud environment. Azure Stack is now in Technical Preview 3 (e.g. TP3), or what you might also refer to as a beta (get the bits here).

In addition to being available via download as a preview, Microsoft is also working with vendors such as Cisco, Dell EMC, HPE, Lenovo and others who have announced Azure Stack support. Vendors such as Dell EMC have also made proof of concept kits available that you can buy, including server, storage and software. Microsoft has also indicated that once production versions launch, scaling from a few to many nodes, a single node proof of concept or development system will also remain available.

software defined data infrastructure SDDI and SDDC
Software-Defined Data Infrastructures (SDDI) aka Software-defined Data Centers, Cloud, Virtual and Legacy

Besides being an on-premises, private cloud variant, Azure Stack is also hybrid capable, able to work with the Azure public cloud. In addition, Azure Stack services, and in particular workloads, can also work with traditional Microsoft, Linux and other environments. You can use pre-built solutions from the Azure Marketplace, in addition to developing your applications using Azure services and DevOps tools. Azure Stack enables hybrid deployment into public or private cloud to balance flexibility, control and your needs.

Azure Stack Overview

Microsoft Azure Stack is an on-premises (e.g. in your own data center) private (or hybrid when connected to Azure) cloud platform. Currently Azure Stack is in Technical Preview 3 (e.g. TP3) and available as a proof of concept (POC) download from Microsoft. You can use Azure Stack TP3 as a POC for learning, demonstrating and trying features among other activities. Here is a link to a Microsoft video providing an overview of Azure Stack, and here is a good summary of roadmap, licensing and related items.

In summary, Microsoft Azure Stack is:

  • An onsite, on-premises extension of the Microsoft Azure public cloud in your own data center
  • Enabling private and hybrid cloud with strong integration along with common experiences with Azure
  • Adopt, deploy and leverage cloud on your terms and timeline, choosing what works best for you
  • Common processes, tools, interfaces, management and user experiences
  • Leverage speed of deployment and configuration with a purpose-built integrated solution
  • Support for existing and cloud-native Windows, Linux, container and other services
  • Available as a public preview via software download, as well as vendors offering solutions

What is Azure Stack Technical Preview 3 (TP3)

This version of Azure Stack is a single node running on a lone physical machine (PM) aka bare metal (BM). However, it can also be installed into a virtual machine (VM) using nesting. For example, I have Azure Stack TP3 running nested on a VMware vSphere ESXi 6.5 system with a Windows Server 2016 VM as its base operating system.

Microsoft Azure Stack architecture
Click here or on the above image to view list of VMs and other services (Image via Microsoft.com)

The TP3 POC Azure Stack is not intended for production environments, only for testing, evaluation, learning and demonstrations as part of its terms of use. This version of Azure Stack is a single node deployment whose identity is provided either by Azure Active Directory (AAD) integrated with Azure, or by Active Directory Federation Services (ADFS) for standalone mode. Note that since this is a single server deployment, it is not intended for performance, rather for evaluating functionality, features, APIs and other activities. Learn more about Azure Stack TP3 details here (or click on the image), including the names of the various virtual machines (VMs) as well as their roles.

Where to learn more

The following provide more information and insight about Azure, Azure Stack, Microsoft and Windows among related topics.

  • Azure Stack Technical Preview 3 (TP3) Overview Preview Review
  • Azure Stack TP3 Overview Preview Review Part II
  • Azure Stack Technical Preview (get the bits aka software download here)
  • Azure Stack deployment prerequisites (Microsoft)
  • Microsoft Azure Stack troubleshooting (Microsoft Docs)
  • Azure Stack TP3 refresh tips (Azure Stack)
  • Here is a good post with a tip about not applying certain Windows updates to Azure Stack TP3 installs.
  • Configure Azure stack TP3 to be available on your own network (Azure Stack)
  • Azure Stack TP3 Marketplace syndication (Azure Stack)
  • Azure Stack TP3 deployment experiences (Azure Stack)
  • Frequently asked questions for Azure Stack (Microsoft)
  • Deploy Azure Stack (Microsoft)
  • Connect to Azure Stack (Microsoft)
  • Azure Active Directory (AAD) and Active Directory Federation Services (ADFS)
  • Azure Stack TP2 deployment experiences by Niklas Akerlund (@vNiklas) useful for tips for TP3
  • Deployment Checker for Azure Stack Technical Preview (Microsoft Technet)
  • Azure Stack and other tools (GitHub)
  • How to enable nested virtualization on Hyper-V Windows Server 2016
  • Dell EMC announce Microsoft Hybrid Cloud Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack Data Sheet (Dell EMC PDF)
  • Dell EMC Cloud Chats (Dell EMC Blog)
  • Microsoft Azure Stack forum
  • Dell EMC Microsoft Azure Stack solution
  • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016
  • Overview Review of Microsoft ReFS (Reliable File System) and resource links
  • Via WServerNews.com Cloud (Microsoft Azure) storage considerations
  • Via CloudComputingAdmin.com Cloud Storage Decision Making: Using Microsoft Azure for cloud storage
  • www.thenvmeplace.com, www.thessdplace.com, www.objectstoragecenter.com and www.storageio.com/converge
What this all means

    A common question is whether there is demand for private and hybrid cloud. Some industry pundits have even said private or hybrid clouds are dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is too early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

    Given the large number of Microsoft Windows-based servers on VMware, OpenStack and public cloud services as well as other platforms, along with the continued growing popularity of Azure, a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and whether it is only for Windows guest operating systems. Windows would indeed be an attractive and comfortable option; however, given the large number of Linux-based guests running on Hyper-V as well as Azure public cloud, those are also prime candidates, as are containers and other services.

    Continue reading more in part two of this two-part series here including installing Microsoft Azure Stack TP3.

    Ok, nuff said (for now…).

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    Azure Stack TP3 Overview Preview Review Part II


    This is part two of a two-part series looking at Microsoft Azure Stack with a focus on my experiences installing Microsoft Azure Stack Technical Preview 3 (TP3) including into a nested VMware vSphere ESXi environment. Read part one here that provides a general overview of Azure Stack.

    Azure Stack Review and Install

    Being familiar with the Microsoft Azure public cloud, having used it for a few years now, I wanted to gain some closer insight and experience, and expand my tradecraft on Azure Stack by installing TP3. This is similar to what I have done in the past with OpenStack, Hadoop, Ceph, VMware, Hyper-V and many others, some of which I need to get around to writing about sometime. As a refresher from part one of this series, the following is an image via Microsoft showing the Azure Stack TP3 architecture; click here or on the image to learn more, including the names and functions of the various virtual machines (VMs) that make up Azure Stack.

    Microsoft Azure Stack architecture
    Click here or on the above image to view list of VMs and other services (Image via Microsoft.com)

    What's Involved in Installing Azure Stack TP3?

    The basic steps are as follows:

    • Read this Azure Stack blog post (Azure Stack)
    • Download the bits (e.g. the Azure Stack software) from here, using the Azure Stack Downloader tool.
    • Plan your deployment, making decisions on Active Directory and other items.
    • Prepare the target server (physical machine aka PM, or virtual machine VM) that will be the Azure Stack destination.
    • Copy the Azure Stack software and installer to the target server and run the pre-install scripts.
    • Modify the PowerShell script file if using a VM instead of a PM.
    • Run the Azure Stack CloudBuilder setup; configure unattend.xml if needed or answer prompts.
    • The server reboots; select Azure Stack from the two boot options.
    • Prepare your Azure Stack base system (time, network NIC in static or DHCP mode, VMware Tools if running on VMware).
    • Determine if you will be running with Azure Active Directory (AAD) or standalone Active Directory Federation Services (ADFS).
    • Update any applicable installation scripts (see notes that follow).
    • Run the deployment script, then extend the Azure Stack TP3 PoC as needed.

    Note that this is a large download of about 16GB (23GB with the optional Windows Server 2016 demo ISO).

    Use the Azure Stack Downloader tool to download the bits (about 16GB, or 23GB with the optional Windows Server 2016 base image), which will arrive either as several separate files that you stitch back together with the MicrosoftAzureStackPOC tool, or as a large VHDX file plus a smaller 6.8GB ISO (Windows Server 2016). Prepare your target server system for installation once you have all the software pieces downloaded (or do the preparations while waiting for the download).

    Once you have the software downloaded, if it is a series of eight .bin files (seven of about 2GB each, one around 1.5GB), it is a good idea to verify their checksums, then stitch them together on your target system, or on a staging storage device or file share. Note that for the actual deployment first phase, the large resulting CloudBuilder.vhdx file will need to reside in the C:\ root location of the server where you are installing Azure Stack.
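For repeat installs, the stitch-and-verify step can also be scripted. The following Python sketch is a platform-neutral illustration only: the part file names and checksum shown in the usage comment are placeholders rather than the actual TP3 artifacts, and Microsoft's downloader and POC tools remain the supported path.

```python
import hashlib

def stitch_parts(part_paths, output_path, chunk_size=1 << 20):
    """Concatenate downloaded part files (in order) into one output file,
    returning the SHA256 of the combined stream for checksum verification."""
    sha = hashlib.sha256()
    with open(output_path, "wb") as out:
        for part in part_paths:
            with open(part, "rb") as f:
                while chunk := f.read(chunk_size):
                    sha.update(chunk)
                    out.write(chunk)
    return sha.hexdigest()

# Hypothetical usage; part names and the published checksum are placeholders:
# digest = stitch_parts(["CloudBuilder-1.bin", "CloudBuilder-2.bin"],
#                       "CloudBuilder.vhdx")
# assert digest == PUBLISHED_SHA256
```

Streaming in chunks keeps memory flat even for a 16GB image, and computing the hash during the copy avoids a second pass over the data.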

    server storageio nested azure stack tp3 vmware

    Azure Stack deployment prerequisites (Microsoft) include:

    • At least 12 cores; a dual socket processor if possible
    • As much DRAM as possible (I used 100GB)
    • Put the operating system disk on flash SSD (SAS, SATA, NVMe) if possible; allocate at least 200GB (more is better)
    • Four 140GB or larger drives (HDD or SSD) for data deployment (I went with 250GB)
    • A single NIC or adapter (I put mine into static instead of DHCP mode)
    • Verify your physical or virtual server BIOS has virtualization (VT) enabled
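As a quick pre-flight check, the minimums above can be encoded in a small script. This Python sketch uses the figures from the list above, plus an assumed 96GB RAM floor (flagged in the code), since the guidance above is simply "as much as possible"; treat the thresholds as illustrative rather than authoritative, and check Microsoft's current prerequisites.

```python
# Thresholds from the prerequisites list above; ram_gb is an assumed floor,
# not an official Microsoft figure.
MINIMUMS = {"cores": 12, "ram_gb": 96, "os_disk_gb": 200,
            "data_disks": 4, "data_disk_gb": 140}

def check_prereqs(host):
    """Return (requirement, have, need) tuples for anything under spec."""
    checks = [
        ("cores", host["cores"], MINIMUMS["cores"]),
        ("ram_gb", host["ram_gb"], MINIMUMS["ram_gb"]),
        ("os_disk_gb", host["os_disk_gb"], MINIMUMS["os_disk_gb"]),
        ("data_disks", len(host["data_disk_gb"]), MINIMUMS["data_disks"]),
    ]
    failures = [(name, have, need) for name, have, need in checks if have < need]
    failures += [("data_disk_gb", size, MINIMUMS["data_disk_gb"])
                 for size in host["data_disk_gb"] if size < MINIMUMS["data_disk_gb"]]
    return failures

# The nested-VM configuration described in this post would pass:
# check_prereqs({"cores": 12, "ram_gb": 100, "os_disk_gb": 250,
#                "data_disk_gb": [250, 250, 250, 250]})
```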

    The above image helps to set the story of what is being done. On the left is a bare metal (BM) or physical machine (PM) install of Azure Stack TP3; on the right, a nested VMware (vSphere ESXi 6.5) approach using a hardware version 11 virtual machine (VM). Note that you could also do a nested Hyper-V deployment among other approaches. Common to both a BM and VM install, as shown in the image above, is a staging area (which could be space on your system drive) where the Azure Stack download occurs. If you use a separate staging area, then either copy the individual .bin files and stitch them together into the larger .VHDX, or copy the larger .VHDX; which is better is up to your preference.

    Note that if you use the nested approach, there are a couple of configuration (PowerShell) scripts that need to be updated. These changes trick the installer into thinking that it is on a PM when it checks whether it is in a physical or virtual environment.

    Also note that if using nesting, make sure you have your VMware vSphere ESXi host along with the specific VM properly configured (e.g. that virtualization and other features are presented to the VM). With vSphere ESXi 6.5 and virtual machine hardware version 11, nesting is night and day easier vs. earlier generations.

    Something else to explain here is that you will initially start the Azure Stack install preparation using a standard Windows Server (I used a 2016 version) where the .VHDX is copied into its C:\ root. From there you will execute some PowerShell scripts to set up some configuration files, one of which needs to be modified for nesting.

    Once those prep steps are done, there is a CloudBuilder deploy script to run, which can be driven by an unattend.xml file or manual input. This step adds a dual-boot option to your server, where you can select Azure Stack or your base prep Windows Server instance, followed by a reboot.

    After the reboot occurs and you choose to boot into Azure Stack, this is the server instance that will actually run the deployment script, as well as build and launch all the VMs for the Azure Stack TP3 PoC. This is where I recommend having a rough sketch like the one above to annotate layers as you go, so you remember which layer you are working at. Don't worry, it becomes much easier once all is said and done.

    Speaking of preparing your server, refer to the Microsoft specs; however, in general give the server as much RAM and as many cores as possible. Also, if possible, place the system disk on a flash SSD (SAS, SATA, NVMe) and make sure that it has at least 200GB, however 250GB or even 300GB is better (just in case you need more space).

    Additional configuration tips include allocating four data disks for Azure Stack; if possible make these SSDs as well, however IMHO it is more important to have at least the system disk on fast flash SSD. Another tip is to enable only one network card or NIC and put it into static vs. DHCP address mode to make things easier later.

    Tip: If running nested, vSphere 6.5 worked the smoothest, as I had various issues or inconsistencies with earlier VMware versions, even on hosts whose VMs otherwise ran nested just fine.

    Tip: Why run nested? Simple: I wanted to be able to use VMware tools, take snapshots to go back in time, plus share the server with some other activities until ready to give Azure Stack TP3 its own PM.

    Tip: Do not connect the POC machine to the following subnets (192.168.200.0/24, 192.168.100.0/27, 192.168.101.0/26, 192.168.102.0/24, 192.168.103.0/25, 192.168.104.0/25) as Azure Stack TP3 uses those.
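To avoid a conflict, you can verify a candidate host address against those reserved ranges with a few lines of Python (the subnet list is taken verbatim from the tip above):

```python
import ipaddress

# Subnets Azure Stack TP3 reserves for itself (from the tip above)
RESERVED = [ipaddress.ip_network(n) for n in (
    "192.168.200.0/24", "192.168.100.0/27", "192.168.101.0/26",
    "192.168.102.0/24", "192.168.103.0/25", "192.168.104.0/25")]

def conflicts_with_tp3(address):
    """Return the reserved subnet the address falls into, or None if safe."""
    ip = ipaddress.ip_address(address)
    for net in RESERVED:
        if ip in net:
            return net
    return None

# conflicts_with_tp3("192.168.100.10")  -> conflict (inside 192.168.100.0/27)
# conflicts_with_tp3("10.0.0.50")       -> None (safe)
```

Note the prefix lengths matter: 192.168.100.40, for example, is outside the /27 and therefore safe, while 192.168.100.10 is not.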

    storageio azure stack tp3 vmware configuration

    Since I decided to deploy using a nested VMware VM, there were a few extra steps needed, which I have included as tips and notes. Following is a view via the vSphere client of the ESXi host and VM configuration.

    The following image combines a couple of different things including:

    A: Showing the contents of C:\Azurestack_Supportfiles directory

    B: Modifying the PrepareBootFromVHD.ps1 file if deploying on virtual machine (See tips and notes)

    C: Showing contents of staging area including individual .bin files along with large CloudBuilder.vhdx

    D: Running the PowerShell script commands to prepare the PrepareBootFromVHD.ps1 and related items

    preparing azure stack tp3 cloudbuilder for nested vmware deployment

    From PowerShell (administrator):

    # Variables
    $Uri = 'https://raw.githubusercontent.com/Azure/AzureStack-Tools/master/Deployment/'
    $LocalPath = 'c:\AzureStack_SupportFiles'

    # Create folder
    New-Item $LocalPath -Type Directory

    # Download files
    ('BootMenuNoKVM.ps1', 'PrepareBootFromVHD.ps1', 'Unattend.xml', 'unattend_NoKVM.xml') |
        foreach { Invoke-WebRequest ($Uri + $_) -OutFile ($LocalPath + '\' + $_) }

    After you do the above, decide if you will be using an Unattend.xml or manual entry of items for building the Azure Stack deployment server (e.g. a Windows Server). Note that the above PowerShell script creates the C:\AzureStack_SupportFiles folder and downloads the script files for building the cloud image using the previously downloaded Azure Stack CloudBuilder.vhdx (which should be in C:\).
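Before proceeding, it can help to confirm everything is in place. This Python sketch checks for the four support scripts and the stitched VHDX; the folder and file names follow the steps above, so adjust the paths to your environment:

```python
from pathlib import Path

# The support files the earlier PowerShell step downloads
REQUIRED_SCRIPTS = ["BootMenuNoKVM.ps1", "PrepareBootFromVHD.ps1",
                    "Unattend.xml", "unattend_NoKVM.xml"]

def missing_files(support_dir, vhdx_path):
    """Return a list of anything the next deployment step still needs."""
    support = Path(support_dir)
    missing = [name for name in REQUIRED_SCRIPTS if not (support / name).exists()]
    if not Path(vhdx_path).exists():
        missing.append(str(vhdx_path))
    return missing

# Hypothetical usage on the deployment host:
# missing_files(r"C:\AzureStack_SupportFiles", r"C:\CloudBuilder.vhdx")
```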

    A note and tip: if you are doing a VMware or virtual machine based deployment of the TP3 PoC, you will need to change PrepareBootFromVHD.ps1 in the Azure Stack support files folder. Here is a good resource on what gets changed, via GitHub, showing an edit on or about line 87 of PrepareBootFromVHD.ps1. If you run the PrepareBootFromVHD.ps1 script on a virtual machine you will get an error message; the fix is relatively easy (after I found this post).

    Look in PrepareBootFromVHD.ps1 for something like the following around line 87:

    if ((Get-Disk | where {$_.IsBoot -eq $true}).Model -match 'Virtual Disk') {
        Write-Host "The server is currently already booted from a virtual hard disk, to boot the server from the CloudBuilder.vhdx you will need to run this script on an Operating System that is installed on the physical disk of this server."
        Exit
    }

    You can either remove the "Exit" command, or change the test for "Virtual Disk" to something like "X"; for fun I did both (and it worked).

    Note that you only have to make the above change (and another one in a later step) if you are deploying Azure Stack TP3 as a virtual machine.

    Once you are ready, go ahead and launch the PrepareBootFromVHD.ps1 script which will set the BCDBoot entry (more info here).

    azure stack tp3 cloudbuilder nested vmware deployment

    You will see a reboot and install; this is installing what will be called the physical instance. Note that this is really being installed on the VM system drive as a secondary boot option (e.g. Azure Stack).

    azure stack tp3 dual boot option

    After the reboot, log in to the new Azure Stack base system and complete any configuration, including adding VMware Tools if nested on VMware. Some other things to do include making sure your single network adapter is set to static (makes things easier), plus any other updates or customizations. Before you run the next steps, you need to decide if you are going to use Azure Active Directory (AAD) or local ADFS.

    Note that if you are not running on a virtual machine, simply open a PowerShell (administrator) session, and run the deploy script. Refer to here for more guidance on the various options available including discussion on using AAD or ADFS.

    Note that if you run the deployment script on a virtual machine, you will get an error, which is addressed in the next section; otherwise, sit back and watch the progress.

    CloudBuilder Deployment Time

    Once you have your Azure Stack deployment system and environment ready, including a snapshot if on a virtual machine, launch the PowerShell deployment script. Note that you will need to have decided whether to deploy with Azure Active Directory (AAD) or Active Directory Federation Services (ADFS) for standalone aka submarine mode. There are also other options you can select as part of the deployment, discussed in the Azure Stack tips here (a must read) and here. I chose to do a submarine mode (e.g. not connected to public Azure and AAD) deployment.

    From PowerShell (administrator):

    cd C:\CloudDeployment\Setup
    $adminpass = ConvertTo-SecureString "youradminpass" -AsPlainText -Force
    .\InstallAzureStackPOC.ps1 -AdminPassword $adminpass -UseADFS

    Deploying on VMware Virtual Machines Tips

    Here is a good tip via Gareth Jones (@garethjones294) that I found useful for updating one of the deployment script files (BareMetal_Tests.ps1, located in the C:\CloudDeployment\Roles\PhysicalMachines\Tests folder) so that it skips the bare metal (PM) vs. VM tests. Another good resource, even though it is for TP2 and earlier versions of VMware, is TP2 deployment experiences by Niklas Akerlund (@vNiklas).

    Note that this is a bit of a chicken-and-egg scenario unless you are proficient at digging into script files, since the BareMetal_Tests.ps1 file does not get unpacked until you run the CloudBuilder deployment script. If you run the script and get an error, make the changes below, then rerun the script as noted. Once you make the modification to the BareMetal_Tests.ps1 file, keep a copy in a safe place for future use.

    Here are some more tips for deploying Azure Stack on VMware.

    Per the tip mentioned above via Gareth Jones (tip: read Gareth's post vs. simply cutting and pasting the following, which is more of a guide):

    • Open the BareMetal_Tests.ps1 file in PowerShell ISE and navigate to line 376 (or in that area).
    • Change $false to $true, which will stop the script failing when checking to see if Azure Stack is running inside a VM.
    • Next go to line 453 and change the last part of the line to read "Should Not BeLessThan 0". This will stop the script checking for the required number of cores available.
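If you expect to redeploy more than once, the two edits can be applied to a copy of the file programmatically instead of by hand in the ISE. The string patterns in this Python sketch are hypothetical stand-ins; the exact contents of BareMetal_Tests.ps1 vary by build, so verify them against Gareth's post and your own copy of the file before relying on this:

```python
def patch_baremetal_tests(text):
    """Apply the two nested-VM workarounds to BareMetal_Tests.ps1 content.

    Illustrative only: the match strings below are assumptions, not the
    literal file contents; check lines ~376 and ~453 of your copy.
    """
    patched = []
    for line in text.splitlines():
        # Workaround 1: let the VM check pass instead of failing on a VM.
        if "IsVirtualMachine" in line:
            line = line.replace("$false", "$true")
        # Workaround 2: relax the physical core count assertion.
        if "BeLessThan" in line and "core" in line.lower():
            line = line.split("Should")[0] + "Should Not BeLessThan 0"
        patched.append(line)
    return "\n".join(patched)
```

Keeping the patched file (as noted above) means a rerun only needs a file copy rather than repeating the edits.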

    After you make the above correction, as with any error (and fix) during Azure Stack TP3 PoC deployment, simply run the following.

    cd C:\CloudDeployment\Setup
    .\InstallAzureStackPOC.ps1 -rerun
    

    Refer to the extra links in the where to learn more section below, which offer various tips, tricks and insight that I found useful, particularly for deploying on VMware aka nested. Also in the links below are tips on general Azure Stack, TP2, TP3, adding services and other insight.

    starting azure stack tp3 deployment

    Tip: If you are deploying Azure Stack TP3 PoC on a virtual machine, once you start the script above, copy the modified BareMetal_Tests.ps1 file to a safe place for reuse on any rerun.

    Once the CloudBuilder deployment starts, sit back and wait; if you are using SSDs it will take a while, and if using HDDs it will take a long while (up to hours). However, check in on it now and then to see progress and whether there are any errors. Note that some of the common errors will occur very early in the deployment, such as the BareMetal_Tests.ps1 one mentioned above.

    azure stack tp3 deployment finished

    Check in periodically to see how the deployment is progressing, as well as what is occurring. If you have the time, watch some of the scripts, as you can see some interesting things such as the software-defined data center (SDDC) aka software-defined data infrastructure (SDDI) aka Azure Stack virtual environment being created. This includes virtual machine creation and population, creating the software-defined storage using Storage Spaces Direct (S2D), virtual networking, and Active Directory along with domain controllers, among other activity.

    azure stack tp3 deployment progress

    After Azure Stack Deployment Completes

    After you see the deployment complete, you can try accessing the management portal; however, there may be some background processing still running. Here is a good tip post from Microsoft on connecting to Azure Stack using Remote Desktop (RDP) access. Use RDP from the Azure Stack deployment Windows Server and connect to the virtual machine named MAS-CON01, launch Server Manager, and for Local Server disable Internet Explorer Enhanced Security (make sure you are on the right system, see the tip mentioned above). Disconnect from MAS-CON01 (refer to the Azure Stack architecture image above), then reconnect and launch Internet Explorer with the portal URL (note: the URL the documentation said to use did not work for me).

    Note that the username for the Azure Stack system is AzureStack\AzureStackAdmin, with the password you set for the administrator during setup. If you get an error, verify the URLs, check your network connectivity, wait a few minutes, as well as verify what server you are trying to connect from and to. Keep in mind that even if deploying on a PM or BM (e.g. a non-virtual server), the Azure Stack TP3 PoC deployment creates a "virtual" software-defined environment with servers, storage (Azure Stack uses Storage Spaces Direct [S2D]) and software-defined networking.

    accessing azure stack tp3 management portal dashboard

    Once you are able to connect to Azure Stack, you can add new services, including virtual machine image instances such as Windows (use the Server 2016 ISO that is part of the Azure Stack downloads), Linux or others. You can also go to these Microsoft resources for some first learning scenarios, using the management portals, configuring PowerShell and troubleshooting.

    Where to learn more

    The following provide more information and insight about Azure, Azure Stack, Microsoft and Windows among related topics.

  • Azure Stack Technical Preview 3 (TP3) Overview Preview Review
  • Azure Stack TP3 Overview Preview Review Part II
  • Azure Stack Technical Preview (get the bits aka software download here)
  • Azure Stack deployment prerequisites (Microsoft)
  • Microsoft Azure Stack troubleshooting (Microsoft Docs)
  • Azure Stack TP3 refresh tips (Azure Stack)
  • Here is a good post with a tip about not applying certain Windows updates to Azure Stack TP3 installs.
  • Configure Azure Stack TP3 to be available on your own network (Azure Stack)
  • Azure Stack TP3 Marketplace syndication (Azure Stack)
  • Azure Stack TP3 deployment experiences (Azure Stack)
  • Frequently asked questions for Azure Stack (Microsoft)
  • Azure Active Directory (AAD) and Active Directory Federation Services (ADFS)
  • Deploy Azure Stack (Microsoft)
  • Connect to Azure Stack (Microsoft)
  • Azure Stack TP2 deployment experiences by Niklas Akerlund (@vNiklas) useful for tips for TP3
  • Deployment Checker for Azure Stack Technical Preview (Microsoft Technet)
  • Azure Stack and other tools (GitHub)
  • How to enable nested virtualization on Hyper-V Windows Server 2016
  • Dell EMC announce Microsoft Hybrid Cloud Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack Data Sheet (Dell EMC PDF)
  • Dell EMC Cloud Chats (Dell EMC Blog)
  • Microsoft Azure Stack forum
  • Dell EMC Microsoft Azure Stack solution
  • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016
  • Overview Review of Microsoft ReFS (Reliable File System) and resource links
  • Via WServerNews.com Cloud (Microsoft Azure) storage considerations
  • Via CloudComputingAdmin.com Cloud Storage Decision Making: Using Microsoft Azure for cloud storage
  • www.thenvmeplace.com, www.thessdplace.com, www.objectstoragecenter.com and www.storageio.com/converge
    What this all means

    A common question is whether there is demand for private and hybrid cloud. Some industry pundits have even said private or hybrid clouds are dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is too early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

    Given the large number of Microsoft Windows-based servers on VMware, OpenStack and public cloud services as well as other platforms, along with the continued growing popularity of Azure, a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and whether it is only for Windows guest operating systems. Windows would indeed be an attractive and comfortable option; however, given the large number of Linux-based guests running on Hyper-V as well as Azure public cloud, those are also prime candidates, as are containers and other services.

    software defined data infrastructures SDDI and SDDC

    Some will say that if OpenStack, being free open source, is struggling in many organizations, how can Microsoft have success with Azure Stack? The answer could be that while some organizations have struggled with OpenStack, others have not, due in part to the availability (or lack) of commercial services and turnkey support. Having installed both OpenStack and Azure Stack (as well as VMware among others), Azure Stack, at least the TP3 PoC, is easy to install, granted it is limited to one node, unlike the production versions. Likewise, there are easy to use appliance versions of OpenStack that are limited in scale, as well as more involved installs that unlock full functionality.

    OpenStack, Azure Stack, VMware and others all have their places, alongside or supporting containers along with other tools. In some cases, those technologies may exist in the same environment supporting different workloads, as well as accessing various public clouds; after all, hybrid is the home run for many if not most legacy IT environments.

    Ok, nuff said (for now…).

    Cheers
    Gs


    Dell EMC Announce Azure Stack Hybrid Cloud Solution


    Dell EMC Azure Stack Hybrid Cloud Solution

    Dell EMC has announced its Microsoft Azure Stack hybrid cloud platform solutions. This announcement builds upon earlier statements of support and intention by Dell EMC to be part of the Microsoft Azure Stack community. For those of you who are not familiar, Azure Stack is an on-premises extension of the Microsoft Azure public cloud.

    What this means is that essentially you can have the Microsoft Azure experience (or a subset of it) in your own data center or data infrastructure, enabling cloud experiences and abilities at your own pace, your own way, with control. Learn more about Microsoft Azure Stack, including my experiences with installing Technical Preview 3 (TP3), here.

    software defined data infrastructures SDDI and SDDC

    What Is Azure Stack

    Microsoft Azure Stack is an on-premises (e.g. in your own data center) private (or hybrid when connected to Azure) cloud platform. Currently Azure Stack is in Technical Preview 3 (e.g. TP3) and available as a proof of concept (POC) download from Microsoft. You can use Azure Stack TP3 as a POC for learning, demonstrating and trying features among other activities. Here is a link to a Microsoft video providing an overview of Azure Stack, and here is a good summary of roadmap, licensing and related items.

    In summary, Microsoft Azure Stack and this announcement is about:

    • An onsite, on-premises, in your data center extension of Microsoft Azure public cloud
    • Enabling private and hybrid cloud with good integration along with shared experiences with Azure
    • Adopt, deploy, leverage cloud on your terms and timeline choosing what works best for you
    • Common processes, tools, interfaces, management and user experiences
    • Leverage speed of deployment and configuration with a purpose-built integrated solution
    • Support existing and cloud-native Windows, Linux, Container and other services
    • Available as a public preview via software download, as well as vendors offering solutions

    What Did Dell EMC Announce

    Dell EMC announced their initial products, platform solutions, and services for Azure Stack. This includes a Proof of Concept (PoC) starter kit (PE R630) for doing evaluations, prototyping, training, development, test, DevOps and other initial activities with Azure Stack. Dell EMC also announced a larger turnkey solution for production deployment, or for large-scale development, test and DevOps activity. The initial production solution scales from 4 to 12 nodes, or from 80 to 336 cores, and includes hardware (server compute, memory, I/O and networking, top of rack (TOR) switches), management, and Azure Stack software along with services. Other aspects of the announcement include initial services in support of Microsoft Azure Stack and Azure cloud offerings.
    Image via Dell EMC

    The announcement builds on joint Dell EMC Microsoft experience, partnerships, technologies and services spanning hardware, software, on site data center and public cloud.
    Image via Dell EMC

    Dell EMC along with Microsoft have engineered a hybrid cloud platform for organizations to modernize their data infrastructures, enabling faster innovation and accelerated deployment of resources. The solution includes hardware (server compute, memory, I/O networking, storage devices), software, services, and support.
    Image via Dell EMC

    The value proposition of Dell EMC hybrid cloud for Microsoft Azure Stack includes a consistent experience for developers and IT data infrastructure professionals: a common experience across Azure public cloud and Azure Stack on-premises in your data center, for private or hybrid. This includes a common portal, PowerShell, DevOps tools, Azure Resource Manager (ARM), Azure Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), cloud infrastructure and associated experiences (management, provisioning, services).
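    As a rough sketch of that common experience, the same ARM REST request shape targets either cloud, with only the management endpoint swapped. The endpoint values, subscription ID and helper function below are illustrative placeholders (the Azure Stack endpoint approximates a TP3 PoC-style name, an assumption), not anything from the announcement:

```python
# Illustrative only: Azure Resource Manager (ARM) exposes the same REST
# surface on Azure public cloud and Azure Stack; a client just points at a
# different management endpoint. The Azure Stack URL below is a
# placeholder-style name (assumption), not an official endpoint.

ARM_ENDPOINTS = {
    "azure_public": "https://management.azure.com",
    "azure_stack": "https://management.local.azurestack.external",
}

def list_resource_groups_url(cloud: str, subscription_id: str,
                             api_version: str = "2016-09-01") -> str:
    """Build the ARM URL for listing resource groups on the chosen cloud."""
    base = ARM_ENDPOINTS[cloud]
    return (f"{base}/subscriptions/{subscription_id}"
            f"/resourcegroups?api-version={api_version}")

# Identical request shape, different cloud:
print(list_resource_groups_url("azure_public", "1111-2222"))
print(list_resource_groups_url("azure_stack", "1111-2222"))
```

    The point is not the specific URLs; rather, it is that tooling written against ARM (portal, PowerShell, DevOps tools) can carry over between public Azure and Azure Stack.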
    Image via Dell EMC

    Secure, protect, preserve and serve applications and VMs hosted on Azure Stack with Dell EMC services along with Microsoft technologies. Dell EMC data protection includes backup and restore, Encryption as a Service, Host Guardian and protected (shielded) VMs, and AD integration among other features.
    Image via Dell EMC

    Dell EMC services for Microsoft Azure Stack include single-contact support to prepare (assessment and planning); deploy (rack integration, delivery and configuration); and extend the platform (applicable migration, integration with Office 365 and other applications, and building new services).
    Image via Dell EMC

    Dell EMC hyper-converged scale-out solutions range from a minimum of 4 x PowerEdge R730XD (total raw specs: 80 cores (4 x 20), 1TB RAM (4 x 256GB), 12.8TB SSD cache, 192TB storage), plus two top of rack (TOR) network switches (Dell EMC) and a 1U management server node. The initial maximum configuration raw specification includes 12 x R730XD (total 336 cores), 6TB memory, 86TB SSD cache and 900TB storage, along with TOR network switches and a management server.

    The above configurations initially enable HCI nodes of small (low) 20 cores, 256GB memory, 5.7TB SSD cache and 40TB storage; mid-size 24 cores, 384GB memory, 11.5TB cache and 60TB storage; and high-capacity 28 cores, 512GB memory, 11.5TB cache and 80TB storage per node.
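    To see how the per-node figures roll up, here is a quick back-of-the-envelope Python sketch. The profile names and helper function are illustrative, and note that straight per-node multiplication will not exactly match some of the packaged cache and storage figures quoted above, so treat the results as estimates:

```python
# Back-of-the-envelope scaling of the published per-node raw specs to
# cluster totals. Simple multiplication only; the quoted packaged
# configurations differ slightly, so treat results as estimates.

NODE_SPECS = {
    # profile: (cores, memory_gb, ssd_cache_tb, storage_tb) per node
    "small": (20, 256, 5.7, 40),
    "mid": (24, 384, 11.5, 60),
    "high": (28, 512, 11.5, 80),
}

def cluster_totals(profile: str, nodes: int) -> dict:
    """Raw cluster totals for `nodes` servers of a given profile."""
    cores, mem_gb, cache_tb, stor_tb = NODE_SPECS[profile]
    return {
        "nodes": nodes,
        "cores": cores * nodes,
        "memory_tb": mem_gb * nodes / 1024,
        "ssd_cache_tb": round(cache_tb * nodes, 1),
        "storage_tb": stor_tb * nodes,
    }

# Minimum (4 nodes) and maximum (12 nodes) scale points:
print(cluster_totals("small", 4))    # 4 x 20 = 80 cores, 1.0 TB memory
print(cluster_totals("high", 12))    # 12 x 28 = 336 cores, 6.0 TB memory
```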
    Image via Dell EMC

    The Dell EMC Evaluator program for Microsoft Azure Stack includes the PE R630 for PoCs, development, test and training environments. The solution combines Microsoft Azure Stack software with a Dell EMC server using an Intel E5-2630 (10 cores, 20 threads / logical processors or LPs) or Intel E5-2650 (12 cores, 24 threads / LPs). Memory is 128GB or 256GB; storage includes flash SSD (2 x 480GB SAS) and HDD (6 x 1TB SAS), plus networking.
    Image via Dell EMC

    Collaborative support provides a single contact between Microsoft and Dell EMC.

    Who Is This For

    This announcement is for any organization that is looking for an on-premises, in your data center, private or hybrid cloud turnkey solution stack. This initial set of announcements can be for those looking to do a proof of concept (PoC), advanced prototyping, development, test or DevOps, or to gain the cloud-like elasticity, ease of use, rapid procurement and other experiences of public cloud, on your own terms and timeline. Naturally, there is a strong affinity and seamless experience for those already using, or planning to use, Azure public cloud for Windows, Linux, Containers and other workloads, applications, and services.

    What Does This Cost

    Check with your Dell EMC representative or partner for exact pricing, which varies by size and configuration. There are also various licensing models to take into consideration if you have Microsoft Enterprise License Agreements (ELAs), which your Dell EMC representative or business partner can address for you. Likewise, being cloud-based, there are also time and usage-based options to explore.

    Where to learn more

    What this all means

    The dust is starting to settle on last fall's Dell EMC integration; both companies have long histories of working with and partnering with Microsoft on legacy, as well as virtual, software-defined data centers (SDDC), software-defined data infrastructures (SDDI), native, and hybrid clouds. Some may view the Dell EMC VMware relationship as the primary focus; however, keep in mind that both Dell and EMC worked with Microsoft long before VMware came into being. Likewise, Microsoft remains one of the most commonly deployed operating systems in VMware-based environments. Granted, Dell EMC have a significant focus on VMware, yet they also sell, service and support many services for Microsoft-based solutions.

    What about Cisco, HPE, Lenovo and others who have yet to announce, or have only discussed, their Microsoft Azure Stack intentions? Good question; until we hear more about what those and others are doing or planning, there is not much more to discuss beyond speculating for now. Another common question is whether there is demand for private and hybrid cloud; in fact, some industry expert pundits have even said private or hybrid is dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

    Given the large number of Microsoft Windows-based servers on VMware, OpenStack and public cloud services as well as other platforms, along with the continued growing popularity of Azure, a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of whether Azure Stack is essentially a replacement for Windows Server or Hyper-V, and whether it is only for Windows guest operating systems. At this point, Windows would indeed be an attractive and comfortable option; however, given the large number of Linux-based guests running on Hyper-V as well as Azure public cloud, those are also prime candidates, as are containers and other services.

    Overall, this is an excellent and exciting move for both Microsoft and its customers: Microsoft extends its public cloud software stack to be deployed within data centers in a hybrid way, something those customers are familiar with doing. This is a good example of hybrid spanning public and private clouds, remote and on-premises, combining the familiarity and control of traditional procurement with the flexibility and elasticity experience of clouds.

    software defined data infrastructures SDDI and SDDC

    Some will say that if OpenStack, being free open source, is struggling in many organizations, how can Microsoft have success with Azure Stack? The answer could be that some organizations have struggled with OpenStack due to a lack of commercial services and turnkey support, while others have not. Having installed both OpenStack and Azure Stack (as well as VMware among others), Azure Stack, at least the TP3 PoC, is easy to install, granted it is limited to one node, unlike the production versions. Likewise, there are easy-to-use appliance versions of OpenStack that are limited in scale, as well as more involved installs that unlock full functionality.

    OpenStack, Azure Stack, VMware and others have their places, alone or supporting containers along with other tools. In some cases, those technologies may exist in the same environment supporting different workloads, as well as accessing various public clouds; after all, hybrid is the home run for many if not most legacy IT environments.

    Overall this is a good announcement from Dell EMC for those who are interested in, or should become more aware of, Microsoft Azure Stack and cloud along with hybrid clouds. Likewise, I look forward to hearing more about solutions from others who will be supporting Azure Stack as well as other hybrid (and virtual private) clouds.

    Ok, nuff said (for now…).

    Cheers
    Gs


    HPE Continues Buying Into Server Storage I/O Data Infrastructures

    Storage I/O Data Infrastructures trends
    Updated 1/16/2018

    HPE expanded its server storage I/O data infrastructure portfolio, announcing an all-cash (e.g. no stock) acquisition of Nimble Storage (NMBL). The cash acquisition for a little over $1B USD amounts to $12.50 USD per Nimble share, double what it had traded at. As a refresher, or overview, Nimble is an all-flash shared storage system leveraging NAND flash solid state device (SSD) performance. Note that Nimble also partners with Cisco and Lenovo, whose platforms compete with HPE servers for converged systems.

    Earlier this year (keep in mind it's only mid-March) HPE also announced the acquisition of server storage Hyper-Converged Infrastructure (HCI) vendor SimpliVity (about $650M USD cash). In another investment this year, HPE joined other investors as part of scale-out and software-defined storage startup Hedvig's latest funding round (more on that later). These acquisitions are in addition to smaller ones such as last year's buying of SGI, not to mention various divestitures.

    Data Infrastructures

    What Are Server Storage I/O Data Infrastructures Resources

    Data infrastructures exist to support business, cloud and information technology (IT) among other applications that transform data into information or services. The fundamental role of data infrastructures is to provide a platform environment for applications and data that is resilient, flexible, scalable, agile, efficient as well as cost-effective.

    Technologies that make up data infrastructures include hardware, software, cloud or managed services, servers, storage, I/O and networking along with people, processes, policies along with various tools spanning legacy, software-defined virtual, containers and cloud.

    HPE and Server Storage Acquisitions

    HPE and its predecessor HP (e.g. before the split that resulted in HPE) are no strangers to expanding their data infrastructure portfolio spanning servers, storage, I/O networking, hardware, software and services. These range from Compaq (who acquired DEC, which gave them the StorageWorks brand and product lineup, e.g. recall EVA and its predecessors), LeftHand, 3PAR, IBRIX, PolyServe, Autonomy, EDS and others that I’m guessing some at HPE (along with customers and partners) might not want to remember.

    In addition to their own in-house including via technology acquisition, HPE also partners for its entry-level and volume low-end MSA (Modular Storage Array) series with DotHill who was acquired by Seagate a year or so ago. In addition to the MSA, other HPE OEMs for storage include Hitachi Ltd. (e.g. parent of Hitachi Data Systems aka HDS) reselling their high-end enterprise class storage system as the XP7, as well as various other partner arrangements.

    Keep in mind that HPE has a large server business from low to high-end, spanning towers to dense blades to dual, quad and cluster in box (CiB) configurations with various processor architectures. Some of these servers are used as platforms for not only HPE, also other vendors software defined storage, as well as tin wrapped software solutions, appliances and systems. HPE is also one of a handful of partners working with Microsoft to bring the software defined private (and hybrid) Azure Stack cloud stack as an appliance to market.

    HPE Acquisitions: Deja vu or Something New?

    For some people there may be a sense of deja vu about what HPE and its predecessors have previously acquired, developed, sold and supported in the market over years (and decades in some cases). What will be interesting to see is how the 3PAR (StoreServ) and LeftHand-based (StoreVirtual) as well as ConvergedSystem 250-HC product lines are realigned to make way for Nimble and SimpliVity.

    Likewise, what will HPE do with the MSA at the low-end: continue to leverage it for low-end and high-volume basic storage, similar to Dell with the NetApp/Engenio-powered MD series? Or will HPE try to move Nimble down market to displace the MD series? What about the mid-market: will Nimble be unleashed to replace StoreVirtual (e.g. LeftHand), or will HPE fence it in (e.g. restrict it to certain scenarios)?
    Will the Nimble solution be allowed to move up market into the low-end of where 3PAR has been positioned, perhaps even higher given its all-flash capabilities? Or will there be a 3PAR-everywhere approach?

    Then there is SimpliVity, as the solution is effectively software running on an HPE server (or, with other partners, Cisco and Lenovo servers) along with a PCIe offload card (with SimpliVity data services acceleration). Note that SimpliVity leverages PCIe offload cards for some of its functionality; this too is familiar ground for HPE given 3PAR's ASIC use.

    Simplivity has the potential to disrupt some low to mid-range, perhaps even larger opportunities that are looking to go to a converged infrastructure (CI) or HCI deployment as part of their data infrastructure needs. One can speculate that Simplivity after repackaging will be positioned along current HPE CI and HCI solutions.

    This will be interesting to watch to see if the HPE server and storage groups can converge not only from a technology point, also sales, marketing, service, and support perspective. With the Simplivity solution, HPE has an opportunity to move the industry thinking or perception that HCI is only for small environments defined by what some products can do.

    What I mean by this is that HPE, with its enterprise, SMB, SME and cloud managed service provider experience as well as its servers, can bring hyper-scale-out (and up) converged to the market. In other words, it can start addressing the concern I hear from larger organizations that most CI or HCI solutions (or packaging) are just for smaller environments. HPE has the servers, and it has the storage, from MSAs to other modules and core data infrastructure building blocks, along with the robustness of the SimpliVity software to enable hyper-scale-out CI.

    What about bulk, object, scale-out storage

    HPE has a robust tape business, yes I know tape is dead, however tell that to the customers who keep buying products providing revenue along with margin to HPE (and others). Likewise HPE has VTLs as well as other solutions for addressing bulk data (e.g. big data, backups, protection copies, archives, high volume, and large quantity, what goes on tape or object). For example HPE has the StoreOnce solution.

    However where is the HPE object storage story?

    Does HPE need its own object storage software, or can it simply partner with others? HPE can continue to provide servers along with underlying storage for other vendors' bulk, cloud and object storage systems, and where needed, meet in the channel among other arrangements.

    On the other hand, this is where, similar to PolyServe and IBRIX among others in the past, investments come into play, with HPE via its Pathfinder (investment) group joining others in putting some money into Hedvig. HPE gets access to Hedvig for its scale-out storage, which can be used for bulk as well as other deployments including CI, HCI and CiB (e.g. something to sell HPE servers and storage with).

    HPE can continue to partner with other software providers and software-defined storage stacks. Keep in mind that Milan Shetti (CTO, Data Center Infrastructure Group HPE) is no stranger to these waters given his past at Ibrix among others.

    What About Hedvig

    Time to get back to Hedvig which is a storage startup whose software can run on various server storage platforms, as well as in different topologies. Different topologies include in a CI or HCI, Cloud, as well as scale out with various access including block, file and object. In addition to block, file and object access, Hedvig has interesting management tools, data services, along with support for VMware, Docker, and OpenStack among others.

    Recently Hedvig landed another $21.5M USD in funding, bringing their total to about $52M USD. HPE, via its investment arm, joins other investors (note HPE was part of the $21.5M round; that was not the amount they invested) including Vertex, Atlantic Bridge, Redpoint, EDBI and True Ventures.

    What does this mean for HPE and Hedvig among others? Tough to say; however, it is easy to imagine how Hedvig could be leveraged as a partner using HPE servers, as well as giving HPE an addition to its bulk, scale-out, cloud and object storage portfolio.

    Where to Learn More

    View more material on HPE, data infrastructure and related topics with the following links.

  • Cloud and Object storage are in your future, what are some questions?
  • PCIe Server Storage I/O Network Fundamentals
  • If NVMe is the answer, what are the questions?
  • Fixing the Microsoft Windows 10 1709 post upgrade restart loop
  • Data Infrastructure server storage I/O network Recommended Reading
  • Introducing Windows Subsystem for Linux WSL Overview
  • IT transformation Serverless Life Beyond DevOps with New York Times CTO Nick Rockwell Podcast
  • HPE Announces AMD Powered Gen 10 ProLiant DL385 For Software Defined Workloads
  • AWS Announces New S3 Cloud Storage Security Encryption Features
  • NVM Non Volatile Memory Express NVMe Place
  • Data Infrastructure Primer and Overview (Its Whats Inside The Data Center)
  • January 2017 Server StorageIO Update Newsletter
  • September and October 2016 Server StorageIO Update Newsletter
  • HP Buys one of the seven networking dwarfs and gets a bargain
  • Did HP respond to EMC and Cisco VCE with Microsoft Hyper-V bundle?
  • Give HP storage some love and short strokin
  • While HP and Dell make counter bids, exclusive interview with 3PAR CEO David Scott
  • Data Protection Fundamental Topics Tools Techniques Technologies Tips
  • Hewlett-Packard beats Dell, pays $2.35 billion for 3PAR
  • HP Moonshot 1500 software defined capable compute servers
  • What Does Converged (CI) and Hyper converged (HCI) Mean to Storage I/O?
  • What’s a data infrastructure?
  • Ensure your data infrastructure remains available and resilient
  • Object Storage Center, The SSD place and The NVMe place
  • Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What this all means

    Generally speaking I think this is a good series of moves for HPE (and their customers) as long as they can execute in all dimensions.

    Let’s see how they execute, and by this, I mean more than simply executing or terminating staff from recent or earlier acquisitions. How will HPE craft a go-to-market message that leverages the portfolio to compete and hold or take share from other vendors, vs. cannibalizing across its own lines (e.g. revenue prevention)? With that strategy and message, how will HPE assure existing customers that they will be taken care of and given a definite upgrade and migration path, vs. giving them a reason to go elsewhere?

    Hopefully HPE unleashes the full potential of SimpliVity and Nimble along with 3PAR and the XP7 where needed, along with the MSA at the low-end (or as part of volume scale-out with servers for software-defined), not to mention the server portfolio. For now, this tells me that HPE is still interested in maintaining and expanding its data infrastructure business vs. simply retrenching and selling off assets. Thus it looks like HPE is interested in continuing to invest in data infrastructure technologies, including buying into server, storage I/O networking, hardware and software solutions, while not simply clinging to what it already has, or previously bought.

    Everything is not the same in data centers and across data infrastructures, so why have a one-size-fits-all approach for an organization as large and diverse as HPE?

    Congratulations and best wishes to the folks at Hedvig, Nimble, Simplivity.

    Now, let's see how this all plays out.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Supermicro CSE-M14TQC Use your media bay to add 12 Gbps SAS SSD drives to your server

    Storage I/O trends

    Supermicro CSE-M14TQC Use your media bay to add 12 Gbps SAS SSD drives to your server

    Do you have a computer server, workstation or mini-tower PC that needs to have more 2.5" form factor hard disk drive (HDD), solid state device (SSD) or hybrid flash drives added yet no expansion space?

    Do you also want or need the HDD or SSD drive expansion slots to be hot swappable, 6 Gbps SATA3 along with up to 12 Gbps SAS devices?

    Do you have an available 5.25" media bay slot (e.g. where you can add an optional CD or DVD drive), or can you remove your existing CD or DVD drive and use USB for software loading instead?

    Do you need to carry out the above without swapping out your existing server or workstation on a reasonable budget, say around $100 USD plus tax, handling, shipping (your prices may vary)?

    If you need to implement the above, then here is a possible solution, or in my case, a real solution.

    Via StorageIOblog Supermicro 4 x 2.5 12Gbps SAS enclosure CSE-M14TQC
    Supermicro CSE-M14TQC with hot swap canister before installing in one of my servers

    In the past I have used a solution from Startech that supports up to 4 x 2.5" 6 Gbps SAS and SATA drives in a 5.25" media bay form factor installing these in my various HP, Dell and Lenovo servers to increase internal storage bays (slots).

    Via Amazon.com StarTech SAS and SATA expansion
    Via Amazon.com StarTech 4 x 2.5" SAS and SATA internal enclosure

    I still use the StarTech device shown (read earlier reviews and experiences here, here and here) above in some of my servers which continue to be great for 6Gbps SAS and SATA 2.5" HDDs and SSDs. However for 12 Gbps SAS devices, I have used other approaches including external 12 Gbps SAS enclosures.

    Recently while talking with the folks over at Servers Direct, I mentioned how I was using StarTech 4 x 2.5" 6Gbps SAS/SATA media bay enclosure as a means of boosting the number of internal drives that could be put into some smaller servers. The Servers Direct folks told me about the Supermicro CSE-M14TQC which after doing some research, I decided to buy one to complement the StarTech 6Gbps enclosures, as well as external 12 Gbps SAS enclosures or other internal options.

    What is the Supermicro CSE-M14TQC?

    The CSE-M14TQC is a 5.25" form factor enclosure that enables four (4) 2.5" hot swappable (if your adapter and OS support hot swap) 12 Gbps SAS or 6 Gbps SATA devices (HDD and SSD) to fit into the media bay slot normally used by CD/DVD devices in servers or workstations. There is a single Molex male power connector on the rear of the enclosure that can be used to attach to your server's available power using applicable connector adapters. In addition there are four separate drive connectors (e.g. SATA type connectors) that support up to 12 Gbps SAS per drive, which you can attach to your server's motherboard (note SAS devices need a SAS controller), HBA or RAID adapter internal ports.

    Cooling is provided via a rear-mounted 12,500 RPM, 16 cubic feet per minute fan. Each of the four drives is hot swappable (requires operating system or hypervisor support) and contained in a small canister (provided with the enclosure). Drives easily mount to the canister via screws that are also supplied as part of the enclosure kit. There is also a drive activity and failure notification LED for the devices. If you do not have any available SAS or SATA ports on your server's motherboard, you can use an available PCIe slot and add an HBA or RAID card for attaching the CSE-M14TQC to the drives. For example, a 12 Gbps SAS (6 Gbps SATA) Avago/LSI RAID card, or a 6 Gbps SAS/SATA RAID card.

    Via Supermicro CSE-M14TQC rear details (4 x SATA and 1 Molex power connector)

    Via StorageIOblog Supermicro 4 x 2.5 rear view CSE-M14TQC 12Gbps SAS enclosure
    CSE-M14TQC rear view before installation

    Via StorageIOblog Supermicro CSE-M14TQC 12Gbps SAS enclosure cabling
    CSE-M14TQC ready for installation with 4 x SATA (12 Gbps SAS) drive connectors and Molex power connector

    Tip: In the case of the Lenovo TS140 that I initially installed the CSE-M14TQC into, there is not a lot of space for installing the drive connectors or Molex power connector to the enclosure. Instead, attach the cables to the CSE-M14TQC as shown above before installing the enclosure into the media bay slot. Simply attach the connectors as shown and feed them through the media bay opening as you install the CSE-M14TQC enclosure. Then attach the drive connectors to your HBA, RAID card or server motherboard and the power connector to your power source inside the server.

    Note and disclaimer: pay attention to your server manufacturer's power loading and specifications, along with how much power will be used by the HDDs or SSDs to be installed, to avoid electrical power or fire issues due to overloading!

    Via StorageIOblog Supermicro CSE-M14TQC enclosure Lenovo TS140
    CSE-M14TQC installed into Lenovo TS140 empty media bay

    Via StorageIOblog Supermicro CSE-M14TQC drive enclosure Lenovo TS140

    CSE-M14TQC installed with front face plate on Lenovo TS140

    Where to read, watch and learn more

    Storage I/O trends

    What this all means and wrap up

    If you have a server that simply needs some extra storage capacity by adding some 2.5" HDDs, or a performance boost with fast SSDs, yet does not have any more internal drive slots or expansion bays, leverage your media bay. This applies to smaller environments where you might have one or two servers, as well as environments where you want or need to create a scale-out software-defined storage or hyper-converged platform using your own hardware. Another option: if you have a lab or test environment for VMware vSphere ESXi, Windows, Linux, OpenStack or other things, this can be a cost-effective approach to adding both storage space capacity and performance while leveraging newer 12 Gbps SAS technologies.

    For example, create a VMware VSAN cluster using smaller servers such as the Lenovo TS140 or equivalent, where you can install a couple of 6TB or 8TB higher-capacity 3.5" drives in the internal drive bays, then add a couple of 12 Gbps SAS SSDs along with a couple of 2.5" 2TB (or larger) HDDs, a RAID card, and a high-speed networking card. If VMware VSAN is not your thing, how about setting up a Windows Server 2012 R2 failover cluster including Scale-Out File Server (SOFS) with Hyper-V, or perhaps OpenStack or one of many other virtual storage appliances (VSA) or software-defined storage, networking or other solutions? Perhaps you need to deploy more storage for a big data Hadoop-based analytics system, or a cloud or object storage solution? On the other hand, if you simply need to add some storage to your storage, media, gaming or general-purpose server, the CSE-M14TQC can be an option along with other external solutions.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Breaking the VMware ESXi 5.5 ACPI boot loop on Lenovo TD350

    Storage I/O trends

    Breaking the VMware ESXi 5.5 ACPI boot loop on Lenovo TD350

    Do you have a Lenovo TD350, or for that matter many other servers, that when trying to load or run VMware vSphere ESXi 5.5 U2 (or other versions) runs into a boot loop at the “Initializing ACPI” point?

    Lenovo TD350 server

    VMware ACPI boot loop

    The symptoms are that you see ESXi start its boot process, loading drivers and modules (e.g. black screen), then you see the yellow boot screen with “Timer and Scheduler initialized”, and at the “Initializing ACPI” point, ka boom, a boot loop starts (e.g. the above process repeats each time the system boots).

    The fix is actually pretty quick and simple, finding it took a bit of time, trial and error.

    There were of course the usual suspects such as

    • Checking the BIOS and firmware version of the motherboard on the Lenovo TD350 (checked this, however did not upgrade)
    • Making sure that the proper VMware ESXi patches and updates were installed (they were; this was a pre-built image from another working server)
    • Having the latest installation media if this was a new install (tried this as part of troubleshooting to make sure the pre-built image was ok)
    • Removing any conflicting devices (small diversion hint: make sure if you have cloned a working VMware image to an internal drive that it is removed, to avoid same file system UUID errors)
    • Booting into the BIOS, making sure that for the processor VT is enabled, for SATA that AHCI is enabled for any drives as opposed to IDE or RAID, and that boot is set to Legacy vs. Auto (e.g. disable UEFI support), as well as verifying the boot order. Having been in auto mode for UEFI support for some other activity, this was easy to change; however, it was not the magic silver bullet I was looking for.

    Breaking the VMware ACPI boot loop on Lenovo TD350

    After doing some searching and coming up with some interesting and false leads, as well as trying several boots, BIOS configuration changes, and even cloning the good VMware ESXi boot image to an internal drive in case there was a USB boot issue, the solution was rather simple once found (or remembered).

    Lenovo TD350 Basic BIOS settings
    Lenovo TD350 BIOS basic settings

    Lenovo TD350 processor BIOS settings
    Lenovo TD350 processor settings

    Make sure that in your BIOS setup, under the PCIe settings, you disable “Above 4GB decoding".

    Turns out that I had enabled "Above 4GB decoding" for some other things I had done.

    Lenovo TD350 fix VMware ACPO error
    Lenovo TD350 disabling above 4GB decoding on PCIE under advanced settings

    Once I made the above change and pressed F10 to save BIOS settings and boot, VMware ESXi had no issues getting past the ACPI initialization and the boot loop was broken.
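    Pulling the checklist and the fix together, the checks can be sketched as a quick preflight script (illustrative only; the setting names are my own labels, not actual BIOS field names):

```python
# Illustrative preflight check for the ESXi boot-loop troubleshooting steps
# above. The keys are hypothetical labels, not real BIOS field names.

def esxi_boot_preflight(bios: dict) -> list:
    """Return warnings for BIOS settings known to trip up an ESXi boot."""
    warnings = []
    if not bios.get("vt_enabled", False):
        warnings.append("Enable VT for the processor")
    if bios.get("sata_mode", "AHCI") != "AHCI":
        warnings.append("Set SATA mode to AHCI rather than IDE or RAID")
    if bios.get("boot_mode", "Legacy") != "Legacy":
        warnings.append("Set boot mode to Legacy (disable UEFI/Auto)")
    if bios.get("above_4gb_decoding", False):
        warnings.append('Disable "Above 4GB decoding" under PCIe settings')
    return warnings

# Example: the configuration that produced the ACPI boot loop on the TD350
print(esxi_boot_preflight({"vt_enabled": True, "sata_mode": "AHCI",
                           "boot_mode": "Legacy", "above_4gb_decoding": True}))
# → ['Disable "Above 4GB decoding" under PCIe settings']
```

    The point of the sketch is simply that the culprit setting was one most checklists overlook, which is why it is worth encoding alongside the usual suspects.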

    Where to read, watch and learn more

    • Lenovo TS140 Server and Storage I/O lab Review
    • Lenovo ThinkServer TD340 Server and StorageIO lab Review
    • Part II: Lenovo TS140 Server and Storage I/O lab Review
    • Software defined storage on a budget with Lenovo TS140


    What this all means and wrap up

    In this day and age of software defined focus, remember to double-check how your hardware BIOS (e.g. software) is defined for supporting various software defined server, storage, I/O and networking software for cloud, virtual, container and legacy environments. Watch for future posts with my experiences using the Lenovo TD350 including with Windows 2012 R2 (bare metal and virtual), Ubuntu (bare metal and virtual) with various application workloads among other things.

    Ok, nuff said (for now)

    Cheers
    Gs


    Lenovo ThinkServer TD340 StorageIO lab Review


    Lenovo ThinkServer TD340 Server and StorageIO lab Review

    Earlier this year I did a review of the Lenovo ThinkServer TS140 in the StorageIO Labs (see the review here); in fact I ended up buying a TS140 after the review, and a few months back picked up another one. This StorageIOlab review looks at the Lenovo ThinkServer TD340 Tower Server, which besides having a larger model number than the TS140 also has a lot more capabilities (server compute, memory, I/O slots and internal hot-swap storage bays). Pricing varies depending on specific configuration options, however at the time of this post Lenovo was advertising a starting price of $1,509 USD for a specific configuration here. You will need to select different options to determine your specific cost.

    The TD340 is one of the servers that Lenovo has had prior to its acquisition of IBM x86 server business that you can read about here. Note that the Lenovo acquisition of the IBM xSeries business group has begun as of early October 2014 and is expected to be completed across different countries in early 2015. Read more about the IBM xSeries business unit here, here and here.

    The Lenovo TD340 Experience

    Let's start with the overall experience, which was very easy other than deciding what make and model to try. This includes going from first answering some questions to get the process moving, to agreeing to keep the equipment safe, secure and insured, as well as not damaging anything. Part of the process also involved answering some configuration related questions, and shortly thereafter a large box from Lenovo arrived.

    TD340 is ready for use
    TD340 with Keyboard and Mouse (Monitor and keyboard not included)

    One of the reasons I have a photo of the TD340 on a desk is that I initially put it in an office environment similar to what I did with the TS140 as Lenovo claimed it would be quiet enough to do so. I was not surprised and indeed the TD340 is quiet enough to be used where you would normally find a workstation or mini-tower. By being so quiet the TD340 is a good fit for environments that need a server that has to go into an office environment as opposed to a server or networking room.

    Welcome to the TD340
    Lenovo ThinkServer Setup

    TD340 Setup
    Lenovo TD340 as tested in BIOS setup, note the dual Intel Xeon E5-2420 v2 processors

    TD340 as tested

    TD340 Selfie of whats inside
    TD340 "Selfie" with 4 x 8GB DDR3 DIMM (32GB) and PCIe slots (empty)

    TD340 disk drive bays
    TD340 internal drive hot-swap bays

    Speeds and Feeds

    The TD340 that I tested was a Machine type 7087 model 002RUX which included 4 x 16GB DIMMs and both processor sockets occupied.

    You can view the Lenovo TD340 data sheet with more speeds and feeds here, however the following is a summary.

    • Operating systems support include various Windows Servers (2008-2012 R2), SUSE, RHEL, Citrix XenServer and VMware ESXi
    • Form factor is 5U tower with weight starting at 62 pounds depending on how configured
    • Processors include support for up to two (2) Intel Xeon E5-2400 v2 series
    • Memory includes 12 DDR3 DRAM DIMM slots (LV RDIMM and UDIMM) for up to 192GB.
    • Expansion slots vary depending on whether one or two CPU sockets are occupied. With a single CPU socket installed there is 1 x PCIe Gen3 FH/HL x8 mechanical, x4 electrical, 1 x PCIe Gen3 FH/HL x16 mechanical, x16 electrical, and a single PCI 32-bit/33 MHz FH/HL slot. With two CPU sockets installed extra PCIe slots are enabled. These include a single PCIe Gen3 FH/HL x8 mechanical, x4 electrical, a single PCIe Gen3 FH/HL x16 mechanical, x16 electrical, three PCIe Gen3 FH/HL x8 mechanical, x8 electrical, and a single PCI 5V 32-bit/33 MHz FH/HL slot.
    • Two 5.25” media bays for CD, DVD or other devices
    • Integrated ThinkServer RAID (0/1/10/5) with optional RAID adapter models
    • Internal storage varies depending on model, including up to eight (8) x 3.5” hot swap drives or 16 x 2.5” hot swap drives (HDDs or SSDs).
    • Storage space capacity varies by the type and size of the drives being used.
    • Networking interfaces include two (2) x GbE
    • Power supply options include single 625 watt or 800 watt, or 1+1 redundant hot-swap 800 watt, five fixed fans.
    • Management tools include ThinkServer Management Module and diagnostics
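    As a quick illustration of the capacity point in the list above, here is my own back-of-envelope arithmetic (raw capacity before RAID, decimal TB; the drive sizes are example values, not Lenovo figures):

```python
# Raw (pre-RAID) capacity for the TD340 drive-bay options; decimal TB.
# Drive sizes here are illustrative examples, not Lenovo-specified values.

def raw_capacity_tb(bays: int, drive_tb: float) -> float:
    """Total raw capacity across a set of identical drives."""
    return round(bays * drive_tb, 2)

# 8 x 3.5" bays with 4TB HDDs vs. 16 x 2.5" bays with 1.2TB HDDs
print(raw_capacity_tb(8, 4.0))   # 32.0 TB raw
print(raw_capacity_tb(16, 1.2))  # 19.2 TB raw
```

    In other words, fewer large 3.5" drives favor capacity, while more 2.5" spindles favor IOPs, which is the trade-off the two bay configurations represent.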

    What Did I do with the TD340

    After initial check out in an office type environment, I moved the TD340 into the lab area where it joined other servers to be used for various things.

    Some of those activities included using Windows Server 2012 Essentials along with associated admin activities, as well as installing VMware ESXi 5.5.


    What I liked

    Unbelievably quiet, which may not seem like a big deal, however if you are looking to deploy a server or system into a small office workspace, this becomes an important consideration. Otoh, if you are a power user and want a robust server that can be installed into a home media entertainment system, well, this might be a nice to have consideration ;). Speaking of I/O slots, naturally I’m interested in server storage I/O, so having multiple slots is a must have, along with a processor that is multi-core (pretty much standard these days) along with VT and EP for supporting VMware (these were disabled in the BIOS, however that was an easy fix).

    What I did not like

    The only thing I did not like was that I ran into a compatibility issue trying to use an LSI 9300 series 12Gb SAS HBA, which Lenovo is aware of and perhaps has even addressed by now. What I ran into is that the adapters work, however I was not able to get the full performance out of them compared to other systems, including my slower Lenovo TS140s.

    Summary

    Overall I give Lenovo and the TD340 a "B+", which would have been an "A" had I not gotten myself into a BIOS situation, or had I been able to run the 12Gbps SAS PCIe Gen 3 cards at full speed. Likewise the Lenovo service and support also helped to improve the experience. Otoh, if you are simply going to use the TD340 in a normal out of the box mode without customizing it to add your own adapters or install your own operating system or hypervisors (beyond those that are supplied as part of the install setup tool kit), you may have an "A" or "A+" experience with the TD340.

    Would I recommend the TD340 to others? Yes for those who need this type and class of server for Windows, *nix, Hyper-V or VMware environments.

    Would I buy a TD340 for myself? Maybe if that is the size and type of system I need, however I have my eye on something bigger. On the other hand for those who need a good value server for a SMB or ROBO environment with room to grow, the TD340 should be on your shopping list to compare with other solutions.

    Disclosure: Thanks to the folks at Lenovo for sending and making the TD340 available for review and a hands on test experience including covering the cost of shipping both ways (the unit should now be back in your possession). Thus this is not a sponsored post as Lenovo is not paying for this (they did loan the server and covered two-way shipping), nor am I paying them, however I have bought some of their servers in the past for the StorageIOLab environment that are companions to some Dell and HP servers that I have also purchased.

    Ok, nuff said

    Cheers
    Gs


    Lenovo TS140 Server and Storage I/O Review


    This is a review that looks at my recent hands on experiences in using a TS140 (Model MT-M 70A4 – 001RUS) pedestal (aka tower) server that the Lenovo folks sent to me to use for a month or so. The TS140 is one of the servers that Lenovo had prior to its acquisition of IBM x86 server business that you can read about here.

    The Lenovo TS140 Experience

    Let's start with the overall experience, which was very easy and good. This includes going from initially answering some questions to get the process moving, to agreeing to keep the equipment safe, secure and insured, as well as not damaging anything (this was not a tear down and rip it apart into pieces trial).

    Part of the process also involved answering some configuration related questions, and shortly thereafter a large box from Lenovo arrived. Turns out it was a box (server hardware) inside of a Lenovo box, that was inside a slightly larger unmarked shipping box (see larger box in the background).

    TS140 Evaluation Arrives

    TS140 shipment undergoing initial security screen scan and sniff (all was ok)

    TS140 with Windows 2012
    TS140 with Keyboard and Mouse (Monitor not included)

    One of the reasons I have a photo of the TS140 on a desk is that I initially put it in an office environment as Lenovo claimed it would be quiet enough to do so. I was not surprised and indeed the TS140 is quiet enough to be used where you would normally find a workstation or mini-tower. By being so quiet the TS140 is a good fit for environments that need a small or starter server that has to go into an office environment as opposed to a server or networking room. For those who are into mounting servers, there is the option for placing the TS140 on its side into a cabinet or rack.

    Windows 2012 on TS140
    TS140 with Windows Server 2012 Essentials

    TS140 as tested

    TS140 Selfie of whats inside
    TS140 "Selfie" with 4 x 4GB DDR3 DIMM (16GB) and PCIe slots (empty)

    16GB RAM (4 x 4GB DDR3 UDIMM, larger DIMMs are supported)
    Windows Server 2012 Essentials
    Intel Xeon E3-1225 v3 @ 3.2GHz quad-core (C226 chipset and TPM 1.2) vPro/VT/EP capable
    Intel GbE 1217-LM Network connection
    280 watt power supply
    Keyboard and mouse (no monitor)
    Two 7.2K SATA HDDs (WD) configured as RAID 1 (100GB Lun)
    Slot 1 PCIe G3 x16
    Slot 2 PCIe G2 x1
    Slot 3 PCIe G2 x16 (x4 electrical signal)
    Slot 4 PCI (legacy)
    Onboard 6Gbps SATA RAID 0/1/10/5
    Onboard SATA 3.0 (6Gbps) connectors (0-4), USB 3.0 and USB 2.0
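    Since the onboard controller offers RAID 0/1/10/5, a rough usable-capacity comparison can be sketched as follows (my own approximation assuming identical drives; it ignores controller metadata and decimal/binary rounding):

```python
# Approximate usable capacity for the onboard RAID levels (0/1/10/5),
# assuming identical drives; ignores metadata and controller overhead.

def usable_gb(level: str, drives: int, drive_gb: float) -> float:
    if level == "0":
        return drives * drive_gb          # striping, no redundancy
    if level == "1":
        return drive_gb                   # mirror pair (2 drives)
    if level == "10":
        return (drives // 2) * drive_gb   # striped mirrors
    if level == "5":
        return (drives - 1) * drive_gb    # one drive's worth of parity
    raise ValueError(f"unsupported RAID level: {level}")

# e.g. the as-tested configuration: two SATA HDDs mirrored (RAID 1)
print(usable_gb("1", 2, 1000.0))  # 1000.0 GB usable from 2 x 1TB
```

    Note that the as-shipped system carved a 100GB LUN out of that mirrored pair rather than using the full usable capacity.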

    Read more about what I did with the Lenovo TS140 in part II of my review along with what I liked, did not like and general comments here.

    Ok, nuff said (for now)

    Cheers
    Gs


    Part II: What I did with Lenovo TS140 in my Server and Storage I/O Review


    Part II: Lenovo TS140 Server and Storage I/O Review


    This is the second of a two-part post series on my recent experiences with a Lenovo TS140 Server, you can read part I here.

    What Did I do with the TS140

    After initial check out in an office type environment, I moved the TS140 into the lab area where it joined other servers to be used for various things.

    Some of those activities included using Windows Server 2012 Essentials along with associated admin activities. I also installed VMware ESXi 5.5 and ran into a few surprises. One of those was that I needed to apply an update to the VMware drivers to support the onboard Intel NIC, as well as enable the VT and EP virtualization assist modes via the BIOS. The biggest surprise was discovering that I could not install VMware onto an internal drive attached via one of the internal SATA ports, which turns out to be a BIOS firmware issue.

    Lenovo confirmed this when I brought it to their attention, and the workaround is to use USB to install VMware onto a USB flash SSD thumb drive, or other USB attached drive or to use external storage via an adapter. As of this time Lenovo is aware of the VMware issue, however, no date for new BIOS or firmware is available. Speaking of BIOS, I did notice that there was some newer BIOS and firmware available (FBKT70AUS December 2013) than what was installed (FB48A August of 2013). So I went ahead and did this upgrade which was a smooth, quick and easy process. The process included going to the Lenovo site (see resource links below), selecting the applicable download, and then installing it following the directions.

    Since I was going to install various PCIe SAS adapters into the TS140 attached to external SAS and SATA storage, this was not a big issue, more of an inconvenience. Likewise, for using storage mounted internally the workaround is to use a SAS or SATA adapter with internal ports (or a cable). Speaking of USB workarounds, have a HDD, HHDD, SSHD or SSD that is a SATA device and need to attach it to USB? Then get one of these cables. Note that there are USB 3.0 and USB 2.0 cables (see below) available, so choose wisely.

    USB to SATA adapter cable

    In addition to running various VMware-based workloads with different guest VMs, I also ran Futuremark PCMark (btw, if you do not have this in your server storage I/O toolbox, it should be) to gauge the system's performance. As mentioned the TS140 is quiet. However, it also has good performance depending on what processor you select. Note that while the TS140 has a list price as of the time of this post under $400 USD, that will change depending on which processor, amount of memory, software and other options you choose.

    Futuremark PCMark
    PCmark

    PCmark test          Results
    Composite score      2274
    Compute              11530
    System Storage       2429
    Secondary Storage    2428
    Productivity         1682
    Lightweight          2137

    PCmark results are shown above for the Windows Server 2012 system (non-virtualized) configured as shipped and received from Lenovo.

    What I liked

    Unbelievably quiet, which may not seem like a big deal, however if you are looking to deploy a server or system into a small office workspace, this becomes an important consideration. Otoh, if you are a power user and want a robust server that can be installed into a home media entertainment system, well, this might be a nice to have consideration ;).

    Something else that I liked is that the TS140 with the E3-1220 v3 family of processor supports PCIe G3 adapters which are useful if you are going to be using 10GbE cards or 12Gbs SAS and faster cards to move lots of data, support more IOPs or reduce response time latency.

    In addition, while only 4 DIMM slots is not very much, it's more than what some other similarly focused systems have, plus with large capacity DIMMs, you can still get a nice system, or two, or three or four for a cluster at a good price or value (Hmm, VSAN anybody?). Also, while not a big item, the TS140 does not require ordering an HDD or SSD if you are not also ordering software with the system, meaning you can configure a diskless system and supply your own drives.

    Speaking of I/O slots, naturally I’m interested in server storage I/O, so having multiple slots is a must have, along with a processor that is quad-core (pretty much standard these days) along with VT and EP for supporting VMware (these were disabled in the BIOS, however that was an easy fix).

    Then there is the price as of this posting starting at $379 USD which is a bare bones system (e.g. minimal memory, basic processor, no software) whose price increases as you add more items. What I like about this price is that it has the PCIe G3 slot as well as other PCIe G2 slots for expansion meaning I can install 12Gbps (or 6Gbps) SAS storage I/O adapters, or other PCIe cards including SSD, RAID, 10GbE CNA or other cards to meet various needs including software defined storage.

    What I did not like

    I would like to have had at least six vs. four DIMM slots, however keeping in mind the price point of where this system is positioned, not to mention what you could do with it thinking outside of the box, I’m fine with only 4 x DIMM. Space for more internal storage would be nice, however, if that is what you need, then there are the larger Lenovo models to look at. By the way, thinking outside of the box, could you do something like a Hadoop, OpenStack, Object Storage, VMware VSAN or other cluster with these in addition to using as a Windows Server?

    Yup.

    Granted you won’t have as much internal storage, as the TS140 only has two fixed drive slots (for more storage there is the model TD340 among others).

    However it is not that difficult to add more (not Lenovo endorsed) by adding a StarTech enclosure like I did with my other systems (see here). Oh, and those extra PCIe slots, that's where a 12Gbps (or 6Gbps) adapter comes into play while leaving room for GbE cards and PCIe SSD cards. Btw, not sure what to do with that PCIe x1 slot? That's a good place for a dual GbE NIC to add more networking ports, or a SATA adapter for attaching larger capacity slower drives.

    StarTech 2.5" SAS and SATA drive enclosure on Amazon.com
    StarTech 2.5″ SAS SATA drive enclosure via Amazon.com

    If VMware is not a requirement, and you need a good entry level server for a large SOHO or small SMB environment, or, if you are looking to add a flexible server to a lab or for other things the TS140 is good (see disclosure below) and quiet.

    Otoh as mentioned, there is a current issue with the BIOS/firmware with the TS140 involving VMware (tried ESXi 5 & 5.5).

    However I did find a workaround which is that the current TS140 BIOS/Firmware does work with VMware if you install onto a USB drive, and then use external SAS, SATA or other accessible storage which is how I ended up using it.

    Lenovo TS140 resources include

  • TS140 Lenovo ordering website
  • TS140 Data and Spec Sheet (PDF here)
  • Lenovo ThinkServer TS140 Manual (PDF here)
  • Intel E3-1200 v3 processors capabilities (Web page here)
  • Lenovo Drivers and Software (Web page here)
  • Lenovo BIOS and Drivers (Web page here)
  • Enabling Virtualization Technology (VT) in TS140 BIOS (Press F1) (Read here)
  • Enabling Intel NIC (82579LM) GbE with VMware (Link to user forum and a blog site here)
  • My experience from a couple years ago dealing with Lenovo support for a laptop issue
    Summary

    Disclosure: Lenovo loaned the TS140 to me for just under two months including covering shipping costs at no charge (to them or to me) hence this is not a sponsored post or review. On the other hand I have placed an order for a new TS140 similar to the one tested that I bought on-line from Lenovo.

    This new TS140 server that I bought joins the Dell Inspiron I added late last year (read more about that here) as well as other HP and Dell systems.

    Overall I give the Lenovo TS140 a provisional "A", which would be a solid "A" once the BIOS/firmware issue mentioned above is resolved for VMware. Otoh, if you are not concerned about using the TS140 for VMware (or can do a workaround), then consider it an "A".

    As mentioned above, I liked it so much I actually bought one to add to my collection.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, vSAN and VMware vExpert. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Lenovo buys IBM’s xSeries aka x86 server business, what about EMC?


    Lenovo buys IBM’s xSeries x86 server business for $2.3B USD, what about EMC?

    Once again Lenovo is the new owner of some IBM computer technology, this time by acquiring the x86 (e.g. xSeries) server business unit from Big Blue. Today Lenovo announced its plan to acquire the IBM x86 server business unit for $2.3B USD.

    Research Triangle Park, North Carolina, and Armonk, New York – January 23, 2014

    Lenovo (HKSE: 992) (ADR: LNVGY) and IBM (NYSE: IBM) have entered into a definitive agreement in which Lenovo plans to acquire IBM’s x86 server business. This includes System x, BladeCenter and Flex System blade servers and switches, x86-based Flex integrated systems, NeXtScale and iDataPlex servers and associated software, blade networking and maintenance operations. The purchase price is approximately US$2.3 billion, approximately two billion of which will be paid in cash and the balance in Lenovo stock.

    IBM will retain its System z mainframes, Power Systems, Storage Systems, Power-based Flex servers, and PureApplication and PureData appliances.

    Read more here

    If you recall (or didn't know), around a decade or so ago IBM also spun off its laptop (e.g. ThinkPad) and workstation business unit to Lenovo after being one of the early PC players (I still have a model XT in my collection along with a Mac SE and a Newton).

    What this means for IBM?

    What this means is that IBM is selling off a portion of its systems technology group, which is where the servers, storage and related hardware and software technologies report into. Note however that IBM is not selling off its entire server portfolio, only the x86 (e.g. Intel/AMD based) products that make up the xSeries as well as companion blade and related systems. This means that IBM is retaining its Power based systems (and processors) that include the pSeries, iSeries and of course the zSeries mainframes, in addition to the storage hardware/software portfolio.

    However as part of this announcement, Lenovo is also licensing from IBM the Storwize/V7000 technology as well as tape summit resources, GPFS based scale out file systems used in SONAS and related products that are part of solution bundles tied to the x86 business.

    Again to be clear, IBM is not selling off (or at least at this time) Storwize, tape or other technology to Lenovo other than x86 server business. By server business, this means the technology, patents, people, processes, products, sales, marketing, manufacturing, R&D along with other entities that form the business unit, not all that different from when IBM divested the workstation/laptop aka PC business in the past.


    What this means for Lenovo?

    What Lenovo gets is an immediate (once the deal closes) expansion of its server portfolio including high-density systems for cloud and HPC as well as regular enterprise, not to mention SME and SMB. Lenovo also gets blade systems as well as converged systems (server, storage, networking, hardware, software), hence why IBM is also licensing some technology to Lenovo that it is not selling. Lenovo also gets the sales, marketing, design, support and other aspects to expand its server business. By gaining the server business unit, Lenovo will now be in a place to take on Dell (who was also rumored to be in the market for the IBM servers), as well as HP, Oracle and other x86 system based suppliers.

    What about EMC and Lenovo?

    Yes, EMC, that storage company who is also a primary owner of VMware, as well as a partner with Cisco and Intel in the VCE initiatives, not to mention who also entered into a partnership with Lenovo a year or so ago.

    In case you forgot or didn't know, EMC, after breaking up with Dell, entered into a partnership with Lenovo back in 2012.

    This partnership and its initiatives included developing servers that in turn EMC could use for their various storage and data appliances, which continue to leverage x86 type technology. In addition, that agreement found the EMC Iomega brand transitioning over into the Lenovo line-up both for domestic North America as well as internationally, including the Chinese market. Hence I have an older Iomega IX4 that says EMC, and a newer one that says EMC/Lenovo; also note that at CES a few weeks ago, some new Iomega products were announced.

    In checking with Lenovo today, they indicated that it is business as usual and no changes with or to the EMC partnership.

    Via email from Lenovo spokesperson today:

    A key piece to Lenovo’s Enterprise strategy has always included strong partnerships. In fact today’s announcements reinforce that strategy very clearly.

    Given the new scale, footprint and Enterprise credibility that this server acquisition affords Lenovo, we see great opportunity in offering complementary storage offerings to new and existing customers.

    Lenovo’s partnership with EMC is multifaceted and stays intact as an important part of Lenovo’s overall strategy to offer customers compelling solutions built on world-class technology.

    Lenovo will continue to offer Lenovo/EMC NAS products from our joint venture as well as resell EMC stand-alone storage platforms.

    IBM Storwize storage and other products are integral to the in-scope platforms and solutions we acquired. In order to ensure continuity of business and the best customer experience we will partner with IBM for storage products as well.

    We believe this is a great opportunity for all three companies, but most importantly these partnerships are in place and will remain healthy for the benefit for our customers.

    Hence it is my opinion that for now it is business as usual; the IBM x86 business unit has a new home, and those people will be getting new email addresses and business cards, similar to how some of their associates did when the PC group was sold off a few years ago.

    Otoh, there may also be new products that might become opportunities to be placed into the Lenovo EMC partnership, however that is just my speculation at this time. Likewise, while there will be some groups within Lenovo focused on selling the converged Lenovo solutions coming from IBM that may in fact compete with EMC (among others) in some scenarios, that should be no more, and hopefully less, than what IBM has had with their server groups at times competing with themselves.


    What does this mean for Cisco, Dell, HP and others?

    For Cisco, instead of competing with one of its OEMs (e.g. IBM) for networking equipment (note IBM also owns some of its own networking), the server competition shifts to Lenovo, who is also a Cisco partner (it's called coopetition), and perhaps business as usual in many areas. For Dell, in the mid-market space, things could get interesting, and the Round Rock folks need to get creative and look beyond VRTX.

    For HP, this is where IMHO it's going to get really interesting as Lenovo gets things transitioned. Near-term, HP could have a disruptive upper hand, however longer-term, HP has to get its A-game on. Oracle is in the game, as are a bunch of others from Fujitsu to SuperMicro, and outside of North America, in particular China, there is also Huawei. Back to EMC and VCE: while I expect the Cisco partnership to stay, I also see a wild card where EMC can leverage their Lenovo partnership into more markets, while Cisco continues to move into storage and other adjacent areas (e.g. more coopetition).

    What this means now and going forward?

    Thus this is as much about enterprise, SME and SMB as it is HPC, cloud and high-density, where the game is about volume. Likewise there is also the convergence or data infrastructure angle combining server, storage and networking hardware, software and services.

    One of the things I have noticed about Lenovo as a customer using ThinkPads for over 13 years now (not the same one) is that while they are affordable, instead of simply cutting cost and quality, they seem to have found ways to remove cost, which is different than simply cutting to go cheap.

    Case in point, about a year and a half ago I dropped my iPhone on my Lenovo X1 keyboard, which is back-lit, and broke a key. After trying to find a replacement key on the web, I called Lenovo; they said no worries, and the next morning a new keyboard for the laptop was on my doorstep by 10:30 AM with instructions on how to remove the old one, put in the new one, and do the RMA, no questions asked (read more about this here).

    The reason I mention that story about my X1 laptop is that it ties to what I’m curious and watching with their soon to be expanded new server business.

    Will they go in and simply look to reduce cost by making cuts from design to manufacturing to part quality, service and support, or, find ways to remove complexity and cost while providing more value?

    Now I wonder whose technology will join my HP and Dell systems to fill some empty rack space in the not so distant future to support growth?

    Time will tell. Congratulations to Lenovo and the IBMers who now have a new home; best wishes.

    Ok, nuff said

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2014 StorageIO and UnlimitedIO All Rights Reserved

    What does new EMC and Lenovo partnership mean?

    EMC and EMCworld

    The past several weeks have been busy with various merger, acquisition and collaboration activity in the IT and data storage world. Summertime often brings new relationships and even summer marriages. The most recent is EMC and Lenovo announcing a new partnership that includes OEM sourcing of technology, market expansion and other initiatives. Hmm, does anybody remember who EMC's former desktop and server partner was, or who put Lenovo out for adoption several years ago?

    Here is the press release from EMC and Lenovo that you can read yourself vs. me simply paraphrasing it:

    Lenovo and EMC Team Up In Strategic Worldwide Partnership
    A Solid Step in Lenovo’s Aspiration to Be a Player in Industry Standard Servers and Networked Storage with EMC’s Leading Technology; EMC Further Strengthens Ability to Serve Customers’ Storage Solutions Needs in China and Other Emerging Markets; Companies Agree to Form SMB-Focused Storage Joint Venture
    BEIJING, China – August 1, 2012
    Lenovo (HKSE: 992) (ADR: LNVGY) and EMC Corporation (NYSE: EMC) today announced a broad partnership that enhances Lenovo’s position in industry standard servers and networked storage solutions, while significantly expanding EMC’s reach in China and other key, high-growth markets. The new partnership is expected to spark innovation and additional R&D in the server and storage markets by maximizing the product development talents and resources at both companies, while driving scale and efficiency in the partners’ respective supply chains.
    The partnership is a strong strategic fit, leveraging the two leading companies’ respective strengths, across three main areas:

    • First, Lenovo and EMC have formed a server technology development program that will accelerate and extend Lenovo’s capabilities in the x86 industry-standard server segment. These servers will be brought to market by Lenovo and embedded into selected EMC storage systems over time.
    • Second, the companies have forged an OEM and reseller relationship in which Lenovo will provide EMC’s industry-leading networked storage solutions to its customers, initially in China and expanding into other global markets in step with the ongoing development of its server business.
    • Finally, EMC and Lenovo plan to bring certain assets and resources from EMC’s Iomega business into a new joint venture which will provide Network Attached Storage (NAS) systems to small/medium businesses (SMB) and distributed enterprise sites.

    “Today’s announcement with industry leader EMC is another solid step in our journey to build on our foundation in PCs and become a leader in the new PC-plus era,” said Yuanqing Yang, Lenovo chairman and CEO. “This partnership will help us fully deliver on our PC-plus strategy by giving us strong back-end capabilities and business foundation in servers and storage, in addition to our already strong position in devices. EMC is the perfect partner to help us fully realize the PC-plus opportunity in the long term.”
    Joe Tucci, chairman and CEO of EMC, said, “The relationship with Lenovo represents a powerful opportunity for EMC to significantly expand our presence in China, a vibrant and very important market, and extend it to other parts of the world over time. Lenovo has clearly demonstrated its ability to apply its considerable resources and expertise not only to enter, but to lead major market segments. We’re excited to partner with Lenovo as we focus our combined energies serving a broader range of customers with industry-leading storage and server solutions.”
    In the joint venture, Lenovo will contribute cash, while EMC will contribute certain assets and resources of Iomega. Upon closing, Lenovo will hold a majority interest in the new joint venture. During and after the transition from independent operations to the joint venture, customers will experience continuity of service, product delivery and warranty fulfillment. The joint venture is subject to customary closing procedures including regulatory approvals and is expected to close by the end of 2012.
    The partnership described here is not considered material to either company’s current fiscal year earnings.
    About Lenovo
    Lenovo (HKSE: 992) (ADR: LNVGY) is a $US30 billion personal technology company and the world’s second largest PC company, serving customers in more than 160 countries. Dedicated to building exceptionally engineered PCs and mobile internet devices, Lenovo’s business is built on product innovation, a highly efficient global supply chain and strong strategic execution. Formed by Lenovo Group’s acquisition of the former IBM Personal Computing Division, the Company develops, manufactures and markets reliable, high-quality, secure and easy-to-use technology products and services. Its product lines include legendary Think-branded commercial PCs and Idea-branded consumer PCs, as well as servers, workstations, and a family of mobile internet devices, including tablets and smart phones. Lenovo has major research centers in Yamato, Japan; Beijing, Shanghai and Shenzhen, China; and Raleigh, North Carolina. For more information, see www.lenovo.com.
    About EMC
    EMC Corporation is a global leader in enabling businesses and service providers to transform their operations and deliver IT as a service. Fundamental to this transformation is cloud computing. Through innovative products and services, EMC accelerates the journey to cloud computing, helping IT departments to store, manage, protect and analyze their most valuable asset — information — in a more agile, trusted and cost-efficient way. Additional information about EMC can be found at www.EMC.com.

    StorageIO industry trends and perspectives

    What is my take?

    Disclosures
    I have been buying and using Lenovo desktop and laptop products for over a decade and currently typing this post from my X1 ThinkPad equipped with a Samsung SSD. Likewise I bought an Iomega IX4 NAS a couple of years ago (so I am a customer), am a Retrospect customer (EMC bought and then sold them off), used to be a Mozy user (now a former customer) and EMC has been a client of StorageIO in the past.

    Lenovo Thinkpad
    Some of my Lenovo(s) and EMC Iomega IX4

    Let us take a step back for a moment. Lenovo was the spinout and sale from IBM, and has a US base in Raleigh, North Carolina. While IBM still partners with Lenovo for desktops, IBM over the past decade or so has been more strategically focused on big enterprise environments, software and services. Note that IBM has continued enhancing its own Intel-based servers (e.g. xSeries), proprietary Power processor series, storage and technology solutions (here, here, here and here among others). However, for the most part, IBM has moved away from catering to the consumer, SOHO and SMB server, storage, desktop and related technology environments.

    EMC on the other hand started out in the data center, growing up to challenge IBM's dominance of data storage in big environments, to now being a major storage player for big and little data, from enterprise to cloud to desktop to server, consumer to data center. EMC was also partnered with Dell, which competes directly with Lenovo, until that relationship ended a few years ago. EMC for its part has been on a growth and expansion strategy, adding technologies, companies, DNA and capabilities along with staff in the desktop, server and other spaces from a data, information and storage perspective, not to mention VMware (virtualization and cloud) and RSA (security) among others such as Mozy for cloud backup. EMC is also using more servers in its solutions, ranging from Iomega-based NAS to VNX unified storage systems, Greenplum big data to Centera archiving, ATMOS and various data protection solutions among other products.

    StorageIO industry trends and perspectives

    Note that this is an industry wide trend of leveraging Intel Architecture (IA) along with AMD, Broadcom, and IBM Power among other general-purpose processors and servers as platforms for running storage and data applications or appliances.

    Overall, I think this is a good move for both EMC and Lenovo to expand their reach into adjacent markets, leveraging and complementing each other's strengths.

    Ok, let's see who is involved in the next IT summer relationship, nuff said for now.

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Kudos to Lenovo: Customer service redefined, or re-established?

    Kudos to Lenovo who I called yesterday to get a replacement key for my X1 laptop keypad.

    After spending time on their website, including finding the part number, SKU and other information, I could not figure out how to actually order the part. Concerned about calling and being routed between different call centers, as is too often the case, I finally decided to give the phone route a try.

    I was surprised, no, shocked at how quick and easy it was once I got routed to the Atlanta Lenovo support center to get what I needed.

    Thus late yesterday afternoon when I called, the Atlanta Lenovo agent was able to take my laptop serial number, make and model, and a description of the part needed, all without transferring me to other people. They then made arrangements for not just a replacement key, but an entire new keyboard, with total phone time of probably less than 15 minutes.

    This morning by 10:30 AM CT a box with the new replacement keyboard arrived. In between calls and other work, in a matter of minutes the old keyboard was removed, the new one installed and tested, and I now get to type normally instead of dealing with a broken Y key.

    In less than 24 hours from making the call, UPS arrived back to pick up the old keyboard and return it to the depot.

    Here are some photos for you propeller heads (tech heads or geeks), beginning with the X1 keyboard and broken key before the replacement.

    Lenovo X1 keyboard replacement

    The following shows the keyboard removed, looking towards the screen, with the keyboard flat cables still installed. Note that the small black connectors (two of them) flip up and the cables slide out (or in for installation).
    Lenovo X1 keyboard replacement

    In this photo, you can see one of the two keyboard connectors, plus where the Samsung SSD I installed replaces the HDD that the X1 shipped with. Also shown is the Sierra Wireless 4G card that I use while traveling, which provides an alternative when others are trying to figure out how to use available public WiFi.
    Lenovo X1 keyboard replacement

    In this image, you can see the DRAM (e.g. memory) along with the two connectors where the keyboard cables attach, before the cables have been reconnected.
    Lenovo X1 keyboard replacement

    With the new cables connected and the keyboard reinstalled and tested, the old keyboard has been boxed up, the return shipping sticker applied, UPS called, and the box picked up, on its way back to Lenovo.
    Lenovo X1 keyboard replacement

    For that, kudos to Lenovo for delivering on what in the past was taken for granted as good customer service and support, however these days is all too often the exception.

    Next time somebody asks why I use Lenovo ThinkPads, guess what story I will tell them.

    Ok, nuff said for now

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved