Microsoft Windows Server 2019 Insiders Preview


Microsoft Windows Server 2019 Insiders Preview has been announced. In the past, Windows Server 2019 might have been named Windows Server 2016 R2; it is what is known as a Long-Term Servicing Channel (LTSC) release. Microsoft recommends LTSC Windows Server for workloads such as Microsoft SQL Server, SharePoint and SDDC. The focus of Microsoft Windows Server 2019 Insiders Preview is on hybrid cloud, security, application development and deployment including containers, software defined data center (SDDC) and software defined data infrastructure, as well as converged along with hyper-converged infrastructure (HCI) management.

Windows Server 2019 Preview Features

Features and enhancements in the Microsoft Windows Server 2019 Insiders Preview span HCI management, security, hybrid cloud among others.

  • Hybrid cloud – Extending Active Directory, file server synchronization, cloud backup, applications spanning on-premises and cloud, and management.
  • Security – Protect, detect and respond capabilities including shielded VMs, a guarded fabric of attested guarded hosts, shielded Windows and Linux VMs, VMConnect for troubleshooting shielded Windows and Linux VMs, encrypted networks, and Windows Defender Advanced Threat Protection (ATP) among other enhancements.
  • Application platform – Developer and deployment tools for Windows Server containers and the Windows Subsystem for Linux (WSL). Note that Microsoft has also been reducing the size of the Server image while extending feature functionality. The smaller images take up less storage space, plus load faster. As part of continued serverless and container support (Windows and Linux along with Docker), there are options for deployment orchestration including Kubernetes (in beta). Other enhancements include extending previous support for the Windows Subsystem for Linux (WSL).

Other enhancements that are part of Microsoft Windows Server 2019 Insiders Preview include cluster sets in support of software defined data center (SDDC). Cluster sets expand SDDC scale by loosely coupling multiple failover clusters including compute, storage as well as hyper-converged configurations. Virtual machines have fluidity across member clusters within a cluster set and a unified storage namespace. Existing failover cluster management experiences are preserved for member clusters, along with a new cluster set instance of the aggregate resources.

Management enhancements include S2D software defined storage performance history, Project Honolulu support for storage updates, along with PowerShell cmdlet updates, as well as System Center 2019. Learn more about Project Honolulu hybrid management here and here.
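As an illustration of those PowerShell cmdlet updates, the following is a hedged sketch of querying the new S2D performance history from a Windows Server 2019 preview cluster node; cmdlet behavior can change between preview builds, so treat this as illustrative only.

# Hedged sketch, run on a Windows Server 2019 preview S2D cluster node
Get-ClusterPerformanceHistory                       # alias Get-ClusterPerf; cluster-level series
Get-PhysicalDisk | Get-ClusterPerformanceHistory    # per-drive performance history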

Microsoft and Windows LTSC and SAC

As a refresher, Microsoft Windows (along with other software) is now being released on two paths: the more frequent semi-annual channel (SAC), and less frequent LTSC releases. Some other things to keep in mind are that SAC releases are focused on Server Core and Nano Server as a container image, while LTSC includes Server with Desktop Experience as well as Server Core. For example, Windows Server 2016, released in the fall of 2016, is an LTSC, while the 1709 release was a SAC which had specific enhancements for container related environments.

There was some confusion in the fall of 2017 when 1709 was released, as it was optimized for container and serverless environments and thus lacked Storage Spaces Direct (S2D), leading some to speculate that S2D was dead. S2D, among other items that were not in the 1709 SAC, is very much alive and enhanced in the LTSC preview for Windows Server 2019. Learn more about Microsoft LTSC and SAC here.

Test Driving Installing The Bits

One of the enhancements with the LTSC preview candidate Server 2019 is improved upgrades of existing environments. Granted, not everybody will choose an in-place upgrade keeping existing files, however some may find the capability useful. I chose the upgrade keeping current files in place to see how it worked. To do the upgrade I used a clean and up to date Windows Server 2016 Datacenter edition with desktop. This test system is a VMware ESXi 6.5 guest running on flash SSD storage. Before the upgrade to Windows Server 2019, I made a VMware vSphere snapshot so I could quickly and easily restore the system to a good state should something not work.
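For those who prefer VMware PowerCLI over the vSphere client, the following is a minimal sketch of taking that pre-upgrade snapshot; the vCenter name and VM name shown are hypothetical placeholders.

# Hedged PowerCLI sketch (hypothetical vCenter and VM names) of the pre-upgrade snapshot
Connect-VIServer -Server 'vcenter.example.com'
New-Snapshot -VM (Get-VM -Name 'WS2016-LAB') -Name 'Pre-WS2019-upgrade' -Description 'Before in-place upgrade to Server 2019 preview'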

To get the bits, go to Windows Insiders Preview Downloads (you will need to register)

Windows Server 2019 LTSC build 17623 is available in 18 languages in an ISO format and requires a key.

The keys for the pre-release unlimited activations are:
Datacenter Edition: 6XBNX-4JQGW-QX6QG-74P76-72V67
Standard Edition: MFY9F-XBN2F-TYFMP-CCV49-RMYVH

The first step is downloading the bits from the Windows Insiders Preview page, including selecting the language for the image to use.

Getting the windows server 2019 preview bits
Select the language for the image to download

windows server 2019 select language

Starting the download

Once you have the image downloaded, apply it to your bare metal server or hypervisor guest. In this example, I copied the Windows Server 2019 image to a VMware ESXi server for a Windows Server 2016 guest machine to access via its virtual CD/DVD.

pre upgrade check windows server version
Verify the Windows Server version before upgrade

After the download, access the image; in this case, I attached the image to the virtual machine CD, then accessed it and ran the setup application.

Microsoft Windows Server 2019 Insiders Preview download

Download updates now or later

license key

Entering license key for pre-release windows server 2019

Microsoft Windows Server 2019 Insiders Preview datacenter desktop version

Selecting Windows Server Datacenter with Desktop

Microsoft Windows Server 2019 Insiders Preview license

Accepting Software License for pre-release version.

Next up is determining to do a new install (keep nothing), or an in-place upgrade. I wanted to see how smooth the in-place upgrade was so selected that option.

Microsoft Windows Server 2019 Insiders Preview inplace upgrade

What to keep, nothing, or existing files and data


Confirming your selections

Microsoft Windows Server 2019 Insiders Preview install start

Ready to start the installation process

Microsoft Windows Server 2019 Insiders Preview upgrade in progress
Installation underway of Windows Server 2019 preview

Once the installation is complete, verify that Windows Server 2019 is now installed.

Microsoft Windows Server 2019 Insiders Preview upgrade completed
Completed upgrade from Windows Server 2016 to Microsoft Windows Server 2019 Insiders Preview

The above shows verifying the system build using PowerShell, as well as the message in the lower right corner of the display. Granted, the above does not show the new functionality, however you should get an idea of how quickly a Windows Server 2019 preview can be deployed so you can explore and try out the new features.
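As a simple illustration, here is one way (a sketch, not the only way) to confirm the installed version and build from PowerShell after the upgrade.

# One way to confirm the installed Windows Server version and build after the upgrade
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion' |
    Select-Object ProductName, ReleaseId, CurrentBuild, UBR
[System.Environment]::OSVersion.Version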

Where to learn more

Learn more about Microsoft Windows Server 2019 Insiders Preview, Windows Server Storage Spaces Direct (S2D), Azure and related software defined data center (SDDC) and software defined data infrastructure (SDDI) topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What this all means and wrap-up

Microsoft Windows Server 2019 Insiders Preview gives a glimpse of some of the new features that are part of the next evolution of Windows Server in support of hybrid IT environments. In addition to new features and functionality that convey support not only for hybrid cloud, but also hybrid application development, deployment, DevOps and workloads, Microsoft is showing flexibility in management, ease of use, scalability, along with security and scale-out stability. If you have not looked at Windows Server for a while, or are involved with serverless, containers, Kubernetes among other initiatives, now is a good time to check out Microsoft Windows Server 2019 Insiders Preview.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Use Intel Optane NVMe U.2 SFF 8639 SSD drive in PCIe slot

Use NVMe U.2 SFF 8639 disk drive form factor SSD in PCIe slot


Need to install or use an Intel Optane NVMe 900P or other Nonvolatile Memory (NVM) Express (NVMe) based U.2 SFF 8639 disk drive form factor Solid State Device (SSD) in a PCIe slot?

For example, I needed to connect an Intel Optane NVMe 900P U.2 SFF 8639 drive form factor SSD into one of my servers using an available PCIe slot.

The solution I used was a carrier adapter card such as those from Ableconn (PEXU2-132 NVMe 2.5-inch U.2 [SFF-8639]), available via Amazon.com among other global venues.

Top Intel 750 NVMe PCIe AiC SSD, bottom Intel Optane NVMe 900P U.2 SSD with Ableconn carrier

The above image shows, on top, an Intel 750 NVMe PCIe Add in Card (AiC) SSD and, on the bottom, an Intel Optane NVMe 900P 280GB U.2 (SFF 8639) drive form factor SSD mounted on an Ableconn carrier adapter.


NVMe Tradecraft Refresher

NVMe is the protocol that is implemented with different topologies including local via PCIe using U.2 aka SFF-8639 (aka disk drive form factor), M.2 aka Next Generation Form Factor (NGFF) also known as "gum stick", along with PCIe Add in Card (AiC). NVMe accessed devices can be installed in laptops, ultrabooks, workstations, servers and storage systems using the various form factors. U.2 drives are also referred to by some as PCIe drives in that the NVMe command set protocol is implemented using a PCIe x4 physical connection to the devices. Jump ahead if you want to skip over the NVMe primer refresh material to learn more about U.2 8639 devices.

data infrastructure nvme u.2 8639 ssd
Various SSD device form factors and interfaces

In addition to form factor, NVMe devices can be direct attached and dedicated, rack and shared, as well as accessed via networks also known as fabrics such as NVMe over Fabrics.

NVMeoF FC-NVMe NVMe fabric SDDC
The many facets of NVMe as a front-end, back-end, direct attach and fabric

Context is important with NVMe in that fabric can mean NVMe over Fibre Channel (FC-NVMe) where the NVMe command set protocol is used in place of SCSI Fibre Channel Protocol (e.g. SCSI_FCP) aka FCP or what many simply know and refer to as Fibre Channel. NVMe over Fabric can also mean NVMe command set implemented over an RDMA over Converged Ethernet (RoCE) based network.

NVM and NVMe accessed flash SCM SSD storage

Another point of context is not to confuse Nonvolatile Memory (NVM), which is the storage or memory media, with NVMe, which is the interface for accessing storage (e.g. similar to SAS, SATA and others). As a refresher, NVM or the media are the various persistent memories (PM) including NVRAM, NAND Flash, 3D XPoint along with other storage class memories (SCM) used in SSDs (in various packaging).

Learn more about 3D XPoint with the following resources:

Learn more about (or refresh) your NVMe server storage I/O knowledge, experience and tradecraft skill set with this post here. View this piece here looking at NVM vs. NVMe and how one is the media where data is stored, while the other is an access protocol (e.g. NVMe). Also visit www.thenvmeplace.com to view additional NVMe tips, tools, technologies, and related resources.

NVMe U.2 SFF-8639 aka 8639 SSD

On quick glance, an NVMe U.2 SFF-8639 SSD may look like a SAS small form factor (SFF) 2.5" HDD or SSD. Also, keep in mind that HDD and SSD with SAS interface have a small tab to prevent inserting them into a SATA port. As a reminder, SATA devices can plug into SAS ports, however not the other way around which is what the key tab function does (prevents accidental insertion of SAS into SATA). Looking at the left-hand side of the following image you will see an NVMe SFF 8639 aka U.2 backplane connector which looks similar to a SAS port.

Note that depending on how it is implemented, including its internal controller, flash translation layer (FTL), firmware and other considerations, an NVMe U.2 or 8639 x4 SSD should have similar performance to a comparable NVMe x4 PCIe AiC (e.g. card) device. By comparable device, I mean the same type of NVM media (e.g. flash or 3D XPoint), FTL and controller. Likewise, generally a PCIe x8 should be faster than an x4; however, more PCIe lanes does not mean more performance, it's what's inside and how those lanes are actually used that matters.

NVMe U.2 SFF 8639 Drive (Software Defined Data Infrastructure Essentials CRC Press)

With U.2 devices the key tab that prevents SAS drives from inserting into a SATA port is where four pins that support PCIe x4 are located. What this all means is that a U.2 8639 port or socket can accept an NVMe, SAS or SATA device depending on how the port is configured. Note that the U.2 8639 port is either connected to a SAS controller for SAS and SATA devices or a PCIe port, riser or adapter.

On the left of the above figure is a view towards the backplane of a storage enclosure in a server that supports SAS, SATA, and NVMe (e.g. 8639). On the right of the above figure is the connector end of an 8639 NVMe SSD showing additional pin connectors compared to a SAS or SATA device. Those extra pins give PCIe x4 connectivity to the NVMe devices. The 8639 drive connectors enable a device such as an NVM or NAND flash based SSD to share a common physical storage enclosure with SAS and SATA devices, including optional dual-pathing.

More PCIe lanes may not mean faster performance; verify whether those lanes (e.g. x4, x8, x16, etc.) are present mechanically (e.g. physically) as well as electrically (they are also usable) and are actually being used. Also, note that some PCIe storage devices or adapters might be, for example, an x8 supporting two channels or devices each at x4. Likewise, some devices might be x16 yet only support four x4 devices.

NVMe U.2 SFF 8639 PCIe Drive SSD FAQ

Some common questions pertaining to NVMe U.2 aka SFF 8639 interface and form factor based SSDs include:

Why use U.2 type devices?

Compatibility with what's available for server storage I/O slots in a server, appliance or storage enclosure; the ability to mix and match SAS, SATA and NVMe (with some caveats) in the same enclosure; and support for higher density storage configurations maximizing available PCIe slots and enclosure density.

Is PCIe x4 with NVMe U.2 devices fast enough?

While not as fast as a PCIe AiC that fully supports x8 or x16 or higher, an x4 U.2 NVMe accessed SSD should be plenty fast for many applications. If you need more performance, then go with a faster AiC card.

Why not go with all PCIe AiC?

If you need the speed and simplicity, and have available PCIe card slots, then put as many of those in your systems or appliances as possible. On the other hand, some servers or appliances are PCIe slot constrained, so U.2 devices can be used to increase the number of devices attached to a PCIe backplane while also supporting SAS and SATA based SSDs or HDDs.

Why not use M.2 devices?

If your system or appliances supports NVMe M.2 those are good options. Some systems even support a combination of M.2 for local boot, staging, logs, work and other storage space while PCIe AiC are for performance along with U.2 devices.

Why not use NVMeoF?

Good question, why not, that is, if your shared storage system supports NVMeoF or FC-NVMe go ahead and use that, however, you might also need some local NVMe devices. Likewise, if yours is a software-defined storage platform that needs local storage, then NVMe U.2, M.2 and AiC or custom cards are an option. On the other hand, a shared fabric NVMe based solution may support a mixed pool of SAS, SATA along with NVMe U.2, M.2, AiC or custom cards as its back-end storage resources.

When not to use U.2?

If your system, appliance or enclosure does not support U.2 and you do not have a need for it. Or, if you need more performance such as from an x8 or x16 based AiC, or you need shared storage. Granted a shared storage system may have U.2 based SSD drives as back-end storage among other options.

How does the U.2 backplane connector attach to PCIe?

Via enclosures backplane, there is either a direct hardwire connection to the PCIe backplane, or, via a connector cable to a riser card or similar mechanism.

Does NVMe replace SAS, SATA or Fibre Channel as an interface?

The NVMe command set is an alternative to the traditional SCSI command set used in SAS and Fibre Channel. That means it can replace them, or co-exist with them, depending on your needs and preferences for accessing various storage devices.

Who supports U.2 devices?

Dell has supported U.2 aka PCIe drives in some of their servers for many years, as has Intel and many others. Likewise, U.2 8639 SSD drives, including 3D XPoint and NAND flash based, are available from Intel among others.

Can you have AiC, U.2 and M.2 devices in the same system?

If your server or appliance or storage system support them then yes. Likewise, there are M.2 to PCIe AiC, M.2 to SATA along with other adapters available for your servers, workstations or software-defined storage system platform.

NVMe U.2 carrier to PCIe adapter

The following images show examples of mounting an Intel Optane NVMe 900P U.2 8639 SSD on an Ableconn PCIe AiC carrier. Once the U.2 SSD is mounted, the Ableconn adapter inserts into an available PCIe slot similar to other AiC devices. From a server or storage appliance software perspective, the Ableconn is a pass-through device so your normal device drivers are used; for example, VMware vSphere ESXi 6.5 recognizes the Intel Optane device, and it is similar with Windows and other operating systems.

intel optane 900p u.2 8639 nvme drive bottom view
Intel Optane NVMe 900P U.2 SSD and Ableconn PCIe AiC carrier

The above image shows the Ableconn adapter carrier card along with NVMe U.2 8639 pins on the Intel Optane NVMe 900P.

intel optane 900p u.2 8639 nvme drive end view
Views of Intel Optane NVMe 900P U.2 8639 and Ableconn carrier connectors

The above image shows an edge view of the NVMe U.2 SFF 8639 Intel Optane NVMe 900P SSD along with those on the Ableconn adapter carrier. The following images show an Intel Optane NVMe 900P SSD installed in a PCIe AiC slot using an Ableconn carrier, along with how VMware vSphere ESXi 6.5 sees the device using plug and play NVMe device drivers.

NVMe U.2 8639 installed in PCIe AiC Slot
Intel Optane NVMe 900P U.2 SSD installed in PCIe AiC Slot

NVMe U.2 8639 and VMware vSphere ESXi
How VMware vSphere ESXi 6.5 sees NVMe U.2 device

Intel Optane NVMe 3D XPoint based and other SSDs

Here are some Amazon.com links to various Intel Optane NVMe 3D XPoint based SSDs in different packaging form factors:

Here are some Amazon.com links to various Intel and other vendor NAND flash based NVMe accessed SSDs including U.2, M.2 and AiC form factors:

Note in addition to carriers to adapt U.2 8639 devices to PCIe AiC form factor and interfaces, there are also M.2 NGFF to PCIe AiC among others. An example is the Ableconn M.2 NGFF PCIe SSD to PCI Express 3.0 x4 Host Adapter Card.

In addition to Amazon.com, Newegg.com, Ebay and many other venues carry NVMe related technologies.
The Intel Optane NVMe 900P is newer; however, the Intel 750 Series along with other Intel NAND flash based SSDs are still good price performers and provide value. I have accumulated several Intel 750 NVMe devices over the past few years as they are great price performers. Check out this related post: Get in the NVMe SSD game (if you are not already).

Where To Learn More

View additional NVMe, SSD, NVM, SCM, Data Infrastructure and related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

NVMe accessed storage is in your future, however there are various questions to address including exploring your options for type of devices, form factors, configurations among other topics. Some NVMe accessed storage is direct attached and dedicated in laptops, ultrabooks, workstations and servers including PCIe AiC, M.2 and U.2 SSDs, while others are shared networked aka fabric based. NVMe over fabric (e.g. NVMeoF) includes RDMA over converged Ethernet (RoCE) as well as NVMe over Fibre Channel (e.g. FC-NVMe). Networked fabric accessed NVMe access of pooled shared storage systems and appliances can also include internal NVMe attached devices (e.g. as part of back-end storage) as well as other SSDs (e.g. SAS, SATA).

General wrap-up (for now) of NVMe U.2 8639 and related tips includes the following (a quick PowerShell check follows the list):

  • Verify the performance of the device vs. how many PCIe lanes exist
  • Update any applicable BIOS/UEFI, device drivers and other software
  • Check the form factor and interface needed (e.g. U.2, M.2 / NGFF, AiC) for a given scenario
  • Look carefully at the NVMe devices being ordered for proper form factor and interface
  • With M.2 verify that it is an NVMe enabled device vs. SATA
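Here is the quick hedged PowerShell check (Windows) referenced in the tips above, showing which drives are NVMe attached rather than SAS or SATA; output and columns will vary by system.

# Quick check from an elevated PowerShell session of drive bus types (NVMe vs. SAS or SATA)
Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, Size | Format-Table -AutoSize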

Learn more about NVMe at www.thenvmeplace.com including how to use Intel Optane NVMe 900P U.2 SFF 8639 disk drive form factor SSDs in PCIe slots as well as for fabric among other scenarios.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

AWS S3 Storage Gateway Revisited (Part I)


AWS S3 Storage Gateway Revisited (Part I)

This Amazon Web Services (AWS) Storage Gateway Revisited post is a follow-up to the AWS Storage Gateway test drive and review I did a few years ago (thus why it's called revisited). As part of a two-part series, this first post looks at what AWS Storage Gateway is, how it has improved since my last review, along with deployment options. The second post in the series looks at a sample test drive deployment and use.

If you need an AWS primer and overview of various services such as Elastic Cloud Compute (EC2), Elastic Block Storage (EBS), Elastic File Service (EFS), Simple Storage Service (S3), Availability Zones (AZ), Regions and other items check this multi-part series (Cloud conversations: AWS EBS, Glacier and S3 overview (Part I) ).


As a quick refresher, S3 is the AWS bulk, high-capacity unstructured data and object storage service, along with its companion deep cold (e.g. inactive) Glacier. There are various S3 storage service classes including standard, reduced redundancy storage (RRS) along with infrequent access (IA) that have different availability, durability, performance, service level and cost attributes.

Note that S3 IA is not Glacier, as your data always remains online accessible, while Glacier data can be offline. AWS S3 can be accessed via its API, as well as via HTTP REST calls, AWS tools along with those from third parties. Third-party tools include NAS file access such as S3FS for Linux, which I use for my Ubuntu systems to mount S3 buckets and use similar to other mount points. Other tools include Cloudberry, S3 Motion and S3 Browser, as well as plug-ins available in most data protection (backup, snapshot, archive) software tools and storage systems today.
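As a simple illustration of the AWS tools route (not specific to the Storage Gateway), the following is a minimal sketch using the AWS Tools for PowerShell module; it assumes the module is installed and credentials are configured, and the bucket, file and key names are hypothetical placeholders.

# Minimal sketch using AWS Tools for PowerShell (assumes module installed and credentials configured)
Import-Module AWSPowerShell
Get-S3Bucket | Select-Object BucketName, CreationDate
# Upload a test object to the Infrequent Access (IA) storage class; names are placeholders
Write-S3Object -BucketName 'your-bucket-name' -File 'C:\temp\demo.txt' -Key 'demo/demo.txt' -StorageClass STANDARD_IA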

AWS S3 Storage Gateway and What’s New

The Storage Gateway is the AWS tool that you can use for accessing S3 buckets and objects via your block volume, NAS file or tape based applications. The Storage Gateway is intended to give S3 bucket and object access to on-premises applications and data infrastructure functions including data protection (backup/restore, business continuance (BC), business resiliency (BR), disaster recovery (DR) and archiving), along with storage tiering to cloud.

Some of the things that have evolved with the S3 Storage Gateway include:

  • Easier, streamlined download, installation, deployment
  • Enhanced Virtual Tape Library (VTL) and Virtual Tape support
  • File serving and sharing (not to be confused with Elastic File Services (EFS))
  • Ability to define your own bucket and associated parameters
  • Bucket options including Infrequent Access (IA) or standard
  • Options for AWS EC2 hosted, or on-premises VMware as well as Hyper-V gateways (file only supports VMware and EC2)

AWS Storage Gateway Three Functions

AWS Storage Gateway can be deployed for three basic functions:

    AWS Storage Gateway File Architecture via AWS.com

  • File Gateway (NFS NAS) – Files, folders, objects and other items are stored in AWS S3 with a local cache for low latency access to the most recently used data. With this option, you can create folders and subdirectories similar to a regular file system or NAS device, as well as configure various security, permissions and access control policies. Data is stored in S3 buckets for which you specify policies such as standard or Infrequent Access (IA) among other options. The gateway can be AWS hosted via EC2, or run as a VMware virtual machine (VM) for an on-premises file gateway.

    Also, note that AWS cautions on multiple concurrent writers to S3 buckets with Storage Gateway so check the AWS FAQs which may have changed by the time you read this. Current file share limits (subject to change) include 1 file gateway share per S3 bucket (e.g. a one to one mapping between file share and a bucket). There can be 10 file shares per gateway (e.g. multiple shares each with its own bucket per gateway) and a maximum file size of 5TB (same as maximum S3 object size). Note that you might hear about object storage systems supporting unlimited size objects which some may do, however generally there are some constraints either on their API front-end, or what is currently tested. View current AWS Storage Gateway resource and specification limits here.

  • AWS Storage Gateway Non-Cached Volume Architecture via AWS.com

    AWS Storage Gateway Cached Volume Architecture via AWS.com

  • Volume Gateway (Block iSCSI) – Leverages S3 with point-in-time backups as AWS EBS snapshots. Two options exist: Cached volumes with low-latency access to the most recently used data (e.g. data is stored in AWS, with a local cache copy on disk or SSD), and Stored Volumes (e.g. non-cached) where the primary copy is local and periodic snapshot backups are sent to AWS. AWS provides an EC2 hosted gateway, as well as VMs for VMware and Hyper-V on Windows Server. A sketch of attaching a Windows host to a volume gateway iSCSI target follows this list.

    Current Storage Gateway volume limits (subject to change) include maximum size of a cached volume 32TB, maximum size of a stored volume 16TB. Note that snapshots of cached volumes larger than 16TB can only be restored to a storage gateway volume, they can not be restored as an EBS volume (via EC2). There are a maximum of 32 volumes for a gateway with total size of all volumes for a gateway (cached) of 1,024TB (e.g. 1PB). The total size of all volumes for a gateway (stored volume) is 512TB. View current AWS Storage Gateway resource and specification limits here.

  • AWS Storage Gateway VTL Architecture via AWS.com

  • Virtual Tape Library Gateway (VTL) – Supports saving your data for backup/BC/DR/archiving into S3 and Glacier storage tiers. Being a Virtual Tape Library (e.g. VTL) you can specify emulation of tapes for compatibility with your existing backup, archiving and data protection software, management tools and processes.

    Storage Gateway limits for tape include minimum size of a virtual tape 100GB, maximum size of a virtual tape 2.5TB, maximum number of virtual tapes for a VTL is 1,500 and total size of all tapes in a VTL is 1PB. Note that the maximum number of virtual tapes in an archive is unlimited and total size of all tapes in an archive is also unlimited. View current AWS Storage Gateway resource and specification limits here.
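As mentioned in the Volume Gateway item above, volumes are presented as iSCSI targets. The following is a hedged sketch of attaching a Windows host using the built-in iSCSI initiator cmdlets; the gateway address and target name are hypothetical placeholders.

# Hedged sketch (hypothetical gateway IP and target name) of attaching a Windows host to a Storage Gateway iSCSI volume
Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress '10.0.0.50'
Get-IscsiTarget | Where-Object { $_.NodeAddress -like '*myvolume*' } | Connect-IscsiTarget -IsPersistent $true
Get-Disk | Where-Object { $_.BusType -eq 'iSCSI' }   # confirm the new volume is visible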


Where To Learn More

What This All Means

Which gateway function and mode to use (cached or non-cached for volumes) depends on what it is that you are trying to do. Likewise, choosing between EC2 (cloud hosted) or on-premises Hyper-V and VMware VMs depends on what your data infrastructure support requirements are. Overall I like the progress that AWS has put into evolving the Storage Gateway, granted it might not be applicable for all use cases. Continue reading more and view images from the AWS Storage Gateway Revisited test drive in part two located here.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Azure Stack TP3 Overview Preview Review Part II


Azure Stack TP3 Overview Preview (Part II) Install Review

This is part two of a two-part series looking at Microsoft Azure Stack with a focus on my experiences installing Microsoft Azure Stack Technical Preview 3 (TP3) including into a nested VMware vSphere ESXi environment. Read part one here that provides a general overview of Azure Stack.

Azure Stack Review and Install

Being familiar with the Microsoft Azure public cloud, having used it for a few years now, I wanted to gain some closer insight and experience, and expand my tradecraft on Azure Stack, by installing TP3. This is similar to what I have done in the past with OpenStack, Hadoop, Ceph, VMware, Hyper-V and many others, some of which I need to get around to writing about sometime. As a refresher from part one of this series, the following is an image via Microsoft showing the Azure Stack TP3 architecture; click here or on the image to learn more including the names and functions of the various virtual machines (VMs) that make up Azure Stack.

Microsoft Azure Stack architecture
Click here or on the above image to view list of VMs and other services (Image via Microsoft.com)

What's Involved Installing Azure Stack TP3?

The basic steps are as follows:

  • Read this Azure Stack blog post (Azure Stack)
  • Download the bits (e.g. the Azure Stack software) from here, where you access the Azure Stack Downloader tool.
  • Planning your deployment making decisions on Active Directory and other items.
  • Prepare the target server (physical machine aka PM, or virtual machine VM) that will be the Azure Stack destination.
  • Copy Azure Stack software and installer to target server and run pre-install scripts.
  • Modify PowerShell script file if using a VM instead of a PM
  • Run the Azure Stack CloudBuilder setup, configure unattend.xml if needed or answer prompts.
  • Server reboots, select Azure Stack from two boot options.
  • Prepare your Azure Stack base system (time, network NICs in static or DHCP, if running on VMware install VMtools)
  • Determine if you will be running with Azure Active Directory (AAD) or standalone Active Directory Federated Services (ADFS).
  • Update any applicable installation scripts (see notes that follow)
  • Deploy the script, then extend the Azure Stack TP3 PoC as needed

Note that this is a large download of about 16GB (23GB with the optional Windows Server 2016 demo ISO).

Use the AzureStackDownloader tool to download the bits (about 16GB or 23GB with optional Windows Server 2016 base image) which will either be in several separate files which you stitch back together with the MicrosoftAzureStackPOC tool, or as a large VHDX file and smaller 6.8GB ISO (Windows Server 2016). Prepare your target server system for installation once you have all the software pieces downloaded (or do the preparations while waiting for download).

Once you have the software downloaded, if it is a series of eight .bin files (7 of about 2GB, 1 of around 1.5GB), it is a good idea to verify their checksums, then stitch them together on your target system, or on a staging storage device or file share. Note that for the actual deployment first phase, the large resulting cloudbuilder.vhdx file will need to reside in the C:\ root location of the server where you are installing Azure Stack.
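As an example of the checksum step, the following is a quick sketch using the built-in Get-FileHash cmdlet; the staging path and file name pattern are hypothetical placeholders.

# Hedged sketch (hypothetical staging path and file names) of checking hashes of the downloaded pieces
Get-ChildItem -Path 'D:\Staging\AzureStackPOC*.bin' | Get-FileHash -Algorithm SHA256 |
    Format-Table Hash, Path -AutoSize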

server storageio nested azure stack tp3 vmware

Azure Stack deployment prerequisites (Microsoft) include:

  • At least 12 cores (or more), dual socket processor if possible
  • As much DRAM as possible (I used 100GB)
  • Put the operating system disk on flash SSD (SAS, SATA, NVMe) if possible, allocate at least 200GB (more is better)
  • Four x 140GB or larger (I went with 250GB) drives (HDD or SSD) for data deployment drives
  • A single NIC or adapter (I put mine into static instead of DHCP mode)
  • Verify your physical or virtual server BIOS has VT enabled

The above image helps to set the story of what is being done. On the left is a bare metal (BM) or physical machine (PM) install of Azure Stack TP3; on the right, a nested VMware (vSphere ESXi 6.5) virtual machine (VM) hardware version 11 approach. Note that you could also do a Hyper-V nested deployment among other approaches. Shown in the image above, common to both a BM or VM is a staging area (which could be space on your system drive) where the Azure Stack download occurs. If you use a separate staging area, then simply copy the individual .bin files and stitch them together into the larger .VHDX, or copy the larger .VHDX; which is better is up to your preference.

Note that if you use the nested approach, there are a couple of configuration (PowerShell) scripts that need to be updated. These changes are to trick the installer into thinking that it is on a PM when it checks to see if on physical or virtual environments.

Also note that if using the nested approach, make sure you have your VMware vSphere ESXi host along with the specific VM properly configured (e.g. that virtualization and other features are presented to the VM). With vSphere ESXi 6.5 and virtual machine hardware version 11, nesting is night and day easier vs. earlier generations.

Something else to explain here is that you will initially start the Azure Stack install preparation using a standard Windows Server (I used a 2016 version) where the .VHDX is copied into its C:\ root. From there you will execute some PowerShell scripts to setup some configuration files, one of which needs to be modified for nesting.

Once those prep steps are done, there is a Cloudbuilder deploy script that gets run that can be done with an unattend.xml file or manual input. This step will cause a dual-boot option to be added to your server where you can select Azure Stack or your base prep Windows Server instance, followed by reboot.

After the reboot occurs and you choose to boot into Azure Stack, this is the server instance that will actually run the deployment script, as well as build and launch all the VMs for the Azure Stack TP3 PoC. This is where I recommend having a rough sketch like the one above to annotate layers as you go, so you remember what layer you are working at. Don't worry, it becomes much easier once all is said and done.

Speaking of preparing your server, refer to Microsoft specs, however in general give the server as much RAM and cores as possible. Also if possible place the system disk on a flash SSD (SAS, SATA, NVMe) and make sure that it has at least 200GB, however 250 or even 300GB is better (just in case you need more space).

Additional configuration tips include allocating four data disks for Azure Stack; if possible make these SSDs as well, however it is more important IMHO to have at least the system disk on fast flash SSD. Another tip is to enable only one network card or NIC and put it into static vs. DHCP address mode to make things easier later.
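Here is a minimal sketch of putting that single NIC into a static configuration from PowerShell; the adapter alias and addresses are hypothetical, so adjust to your own network and avoid the reserved subnets noted in the tips that follow.

# Hedged sketch (hypothetical adapter alias and addresses) of setting the single NIC to static instead of DHCP
New-NetIPAddress -InterfaceAlias 'Ethernet0' -IPAddress 192.168.1.50 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet0' -ServerAddresses 192.168.1.10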

Tip: If running nested, vSphere 6.5 worked the smoothest as had various issues or inconsistencies with earlier VMware versions, even with VMs that ran nested just fine.

Tip: Why run nested? Simple, I wanted to be able to use VMware tools, do snapshots to go back in time, plus share the server with some other activities until ready to give Azure Stack TP3 its own PM.

Tip: Do not connect the POC machine to the following subnets (192.168.200.0/24, 192.168.100.0/27, 192.168.101.0/26, 192.168.102.0/24, 192.168.103.0/25, 192.168.104.0/25) as Azure Stack TP3 uses those.

storageio azure stack tp3 vmware configuration

Since I decided to use a nested VM deploying using VMware, there were a few extra steps needed that I have included as tips and notes. Following is view via vSphere client of the ESXi host and VM configuration.

The following image combines a couple of different things including:

A: Showing the contents of C:\Azurestack_Supportfiles directory

B: Modifying the PrepareBootFromVHD.ps1 file if deploying on virtual machine (See tips and notes)

C: Showing contents of staging area including individual .bin files along with large CloudBuilder.vhdx

D: Running the PowerShell script commands to prepare the PrepareBootFromVHD.ps1 and related items

prepariing azure stack tp3 cloudbuilder for nested vmware deployment

From PowerShell (administrator):

# Variables
$Uri = 'https://raw.githubusercontent.com/Azure/AzureStack-Tools/master/Deployment/'
$LocalPath = 'c:\AzureStack_SupportFiles'

# Create folder
New-Item $LocalPath -type directory

# Download files
( 'BootMenuNoKVM.ps1', 'PrepareBootFromVHD.ps1', 'Unattend.xml', 'unattend_NoKVM.xml') | foreach { Invoke-WebRequest ($uri + $_) -OutFile ($LocalPath + '\' + $_) }

After you do the above, decide if you will be using an Unattend.xml or manual entry of items for building the Azure Stack deployment server (e.g. a Windows Server). Note that the above PowerShell script created the C:\azurestack_supportfiles folder and downloads the script files for building the cloud image using the previously downloaded Azure Stack CloudBuilder.vhdx (which should be in C:\).

Note and tip is that if you are doing a VMware or virtual machine based deployment of TP3 PoC, you will need to change C:\PrepareBootFromVHD.ps1 in the Azure Stack support files folder. Here is a good resource on what gets changed via Github that shows an edit on or about line 87 of PrepareBootFromVHD.ps1. If you run the PrepareBootFromVHD.ps1 script on a virtual machine you will get an error message, the fix is relatively easy (after I found this post).

Look in PrepareBootFromVHD.ps1 for something like the following around line 87:

if ((Get-Disk | Where-Object {$_.IsBoot -eq $true}).Model -match 'Virtual Disk') {
    Write-Host "The server is currently already booted from a virtual hard disk, to boot the server from the CloudBuilder.vhdx you will need to run this script on an Operating System that is installed on the physical disk of this server."
    Exit
}

You can either remove the "exit" command, or, change the test for "Virtual Disk" to something like "X", for fun I did both (and it worked).

Note that you only have to make the above and another change in a later step if you are deploying Azure Stack TP3 as a virtual machine.

Once you are ready, go ahead and launch the PrepareBootFromVHD.ps1 script which will set the BCDBoot entry (more info here).
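For reference, launching the prep script looks something like the following sketch; the parameter names reflect the script as documented at the time of TP3 and may change, so check the script's built-in help before running.

# Hedged sketch of launching the prep script; parameter names are per the TP3-era documentation and may differ
cd C:\AzureStack_SupportFiles
.\PrepareBootFromVHD.ps1 -CloudBuilderDiskPath C:\CloudBuilder.vhdx -ApplyUnattend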

azure stack tp3 cloudbuilder nested vmware deployment

You will see a reboot and install, this is installing what will be called the physical instance. Note that this is really being installed on the VM system drive as a secondary boot option (e.g. azure stack).

azure stack tp3 dual boot option

After the reboot, log in to the new Azure Stack base system and complete any configuration, including adding VMware Tools if using VMware nested. Some other things to do include making sure you have your single network adapter set to static (makes things easier), along with any other updates or customizations. Before you run the next steps, you need to decide if you are going to use Azure Active Directory (AAD) or local ADFS.

Note that if you are not running on a virtual machine, simply open a PowerShell (administrator) session, and run the deploy script. Refer to here for more guidance on the various options available including discussion on using AAD or ADFS.

Note if you run the deployment script on a virtual machine, you will get an error which is addressed in the next section, otherwise, sit back and watch the progress..

CloudBuilder Deployment Time

Once you have your Azure Stack deployment system and environment ready, including a snapshot if on a virtual machine, launch the PowerShell deployment script. Note that you will need to have decided if deploying with Azure Active Directory (AAD) or Active Directory Federated Services (ADFS) for standalone aka submarine mode. There are also other options you can select as part of the deployment, discussed in the Azure Stack tips here (a must read) and here. I chose to do a submarine mode (e.g. not connected to Public Azure and AAD) deployment.

From PowerShell (administrator):

cd C:\CloudDeployment\Setup
$adminpass = ConvertTo-SecureString "youradminpass" -AsPlainText -Force
.\InstallAzureStackPOC.ps1 -AdminPassword $adminpass -UseADFS

Deploying on VMware Virtual Machines Tips

Here is a good tip via Gareth Jones (@garethjones294) that I found useful for updating one of the deployment script files (BareMetal_Tests.ps1 located in C:\CloudDeployment\Roles\PhysicalMachines\Tests folder) so that it would skip the bare metal (PM) vs. VM tests. Another good resource, even though it is for TP2 and early versions of VMware is TP2 deployment experiences by Niklas Akerlund (@vNiklas).

Note that this is a bit of a chicken and egg scenario unless you are proficient at digging into script files, since the BareMetal_Tests.ps1 file does not get unpacked until you run the CloudBuilder deployment script. If you run the script and get an error, then make the changes below and rerun the script as noted. Once you make the modification to the BareMetal_Tests.ps1 file, keep a copy in a safe place for future use.

Here are some more tips for deploying Azure Stack on VMware:

Per the tip mentioned above via Gareth Jones (tip: read Gareth's post vs. simply cutting and pasting the following, which is more of a guide):

  • Open the BareMetal_Tests.ps1 file in PowerShell ISE and navigate to line 376 (or in that area).
  • Change $false to $true, which will stop the script failing when checking to see if Azure Stack is running inside a VM.
  • Next go to line 453.
  • Change the last part of the line to read "Should Not BeLessThan 0", which will stop the script checking for the required amount of cores available.

After you make the above correction as with any error (and fix) during Azure Stack TP3 PoC deployment, simply run the following.

cd C:\CloudDeployment\Setup
.\InstallAzureStackPOC.ps1 -rerun

Refer to the extra links in the where to learn more section below that offer various tips, tricks and insight that I found useful, particular for deploying on VMware aka nested. Also in the links below are tips on general Azure Stack, TP2, TP3, adding services among other insight.

starting azure stack tp3 deployment

Tip: If you are deploying the Azure Stack TP3 PoC on a virtual machine, once you start the script above, copy the modified BareMetal_Tests.ps1 file (refer to the VMware deployment tips above).

Once the CloudBuilder deployment starts, sit back and wait; if you are using SSDs, it will take a while, and if using HDDs, it will take a long while (up to hours), however check in on it now and then to see progress or whether there are any errors. Note that some of the common errors will occur very early in the deployment, such as the BareMetal_Tests.ps1 one mentioned above.

azure stack tp3 deployment finished

Check in periodically to see how the deployment is progressing, as well as what is occurring. If you have the time, watch some of the scripts as you can see some interesting things, such as the software defined data center (SDDC) aka software-defined data infrastructure (SDDI) aka Azure Stack virtual environment being created. This includes virtual machine creation and population, creating the software defined storage using Storage Spaces Direct (S2D), virtual networking and Active Directory along with domain controllers among other activity.

azure stack tp3 deployment progress

After Azure Stack Deployment Completes

After you see the deployment complete, you can try accessing the management portal, however there may be some background processing still running. Here is a good tip post from Microsoft on connecting to Azure Stack using Remote Desktop (RDP) access. Use RDP from the Azure Stack deployment Windows Server and connect to a virtual machine named MAS-CON01, launch Server Manager and for Local Server disable Internet Explorer Enhanced Security (make sure you are on the right system, see the tip mentioned above). Disconnect from MAS-CON01 (refer to the Azure Stack architecture image above), then reconnect, and launch Internet Explorer with the portal URL (note the one the documentation said to use did not work for me).

Note the username for the Azure Stack system is AzureStack\AzureStackAdmin with the password you set for the administrator during setup. If you get an error, verify the URLs, check your network connectivity, wait a few minutes, as well as verify what server you are trying to connect from and to. Keep in mind that even if deploying on a PM or BM (e.g. a non-virtual server or VM), the Azure Stack TP3 PoC deployment creates a "virtual" software-defined environment with servers, storage (Azure Stack uses Storage Spaces Direct [S2D]) and software defined networking.

accessing azure stack tp3 management portal dashboard

Once able to connect to Azure Stack, you can add new services including virtual machine image instances such as Windows (use the Server 2016 ISO that is part of Azure Stack downloads), Linux or others. You can also go to these Microsoft resources for some first learning scenarios, using the management portals, configuring PowerShell and troubleshooting.

Where to learn more

The following provide more information and insight about Azure, Azure Stack, Microsoft and Windows among related topics.

  • Azure Stack Technical Preview 3 (TP3) Overview Preview Review
  • Azure Stack TP3 Overview Preview Review Part II
  • Azure Stack Technical Preview (get the bits aka software download here)
  • Azure Stack deployment prerequisites (Microsoft)
  • Microsoft Azure Stack troubleshooting (Microsoft Docs)
  • Azure Stack TP3 refresh tips (Azure Stack)
  • Here is a good post with a tip about not applying certain Windows updates to AzureStack TP3 installs.
  • Configure Azure Stack TP3 to be available on your own network (Azure Stack)
  • Azure Stack TP3 Marketplace syndication (Azure Stack)
  • Azure Stack TP3 deployment experiences (Azure Stack)
  • Frequently asked questions for Azure Stack (Microsoft)
  • Azure Active Directory (AAD) and Active Directory Federation Services (ADFS)
  • Deploy Azure Stack (Microsoft)
  • Connect to Azure Stack (Microsoft)
  • Azure Stack TP2 deployment experiences by Niklas Akerlund (@vNiklas) useful for tips for TP3
  • Deployment Checker for Azure Stack Technical Preview (Microsoft Technet)
  • Azure stack and other tools (Github)
  • How to enable nested virtualization on Hyper-V Windows Server 2016
  • Dell EMC announce Microsoft Hybrid Cloud Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack (Dell EMC)
  • Dell EMC Cloud for Microsoft Azure Stack Data Sheet (Dell EMC PDF)
  • Dell EMC Cloud Chats (Dell EMC Blog)
  • Microsoft Azure stack forum
  • Dell EMC Microsoft Azure Stack solution
  • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016
  • Overview Review of Microsoft ReFS (Reliable File System) and resource links
  • Via WServerNews.com Cloud (Microsoft Azure) storage considerations
  • Via CloudComputingAdmin.com Cloud Storage Decision Making: Using Microsoft Azure for cloud storage
  • www.thenvmeplace.com, www.thessdplace.com, www.objectstoragecenter.com and www.storageio.com/converge
What this all means

    A common question is whether there is demand for private and hybrid cloud; in fact, some industry expert pundits have even said private or hybrid are dead, which is interesting: how can something be dead if it is just getting started? Likewise, it is early to tell if Azure Stack will gain traction with various organizations, some of whom may have tried or struggled with OpenStack among others.

    Given a large number of Microsoft Windows-based servers on VMware, OpenStack, Public cloud services as well as other platforms, along with continued growing popularity of Azure, having a solution such as Azure Stack provides an attractive option for many environments. That leads to the question of if Azure Stack is essentially a replacement for Windows Servers or Hyper-V and if only for Windows guest operating systems. At this point indeed, Windows would be an attractive and comfortable option, however, given a large number of Linux-based guests running on Hyper-V as well as Azure Public, those are also primary candidates as are containers and other services.


    Some will say that if OpenStack, being free open source, is struggling in many organizations, how can Microsoft have success with Azure Stack? The answer could be that some organizations have struggled with OpenStack while others have not, due to the lack of commercial services and turnkey support. Having installed both OpenStack and Azure Stack (as well as VMware among others), Azure Stack, at least the TP3 PoC, is easy to install, granted it is limited to one node, unlike the production versions. Likewise, there are easy to use appliance versions of OpenStack that are limited in scale, as well as more involved installs that unlock full functionality.

    OpenStack, Azure Stack, VMware and others have their places, alongside, or supporting containers along with other tools. In some cases, those technologies may exist in the same environment supporting different workloads, as well as accessing various public clouds; after all, hybrid is the home run for many if not most legacy IT environments.

    Ok, nuff said (for now…).

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

    Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

    Gaining Server Storage I/O Insight into Microsoft Windows Server 2016

    Server Storage I/O Insight into Microsoft Windows Server 2016

    Updated 12/8/16

    In case you had not heard, Microsoft announced the general availability (GA, also known as Release To Manufacturing (RTM)) of the newest version of its Windows Server operating system, aka Windows Server 2016, along with System Center 2016. Note that as well as being released to traditional distribution mediums including MSDN, the Windows Server 2016 bits are also available on Azure.

    Microsoft Windows Server 2016
    Windows Server 2016 Welcome Screen – Source Server StorageIOlab.com

    For some this might be new news, or a refresh of what Microsoft announced a few weeks ago (e.g. the formal announcement). Likewise, some of you may not be aware that Microsoft is celebrating Windows Server's 20th birthday (read more here).

    Yet for others who have participated in the public beta aka public technical previews (TP) over the past year or two, or simply followed the information coming out of Microsoft and other venues, there should not be a lot of surprises.

    Whats New With Windows Server 2016

    Microsoft Windows Server 2016 Desktop
    Windows Server 2016 Desktop and tools – Source Server StorageIOlab.com

    Besides a new user interface including the visual GUI and PowerShell among others, there are many new features and functionality, summarized below:

    • Enhanced time-server with 1ms accuracy
    • Nano and Windows Containers (Linux via Hyper-V)
    • Hyper-V enhanced Linux services including shielded VMs
    • Simplified management (on-premises and cloud)
    • Storage Spaces Direct (S2D) and Storage Replica (SR) – view more here and here


    Storage Replica (SR) Scenarios including synchronous and asynchronous – Via Microsoft.com

    • Resilient File System aka ReFS (now default file system) storage tiering (cache)
    • Hot-swap virtual networking device support
    • Reliable Change Tracking (RCT) for faster Hyper-V backups
    • RCT improves resiliency vs. VSS change tracking
    • PowerShell and other management enhancements
    • Including subordinated / delegated management roles
    • Complement Azure AD with on-premises AD
    • Resilient/HA RDS using Azure SQL DB for connection broker
    • Encrypted VMs (at rest and during live migration)
    • AD Federation Services (FS) authenticates users in LDAP directories
    • vTPM for securing and encrypting Hyper-V VMs
    • AD Certificate Services (CS) increase support for TPM
    • Enhanced TPM support for smart card access management
    • AD Domain Services (DS) security resiliency for hybrid and mobile devices

    Here is a Microsoft TechNet post that goes into more detail of what is new in Windows Server 2016.

    Free ebook: Introducing Windows Server 2016 Technical Preview (Via Microsoft Press)

    Check out the above free ebook, after looking through it, I recommend adding it to your bookshelf. There are lots of good intro and overview material for Windows Server 2016 to get you up to speed quickly, or as a refresh.

    Storage Spaces Direct (S2D) CI and HCI

    Storage Spaces Direct (S2D) builds on Storage Spaces that appeared in earlier Windows and Windows Server editions. Some of the major changes and enhancements include the ability to leverage local direct attached storage (DAS) such as internal (or external) dedicated NVMe, SAS and SATA HDDs, as well as flash SSDs, for creating software defined storage for various scenarios.

    Scenarios include converged infrastructure (CI) disaggregated as well as aggregated hyper-converged infrastructure (HCI) for Hyper-V among other workloads. Windows Server 2016 S2D nodes communicate (from a storage perspective) via a software storage bus. Data protection and availability are enabled between S2D nodes via Storage Replica (SR), which can do software based synchronous and asynchronous replication.
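    As a simple illustration, the following is a minimal PowerShell sketch (hypothetical node and volume names) of standing up S2D on a Windows Server 2016 failover cluster; a production deployment involves additional validation, networking and volume design steps.

    # Hedged sketch (hypothetical node names) of enabling Storage Spaces Direct on a four node cluster
    Test-Cluster -Node S2D-N1, S2D-N2, S2D-N3, S2D-N4 -Include 'Storage Spaces Direct', 'Inventory', 'Network', 'System Configuration'
    New-Cluster -Name S2DCLU -Node S2D-N1, S2D-N2, S2D-N3, S2D-N4 -NoStorage
    Enable-ClusterStorageSpacesDirect
    New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'Volume1' -FileSystem CSVFS_ReFS -Size 1TB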


    Aggregated – Hyper-Converged Infrastructure (HCI) – Source Microsoft.com


    Disaggregated – Converged Infrastructure (CI) – Source Microsoft.com

    The following is a Microsoft produced YouTube video providing a nice overview and insight into Windows Server 2016 and Microsoft Software Defined Storage aka S2D.




    YouTube Video Storage Spaces Direct (S2D) via Microsoft.com

    Server storage I/O performance

    What About Performance?

    A common question that comes up with servers, storage, I/O and software defined data infrastructure is what about performance?

    Following are various links to different workloads showing performance for Hyper-V, S2D and Windows Server (a simple Diskspd command sketch follows the list below). Note that as with any benchmark, workload or simulation, take them for what they are, something to compare that may or may not be applicable to your own workloads and environments.

    • Large scale VM performance with Hyper-V and in-memory transaction processing (Via Technet)
    • Benchmarking Microsoft Hyper-V server, VMware ESXi and Xen Hypervisors (Via cisjournal PDF)
    • Server 2016 Impact on VDI User Experience (Via LoginVSI)
    • Storage IOPS update with Storage Spaces Direct (Via TechNet)
    • SQL Server workload (benchmark) Order Processing Benchmark using In-Memory OLTP (Via Github)
    • Setting up testing Windows Server 2016 and S2D using virtual machines (Via MSDN blogs)
    • Storage throughput with Storage Spaces Direct S2D TP5 (Via TechNet)
    • Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I)
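
    As referenced in the list above, here is a minimal Diskspd sketch to give a flavor of what a quick test run looks like. The target volume and test file are hypothetical examples; adjust block size, read/write mix, thread count and duration to approximate your own workload rather than treating these values as meaningful defaults.

        # Minimal sketch, assuming diskspd.exe is in the current directory and
        # T: is a scratch volume you can safely write to (both are assumptions).
        # 4KB random I/O, 70% read / 30% write, 8 threads, 32 outstanding I/Os,
        # 60 seconds against a 10GB test file, caching disabled, latency captured.
        .\diskspd.exe -b4K -r -w30 -t8 -o32 -d60 -Sh -L -c10G T:\testfile.dat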

    Where To Learn More

    For those of you not as familiar with Microsoft Windows Server and related topics, or that simply need a refresh, here are several handy links as well as resources.

    • Introducing Windows Server 2016 (Free ebook from Microsoft Press)
    • What’s New in Windows Server 2016 (Via TechNet)
    • Microsoft S2D Software Storage Bus (Via TechNet)
    • Understanding Software Defined Storage with S2D in Windows Server 2016 (Via TechNet)
    • Microsoft Storage Replica (SR) (Via TechNet)
    • Server and Storage I/O Benchmark Tools: Microsoft Diskspd (Part I)
    • Microsoft Windows S2D Software Defined Storage (Via TechNet)
    • Windows Server 2016 and Active Directory (Redmond Magazine Webinar)
    • Data Protection for Modern Microsoft Environments (Redmond Magazine Webinar)
    • Resilient File System aka ReFS (Via TechNet)
    • DISKSPD now on GitHub, and the mysterious VMFLEET released (Via TechNet)
    • Hyper-converged solution using Storage Spaces Direct in Windows Server 2016 (Via TechNet)
    • NVMe, SSD and HDD storage configurations in Storage Spaces Direct TP5 (Via TechNet)
    • General information about SSD at www.thessdplace.com and NVMe at www.thenvmeplace.com
    • How to run nested Hyper-V and Windows Server 2016 (Via Altaro and via MSDN)
    • How to run Nested Windows Server and Hyper-V on VMware vSphere ESXi (Via Nokitel)
    • Get the Windows Server 2016 evaluation bits here
    • Microsoft Azure Stack overview and related material via Microsoft
    • Introducing Windows Server 2016 (Via MicrosoftPress)
    • Various Windows Server and S2D lab scripts (Via Github)
    • Storage Spaces Direct – Lab Environment Setup (Via Argon Systems)
    • Setting up S2D with a 4 node configuration (Via StarWind blog)
    • SQL Server workload (benchmark) Order Processing Benchmark using In-Memory OLTP (Via Github)
    • Setting up testing Windows Server 2016 and S2D here using virtual machines (Via MSDN blogs)
    • Hyper-V large-scale VM performance for in-memory transaction processing (Via Technet)
    • BrightTalk Webinar – Software-Defined Data Centers (SDDC) are in your Future (if not already here)
    • Microsoft TechNet: Understand the cache in Storage Spaces Direct
    • BrightTalk Webinar – Software-Defined Data Infrastructures Enabling Software-Defined Data Centers
    • Happy 20th Birthday Windows Server, ready for Server 2016?
    • Server StorageIO resources including added links, tools, reports, events and more.

    What This All Means

    While Microsoft Windows Server recently celebrated its 20th birthday (or anniversary), a lot has changed as well as evolved. This includes Windows Server 2016 supporting new deployment and consumption models (e.g. lightweight Nano, full data center with desktop interface, on-premises, bare metal, virtualized (Hyper-V, VMware, etc.) as well as cloud). Besides how it is consumed and configured, which can also be in CI and HCI modes, Windows Server 2016 along with Hyper-V extends the virtualization and container capabilities into non-Microsoft environments, specifically around Linux and Docker. Not only is the support for those environments and platforms enhanced, so too are the management capabilities and interfaces, from PowerShell to the Bash Linux shell being part of Windows 10 and Server 2016.

    What this all means is that if you have not looked at Windows Server in some time, it's time you do. Even if you are not a Windows or Microsoft fan, you will want to know what has been updated (perhaps even update your FUD if that is the case) to stay current. Get your hands on the bits and try Windows Server 2016 on a bare metal server, as a VM guest, or via cloud including Azure, or simply leverage the above resources to learn more and stay informed.

    Ok, nuff said, for now…

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, vSAN and VMware vExpert. Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier) and twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    NVMe Place NVM Non Volatile Memory Express Resources

    Updated 8/31/19
    NVMe place server Storage I/O data infrastructure trends

    Welcome to NVMe place NVM Non Volatile Memory Express Resources. NVMe place is about Non Volatile Memory (NVM) Express (NVMe) with Industry Trends Perspectives, Tips, Tools, Techniques, Technologies, News and other information.

    Disclaimer

    Please note that this NVMe place resources site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks.

    NVM Express Organization
    Image used with permission of NVM Express, Inc.

    Visit the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

     

    The NVMe Place resources and NVM including SCM, PMEM, Flash

    NVMe place covers Non Volatile Memory (NVM) including nand flash, storage class memories (SCM) and persistent memories (PM), which are storage memory mediums, while NVM Express (NVMe) is an interface for accessing NVM. This NVMe resources page is a companion to The SSD Place, which has a broader Non Volatile Memory (NVM) focus including flash among other SSD topics. NVMe is a newer server storage I/O access method and protocol for fast access to NVM based storage and memory technologies. NVMe is an alternative to existing block based server storage I/O access protocols such as AHCI/SATA and SCSI/SAS commonly used for accessing Hard Disk Drives (HDD) along with SSDs among other things.

    Server Storage I/O NVMe PCIe SAS SATA AHCI
    Comparing AHCI/SATA, SCSI/SAS and NVMe all of which can coexist to address different needs.

    Leveraging the standard PCIe hardware interface, NVMe based devices (those with an NVMe controller) can be accessed via various operating systems (and hypervisors such as VMware ESXi) with either in-box drivers or optional third-party device drivers. Devices that support NVMe can be packaged in a 2.5″ drive format using a converged 8637/8639 connector (e.g. PCIe x4), coexisting with SAS and SATA devices, as well as being add-in card (AIC) PCIe cards supporting x4, x8 and other implementations. Initially, NVMe is being positioned as a back-end interface to servers (or storage systems) for accessing fast flash and other NVM based devices.

    NVMe as back-end storage
    NVMe as a “back-end” I/O interface for NVM storage media

    NVMe as front-end server storage I/O interface
    NVMe as a “front-end” interface for servers or storage systems/appliances

    NVMe has also been shown to work over low latency, high-speed RDMA based network interfaces including RoCE (RDMA over Converged Ethernet) and InfiniBand (read more here, here and here involving Mangstor, Mellanox and PMC among others). What this means is that like SCSI based SAS, which can be both a back-end drive (HDD, SSD, etc.) access protocol and interface, NVMe can be used on the back-end as well as a front-end server to storage interface, similar to how Fibre Channel SCSI_Protocol (aka FCP), SCSI based iSCSI and the SCSI RDMA Protocol via InfiniBand (among others) are used.

    NVMe features

    Main features of NVMe include among others:

    • Lower latency due to improved drivers and increased queues (and queue sizes)
    • Lower CPU usage to handle larger numbers of I/Os (more CPU available for useful work)
    • Higher I/O activity rates (IOPs) to boost productivity and unlock the value of fast flash and NVM
    • Bandwidth improvements leveraging various fast PCIe interfaces and available lanes
    • Dual-pathing of devices similar to what is available with dual-path SAS devices
    • Unlock the value of more cores per processor socket and software threads (productivity)
    • Various packaging options, deployment scenarios and configuration options
    • Appears as a standard storage device on most operating systems
    • Plug-and-play with in-box drivers on many popular operating systems and hypervisors (see the sketch following this list)
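
    As a small illustration of that plug-and-play behavior, here is a minimal Windows PowerShell sketch showing NVMe devices surfacing as ordinary disks. It assumes a Windows Server 2016 or later system with at least one NVMe device present.

        # Minimal sketch, Windows Server 2016 or later with in-box NVMe support
        Get-PhysicalDisk |
            Where-Object BusType -eq 'NVMe' |
            Select-Object FriendlyName, MediaType, BusType,
                @{Name='SizeGB'; Expression={[math]::Round($_.Size / 1GB)}}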

    Shared external PCIe using NVMe
    NVMe and shared PCIe (e.g. shared PCIe flash DAS)

    NVMe related content and links

    The following are some of my tips, articles, blog posts, presentations and other content, along with material from others pertaining to NVMe. Keep in mind that the question should not be if NVMe is in your future, rather when, where, with what, from whom and how much of it will be used as well as how it will be used.

    • How to Prepare for the NVMe Server Storage I/O Wave (Via Micron.com)
    • Why NVMe Should Be in Your Data Center (Via Micron.com)
    • NVMe U2 (8639) vs. M2 interfaces (Via Gamersnexus)
    • Enmotus FuzeDrive MicroTiering (StorageIO Lab Report)
    • EMC DSSD D5 Rack Scale Direct Attached Shared SSD All Flash Array Part I (Via StorageIOBlog)
    • Part II – EMC DSSD D5 Direct Attached Shared AFA (Via StorageIOBlog)
    • NAND, DRAM, SAS/SCSI & SATA/AHCI: Not Dead, Yet! (Via EnterpriseStorageForum)
    • Non Volatile Memory (NVM), NVMe, Flash Memory Summit and SSD updates (Via StorageIOblog)
    • Microsoft and Intel showcase Storage Spaces Direct with NVM Express at IDF ’15 (Via TechNet)
    • NVM Express solutions (Via SuperMicro)
    • Gaining Server Storage I/O Insight into Microsoft Windows Server 2016 (Via StorageIOblog)
    • PMC-Sierra Scales Storage with PCIe, NVMe (Via EEtimes)
    • RoCE updates among other items (Via InfiniBand Trade Association (IBTA) December Newsletter)
    • NVMe: The Golden Ticket for Faster Flash Storage? (Via EnterpriseStorageForum)
    • What should I consider when using SSD cloud? (Via SearchCloudStorage)
    • MSP CMG, Sept. 2014 Presentation (Flash back to reality – Myths and Realities – Flash and SSD Industry trends perspectives plus benchmarking tips)– PDF
    • Selecting Storage: Start With Requirements (Via NetworkComputing)
    • PMC Announces Flashtec NVMe SSD NVMe2106, NVMe2032 Controllers With LDPC (Via TomsITpro)
    • Exclusive: If Intel and Micron’s “Xpoint” is 3D Phase Change Memory, Boy Did They Patent It (Via Dailytech)
    • Intel & Micron 3D XPoint memory — is it just CBRAM hyped up? Curation of various posts (Via Computerworld)
    • How many IOPS can a HDD, HHDD or SSD do (Part I)?
    • How many IOPS can a HDD, HHDD or SSD do with VMware? (Part II)
    • I/O Performance Issues and Impacts on Time-Sensitive Applications (Via CMG)
    • Via EnterpriseStorageForum: 5 Hot Storage Technologies to Watch
    • Via EnterpriseStorageForum: 10-Year Review of Data Storage

    Non-Volatile Memory (NVM) Express (NVMe) continues to evolve as a technology for enabling and improving server storage I/O for NVM including nand flash SSD storage. NVMe streamlines performance, enabling more work to be done (e.g. IOPs) and more data to be moved (bandwidth) at lower response times using less CPU.

    NVMe and SATA flash SSD performance

    The above figure is a quick look comparing a nand flash SSD being accessed via SATA III (6Gbps) on the left and NVMe (x4) on the right. As with any server storage I/O performance comparison there are many variables, so take them with a grain of salt. While IOPs and bandwidth are often discussed, keep in mind that because the NVMe protocol, drivers and device controllers streamline I/O, less CPU is needed.

    Additional NVMe Resources

    Also check out the Server StorageIO companion micro sites landing pages including thessdplace.com (SSD focus), data protection diaries (backup, BC/DR/HA and related topics), cloud and object storage, and server storage I/O performance and benchmarking here.

    If you are into the real bits and bytes details, such as device driver level content, check out the Linux NVMe reflector forum. The linux-nvme forum is a good source if you are a developer looking to stay up on what is happening in and around device drivers and associated topics.

    Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    Disclaimer

    Disclaimer: Please note that this site is independent of the industry trade and promoters group NVM Express, Inc. (e.g. www.nvmexpress.org). NVM Express, Inc. is the sole owner of the NVM Express specifications and trademarks. Check out the NVM Express industry promoters site here to learn more about their members, news, events, product information, software driver downloads, and other useful NVMe resources content.

    NVM Express Organization
    Image used with permission of NVM Express, Inc.

    Wrap Up

    Watch for updates with more content, links and NVMe resources to be added here soon.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Breaking the VMware ESXi 5.5 ACPI boot loop on Lenovo TD350

    Storage I/O trends

    Breaking the VMware ESXi 5.5 ACPI boot loop on Lenovo TD350

    Do you have a Lenovo TD350, or for that matter many other servers, that when trying to load or run VMware vSphere ESXi 5.5 U2 (or other versions) runs into a boot loop at the "Initializing ACPI" point?

    Lenovo TD350 server

    VMware ACPI boot loop

    The symptoms are that you see ESXi start its boot process, loading drivers and modules (e.g. the black screen), then you see the yellow boot screen with the timer and scheduler initialized, and at the "Initializing ACPI" point, ka boom, a boot loop starts (e.g. the above process repeats after the system reboots).

    The fix is actually pretty quick and simple; finding it, however, took a bit of time, trial and error.

    There were of course the usual suspects such as:

    • Checking the BIOS and firmware version of the motherboard on the Lenovo TD350 (checked this, however did not upgrade)
    • Making sure that the proper VMware ESXi patches and updates were installed (they were, this was a pre-built image from another working server)
    • Having the latest installation media if this was a new install (tried this as part of troubleshooting to make sure the pre-built image was ok)
    • Remove any conflicting devices (small diversion hint: make sure if you have cloned a working VMware image to an internal drive that it is removed to avoid same file system UUID errors)
    • Booting into the BIOS, making sure that VT is enabled for the processor, that AHCI is enabled for any SATA drives as opposed to IDE or RAID, and that boot is set to Legacy vs. Auto (e.g. disable UEFI support), as well as verifying the boot order. Having been in auto mode for UEFI support for some other activity, this was easy to change, however it was not the magic silver bullet I was looking for.

    Breaking the VMware ACPI boot loop on Lenovo TD350

    After doing some searching and coming up with some interesting and false leads, as well as trying several boots, BIOS configuration changes, and even cloning the good VMware ESXi boot image to an internal drive in case there was a USB boot issue, the solution was rather simple once found (or remembered).

    Lenovo TD350 Basic BIOS settings
    Lenovo TD350 BIOS basic settings

    Lenovo TD350 processor BIOS settings
    Lenovo TD350 processor settings

    Make sure that in your BIOS setup under PCIe you disable "Above 4GB decoding".

    Turns out that I had enabled "Above 4GB decoding" for some other things I had done.

    Lenovo TD350 fix VMware ACPI error
    Lenovo TD350 disabling above 4GB decoding on PCIE under advanced settings

    Once I made the above change and pressed F10 to save the BIOS settings and reboot, VMware ESXi had no issues getting past ACPI initialization and the boot loop was broken.

    Where to read, watch and learn more

    • Lenovo TS140 Server and Storage I/O lab Review
    • Lenovo ThinkServer TD340 Server and StorageIO lab Review
    • Part II: Lenovo TS140 Server and Storage I/O lab Review
    • Software defined storage on a budget with Lenovo TS140

    Storage I/O trends

    What this all means and wrap up

    In this day and age of software defined focus, remember to double-check how your hardware BIOS (e.g. software) is defined for supporting various software defined server, storage, I/O and networking software for cloud, virtual, container and legacy environments. Watch for future posts with my experiences using the Lenovo TD350 including with Windows 2012 R2 (bare metal and virtual), Ubuntu (bare metal and virtual) with various application workloads among other things.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    VMware VVOLs storage I/O fundamentals (Part 1)

    VMware VVOL’s storage I/O fundamentals (Part I)

    Note that this is a three part series with the first piece here (e.g. Are VMware VVOL's in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOL's and storage I/O fundamentals Part 2).

    Some of you may already be participating in the VMware beta of VVOL involving one of the initial storage vendors also in the beta program.

    Ok, now let’s go a bit deeper, however if you want some good music to listen to while reading this, check out @BruceRave GoDeepMusic.Net and shows here.

    Taking a step back, digging deeper into Storage I/O and VVOL’s fundamentals

    Instead of a VM host accessing its virtual disk (aka VMDK) stored in a VMFS formatted data store (part of the ESXi hypervisor) built on top of a SCSI LUN (e.g. SAS, SATA, iSCSI, Fibre Channel aka FC, FCoE aka FC over Ethernet, IBA/SRP, etc.) or an NFS file system presented by a storage system (or appliance), VVOL's push more functionality and visibility down into the storage system. VVOL's shift more intelligence and work from the hypervisor down into the storage system. Instead of a storage system simply presenting a SCSI LUN or NFS mount point and having limited (coarse) to no visibility into how the underlying storage bits, bytes and blocks are being used, storage systems gain more awareness.

    Keep in mind that even files and objects still ultimately get mapped to pages and blocks aka sectors, even on nand flash-based SSD's. However, also keep an eye on some newer technology such as the Seagate Kinetic drive that, instead of responding to SCSI block based commands, leverages object API's and associated software on servers. Read more about these emerging trends here and here at objectstoragecenter.com.

    With a normal SCSI LUN the underlying storage system has no knowledge of how the upper level operating system, hypervisor, file system or application such as a database (doing raw IO) is allocating the pages or blocks of memory aka storage. It is up to the upper level storage and data management tools to map from objects and files to the corresponding extents, pages and logical block address (LBA) understood by the storage system. In the case of a NAS solution, there is a layer of abstractions placed over the underlying block storage handling file management and the associated file to LBA mapping activity.
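
    As a simple illustration of that mapping, the arithmetic is just a byte offset divided by the device sector size. The following PowerShell snippet is a minimal sketch assuming a contiguous allocation and a 512 byte sector (4K native devices would use 4096 instead).

        # Minimal sketch of mapping a byte offset to a logical block address (LBA)
        $sectorSize = 512
        $byteOffset = 1048576          # e.g. an extent starting 1MB into the LUN
        $lba = [math]::Floor($byteOffset / $sectorSize)
        "Byte offset $byteOffset maps to LBA $lba"   # -> LBA 2048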

    Storage I/O basics
    Storage I/O and IOP basics and addressing: LBA’s and LBN’s

    Getting back to VVOL's, instead of simply presenting a LUN, which is essentially a linear range of LBA's (think of a big table or array) on which the hypervisor then manages data placement and access, the storage system now gains insight into which LBA's correspond to various entities such as a VMDK or VMX, log, clone, swap or other VMware objects. With this added insight, storage systems can now do native and more granular functions such as clone, replication and snapshot among others, as opposed to simply working on a coarse LUN basis. Similar concepts extend over to NAS NFS based access. Granted, there is more to VVOL's, including the ability to get the underlying storage system more closely integrated with the virtual machine, hypervisor and associated management, including service management and classes or categories of service across performance, availability, capacity and economics.

    What about VVOL, VAAI and VASA?

    VVOL's build on earlier VMware initiatives including VAAI and VASA. With VAAI, VMware hypervisors can off-load common functions such as copy, clone and zero copy among others to storage systems that support those features, similar to how a computer can off-load graphics processing to a graphics card if present.
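
    As a quick way to see whether those VAAI off-loads are turned on for a given host, here is a minimal VMware PowerCLI sketch. The vCenter and host names are hypothetical examples, and the settings shown are the common block primitives (a value of 1 means enabled).

        # Minimal sketch, assuming VMware PowerCLI is installed and the names
        # below are replaced with your own vCenter and ESXi host.
        Connect-VIServer -Server vcenter.example.local

        $vmhost = Get-VMHost -Name esxi01.example.local
        $vaaiSettings = 'DataMover.HardwareAcceleratedMove',
                        'DataMover.HardwareAcceleratedInit',
                        'VMFS3.HardwareAcceleratedLocking'

        $vaaiSettings | ForEach-Object {
            Get-AdvancedSetting -Entity $vmhost -Name $_
        } | Select-Object Name, Value    # 1 = enabled, 0 = disabled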

    VASA however provides a means for visibility, insight and awareness between the hypervisor and its associated management (e.g. vCenter etc) as well as the storage system. This includes storage systems being able to communicate and publish to VMware its capabilities for storage space capacity, availability, performance and configuration among other things.

    With VVOL's, VASA gets leveraged for bidirectional (e.g. two-way) communication where the VMware hypervisor and management tools can tell the storage system about things such as configuration and activities to do, among others. Hence why VASA is important to have in your VMware CASA.

    What’s this object storage stuff?

    VVOL's are a form of object storage access in that they differ from traditional block (LUN's) and files (NAS volumes/mount points). However, keep in mind that not all object storage is the same, as there are different object storage access methods and architectures.

    object storage
    Object Storage basics, generalities and block file relationships

    Avoid making the mistake of assuming that when you hear object storage it means ANSI T10 (the folks that manage the SCSI command specifications) Object Storage Device (OSD) or something else. There are many different types of underlying object storage architectures, some with block and file as well as object access front ends. Likewise there are many different types of object access that sit on top of object architectures as well as traditional storage systems.

    Object storage I/O
    An example of how some object storage gets accessed (not VMware specific)

    Also keep in mind that there are many different types of object access mechanisms including HTTP REST based, S3 (e.g. a common industry de facto standard based on the Amazon Simple Storage Service), SNIA CDMI, SOAP, Torrent, XAM, JSON, XML, DICOM and HL7 just to name a few, not to mention various programmatic bindings or application specific implementations and API's. Read more about object storage architectures, access and related topics, themes and trends at www.objectstoragecenter.com
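
    To make the access side a bit more concrete, here is a minimal PowerShell sketch of reading an object over plain HTTP REST. The endpoint, bucket and object names are hypothetical examples, and note that real S3 style access to private buckets also requires authentication and request signing, which is omitted here for brevity.

        # Minimal sketch, assuming a publicly readable object on a hypothetical
        # S3 compatible endpoint (no authentication or request signing shown).
        $url = "https://objects.example.com/demo-bucket/sample.txt"

        # GET retrieves the object contents
        $object = Invoke-RestMethod -Uri $url -Method Get

        # HEAD retrieves metadata such as size and content type
        $meta = Invoke-WebRequest -Uri $url -Method Head
        $meta.Headers['Content-Length']
        $meta.Headers['Content-Type']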

    Let's take a break here, and when you are ready, click here to read the third piece in this series, VMware VVOL's and storage I/O fundamentals Part 2.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Are VMware VVOLs in your virtual server and storage I/O future?

    Are VMware VVOL’s in your virtual server and storage I/O future?

    Note that this is a three part series with the first piece here (e.g. Are VMware VVOL’s in your virtual server and storage I/O future?), the second piece here (e.g. VMware VVOL’s and storage I/O fundamentals Part 1) and the third piece here (e.g. VMware VVOL’s and storage I/O fundamentals Part 2).

    With VMworld 2014 just around the corner, for some of you the question is not if Virtual Volumes (VVOL’s) are in your future, rather when, where, how and with what.

    What this means is that for some, hands-on beta testing is already occurring or will be soon, while for others it might be around the corner or down the road.

    Some of you may already be participating in the VMware beta of VVOL involving one of the first storage vendors also in the beta program.

    VMware vvol beta

    On the other hand, some of you may not be in VMware centric environments and thus VVOL’s may not yet be in your vocabulary.

    How do you know if VVOL are in your future if you don’t know what they are?

    First, to be clear: as of the time this was written, VMware VVOL's had not been released and were only in beta, as well as having been covered at earlier VMworld's. Consequently what you are going to read here is based on VVOL material that has already been made public in various venues including earlier VMworld's and VMware blogs among other places.

    The quick synopsis of VMware VVOL’s overview:

  • Higher level of abstraction of storage vs. traditional SCSI LUN’s or NAS NFS mount points
  • Tighter level of integration and awareness between VMware hypervisors and storage systems
  • Simplified management for storage and virtualization administrators
  • Removing complexity to support increased scaling
  • Enable automation and service managed storage aka software defined storage management

    VVOL considerations and your future

    As mentioned, as of this writing, VVOL’s are still a future item granted they exist in beta.

    For those of you in VMware environments, now is the time to add VVOL to your vocabulary which might mean simply taking the time to read a piece like this, or digging deeper into the theories of operations, configuration, usage, hints and tips, tutorials along with vendor specific implementations.

    Explore your options, and ask yourself: do you want VVOL's, or do you need them?

    What support do your current vendor(s) have for VVOL's, or what is their statement of direction (SOD), which you might have to get from them under NDA?

    This means that there will be some initial vendors with some of their products supporting VVOL's, with more vendors and products following (hence watch for many statement of direction announcements).

    Speaking of vendors, watch for a growing list of vendors to announce their current or plans for supporting VVOL’s, not to mention watch some of them jump up and down like Donkey in Shrek saying "oh oh pick me pick me".

    When you ask a vendor if they support VVOL's, move beyond the simple yes or no; ask which of their specific products, whether it is block (e.g. iSCSI) or NAS file (e.g. NFS) based, and about other caveats or configuration options.

    Watch for more information about VVOL’s in the weeks and months to come both from VMware along with from their storage provider partners.

    How will VVOL's impact your organization's best practices, policies and workflows, including who does what, along with associated responsibilities?

    Where to learn more

    Check out the companion piece to this that takes a closer look at storage I/O and VMware VVOL fundamentals here and here.

    Also check out this good VMware blog via Cormac Hogan (@CormacJHogan) that includes a video demo; granted it's from 2012, however some of this stuff actually does take time and thus it is very timely. Speaking of VMware, Duncan Epping (aka @DuncanYB) at his Yellow-Bricks site has some good posts to check out as well with links to others including this here. Also check out the various VVOL related sessions at VMworld as well as the many existing, and soon to be many more blogs, articles and videos you can find via Google. And if you need a refresher, Why VASA is important to have in your VMware CASA.

    Of course keep an eye here, or whichever venue you happen to read this in, for future follow-up and companion posts, and if you have not done so, sign up for the beta here as there is lots of good material including SDKs, configuration guides and more.

    VVOL Poll

    What are your VVOL plans? View results and cast your vote here.

    Wrap up (for now)

    Hope you found this quick overview of VVOL's of use. Since VVOL's at the time of this writing are not yet released, you will need to wait for more detailed info, join the beta, or poke around the web (for now).

    Keep an eye on and learn more about VVOL’s at VMworld 2014 as well as in various other venues.

    IMHO VVOL’s are or will be in your future, however the question will be is there going to be a back to the future moment for some of you with VVOL’s?

    Also what VVOL questions, comments and concerns are in your future and on your mind?

    And remember to check out the second part to this series here.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    iVMcontrol iPhone VMware management, iTool or iToy?

    Storage I/O trends

    iVMcontrol iPhone VMware management, iTool or iToy?

    A few months back I was looking for a simple, easy to use yet robust tool for accessing and managing my VMware environment from my iPhone. The reason is that I don't always like to carry a laptop or tablet around, not to mention that neither fits in a pocket very well. Needless to say there are many options for accessing VMware products and implementations that run on tablets including iPads, as well as laptops among others.

    Why do I need iVMcontrol

    I wanted something from my iPhone that I could quickly access to check on a VM guest, start or stop things, and get status updates if or when needed. Also keep in mind that this would be a tool that would not be used constantly throughout the day, maybe at best once or twice a week, hence it needed to be affordable as well. At $9.99 USD the tool I found and selected (iVMcontrol) was not free, however I have gotten that value out of the tool already in just a few months of having it.

    As mentioned, the tool is iVMcontrol which you can get from the iTunes store (here’s the link).

    Storage I/O IVM on iPhone
    View of iVMcontrol from iPhone

    Granted iVMcontrol is not the same as other apps for full-sized tablets or laptops, however for an iPhone it's not bad! In fact, other than a few nuances, namely using a virtual mouse, it's pretty good for what I use it for.

    The key is that while I use the vSphere client or vCenter browser for real activities, iVMcontrol serves a different purpose. That purpose is, for example, if I just need to check on something or do basic functions without having to get the laptop out or something else. Even in the lab, if I'm making a change or need to start or stop things and forget the laptop in another room, no worries, simply use the iPhone.

    Sure, using a tablet would be easier, however I usually don't carry a tablet in my pocket.

    How often do I use iVMcontrol?

    It depends, however usually a couple of times a week depending on what I'm doing.

    For example, if I need to quickly check on a guest VM, start or stop something, or do a general status check, iVMcontrol has come in handy.

    Storage I/O IVM main screen
    Various VMware hosts (PM’s) in a VMware datacenter

    Storage I/O IVM main screen
    Various Guest VMs on VMware host (PM)

    iVM VMware storage I/O space
    VMware host storage space capacity usage

    Storage I/O IVM main screen
    Managing a guest VM

    iVM Windows guest
    Accessing Windows Guest VM via iVMcontrol

    iVM Windows guest storage I/O activity
    Checking on Windows Guest Storage I/O activity

    As you can see the screen is small; sure, you can zoom in, thus it is good for checking in on activity or doing basic things. However for more involved activity, that's where a tablet or regular computer comes into play, accessing the VM guests or VMware using the vSphere Client or vCenter web client type tools.

    Is iVMcontrol an iTool or iToy?

    IMHO it's a tool, granted it's also a fun toy.

    Is a tool such as iVMcontrol a necessity, or a nice to have for when I need to check on something quickly?

    That depends on your needs vs. wants.

    For me, it is a convenience tool to have when I need it, however just because I have it does not mean I have to use it all the time.

    Ok, nuff said (for now)

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Dell Inspiron 660 i660, Virtual Server Diamond in the rough?

    Storage I/O trends

    Dell Inspiron 660 i660, Virtual Server Diamond in the rough?

    During the 2013 post-Thanksgiving Black Friday shopping day, I did some online buying including a Dell Inspiron 660 i660 (5629BK) to be used as a physical machine (PM) or VMware host (among other things).

    Now technically I know, this is a workstation or desktop and thus not what some would consider a server, however as another PM to add to my VMware environment (or be used as a bare metal platform), it is a good companion to my other systems.

    Via Dell.com Dell 660 i660

    Taking a step back, needs vs. wants

    Initially my plan for this other system was to go with a larger, more expensive model with as many DDR3 DIMM (memory) and PCIe x4/x8/x16 expansion slots as possible. Some of my other criteria were PCIe Gen 3, latest Intel processor generation with VT (Virtualization Technology) and Extended Page Tables (EPT) for server virtualization support without breaking my budget. Heck, I would love a Dell VRTX or some similar types of servers from the likes of Cisco, HP, IBM, Lenovo, Supermicro among many others. On the other hand, I really don’t need one of those types of systems yet, unless of course somebody wants to send some to play with (excuse me, test drive, try-out).

    Hence needs are what I must have or need, while wants are those things that would be, well, nice to have.

    Server shopping and selection

    In the course of shopping around and looking at alternatives, I had previously talked with Robert Novak (aka @gallifreyan), who reminded me to think outside the box a bit, literally. Check out Robert's blog (aka rsts11, a great blog name btw for those of us who used to work with RSTS, RSX and others) including a post he did shortly after I had a conversation with him. If you read his post and continue through this one, you should be able to connect the dots.

    While I still have a need and plans for another server with more PCIe and DDR3 (maybe wait for DDR4? ;) ) slots, I found a Dell Inspiron 660.

    Candidly normally I would have skipped over this type or class of system, however what caught my eye was that while limited to only two DDR3 DIMM slots and a single PCIe x16 slot, there were three extra x1 slots which while not as robust, certainly gave me some options if I need to use those for older, slower things. Likewise leveraging higher density DIMM’s, the system is already now at 16GB RAM waiting for larger DIMM’s if needed.

    VMware view of Inspiron 600

    The Dell Inspiron 660-i660 I found had a price of a little over $550 (delivered) with an Intel i5-3330 processor (quad-core, quad-thread, 3GHz clock), PCIe Gen 3, one PCIe x16 and three PCIe x1 slots, 8GB DRAM (since reallocated), a GbE port and built-in WiFi, Windows 8 (since P2V'd and moved into the VMware environment), keyboard and mouse, plus a 1TB 6Gb SATA drive. I could afford two, maybe three or four of these in place of a larger system (at least for now). While for some things I have a need for a single larger server, there are other things where having multiple smaller ones with enough processing performance, VT and EPT support comes in handy (if not required for some virtual servers).

    Some of the enhancements that I made were, once the initial setup of the Windows system was complete, doing a clone and P2V of that image, and then redeploying the 1TB SATA drive to join others in the storage pool. Thus the 1TB SATA HDD has been replaced with (for now) a 500GB Momentus XT HHDD, which by the time you read this could have already changed to something else.

    Another enhancement was bumping up the memory from 8GB to 16GB, and then adding a StarTech enclosure (see below) for more internal SAS/SATA storage (it supports both 2.5" SAS and SATA HDD's as well as SSD's). In addition to the on-board SATA drive port, plus one being used for the CD/DVD, there are two more ports for attaching to the StarTech or other large 3.5" drives that live in the drive bay. Depending on what I'm using this system for, it takes different types of adapters for external expansion or networking, some of which have already included 6Gbps and 12Gbps SAS HBA's.

    What about adding more GbE ports?

    As this is not a general purpose larger system with many expansion ports for PCIe slots, that is one of the downsides you get for this cost. However depending on your needs, you have some options. For example I have some Intel PCIe x1 GbE cards to give extra networking connectivity if or when needed. Note however that as these are PCIe x1 slots they are PCIe Gen 1 so from a performance perspective exercise caution when mixing these with other newer, faster cards when performance matters (more on this in the future).

    Via Amazon.com Intel PCIe x1 GbE card
    Via Amazon.com Intel (Gigabit CT PCI-E Network Adapter EXPI9301CTBLK)

    One of the caveats to be aware of if you are going to be using VMware vSphere/ESXi is that the Realtek GbE NIC on the Dell Inspiron D600-i660 may not play well, however there are workarounds. Check out some of the workarounds over at Kendrick Coleman's (@KendrickColeman) and Erik Bussink's (@ErikBussink) sites, both of which were very helpful, and I can report that the Realtek GbE is working fine with VMware ESXi 5.5a.

    Need some extra SAS and SATA internal expansion slots for HDD and SSD’s?

    The StarTech 4 x 2.5″ SAS and SATA internal enclosure supports various speed SSD's and HDD's depending on what you connect the back-end connector port to. On the back of the enclosure chassis there is a connector that is a pass-thru to the SAS drive interface and also accepts SATA drives. This StarTech enclosure fits nicely into an empty 5.25″ CD/DVD expansion bay; you then attach the individual drive bays to your internal motherboard SAS or SATA ports, or to those on another adapter.

    Via Amazon.com StarTech 4 port SAS / SATA enclosure
    Via Amazon.com StarTech 4 x 2.5" SAS and SATA internal enclosure

    So far I have used these enclosures attached to various adapters at different speeds, as well as with HDD, HHDD, SSHD and SSD's at various SAS/SATA interface speeds up to 12Gbps. Note that unlike some other enclosures that have a SAS or SATA expander, the drive bays in the StarTech are pass-thru, hence they are not regulated by an expander chip and its speed. The price for these StarTech enclosures is around $60-90 USD, and they are good for internal storage expansion (hmm, need to build your own NAS or VSAN or storage server appliance? ;) ).

    Via Amazon Molex power connector

    Note that you will also need to get a Molex power connector to go from the back of the drive enclosure to an available power port, such as one for an expansion DVD/CD, which you can find at Radio Shack, Fry's or many other venues for a couple of dollars. Double-check your specific system and cable connector leads to verify what you will need.

    How is it working and performing

    So far so good; in addition to using it for some initial calibration and validation activities, the D660 is performing very well and there is no buyer's remorse. Ok, sure, I would like more PCIe Gen 3 x4/x8/x16 slots or an extra on-board Ethernet port, however all the other benefits have outweighed those pitfalls.

    Speaking of which, if you think a SSD (or other fast storage device) is fast on a 6Gbps SAS or PCIe Gen 2 interface for physical or virtual servers, wait until you experience those IOPs or latencies at 12Gbps SAS and PCIe Gen 3 with a faster current generation Intel processor, just saying ;)…

    Server and Storage I/O IOPS and VMware
    

    In the above chart (slide the scroll bar to view more to the right), a Windows 7 64-bit system (VMs configured with 14GB DRAM) on VMware vSphere V5.5.1 is shown running on different hardware configurations. The Windows system is running Futuremark PCMark 7 Pro (v1.0.4). From left to right: the Windows VM on the Dell Inspiron 660 with 16GB physical DRAM using an SSHD (Solid State Hybrid Drive); second from the left shows results running on a Dell T310 with an Intel X3470 processor, also on an SSHD; in the middle is the workload on the Dell 660 running on an HHDD; second from the right is the workload on the Dell T310, also on an HHDD; while on the right is the same workload on an HP DCS5800 with an Intel E8400. The workload results show a composite score covering system storage, simulated user productivity, lightweight processing, and compute intensive tasks.

    Futuremark PCMark Windows benchmark
    Futuremark PCMark

    Don’t forget about the KVM (Keyboard Video Mouse)

    Mention KVM to many people in and around the server, storage and virtualization world and they think of KVM as in the hypervisor, however to others it means Keyboard, Video and Mouse, aka the other KVM. As part of my recent and ongoing upgrades, it was also time to upgrade from the older, smaller KVM's to a larger, easier to use model. The benefit: support growth while also being easier to work with. Having done some research on various options that also varied in price, I settled on the StarTech shown below.

    Via Amazon.com StarTech 8 port KVM
    Via Amazon.com StarTech 8 Port 1U USB KVM Switch

    What's cool about the above 8 port StarTech KVM switch is that it comes with 8 cables (there are 8 ports) that on one end look like a regular VGA monitor cable connector. However, on the other end that attaches to your computer, there is the standard VGA connection that attaches to your video out, and a short USB tail cable that attaches to an available USB port for keyboard and mouse. Needless to say it helps cut down on cable clutter while coming in at around $38.00 USD per server port being managed, or about a dollar a month over a little more than three years.

    Word of caution on make and models

    Be advised that there are various makes and models of the Dell Inspiron available that differ in the processor generation and thus the feature set included. Pay attention to which make or model you are looking at as the prices can vary, hence double-check the processor make and model and then visit the Intel site to see if it is what you are expecting. For example I double-checked that the processor for the different models I looked at was the i5-3330 (view Intel specifications for that processor here).

    Summary

    Thanks to Robert Novak (aka @gallifreyan) for taking some time providing useful tips and ideas to help think outside the box for this, as well as some future enhancements to my server and StorageIO lab environment.

    Consequently while the Dell Inspiron D600-i660 was not the server that I wanted, it has turned out to be the system that I need now and hence IMHO a diamond in the rough, if you get the right make and model.

    Ok, nuff said

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2013 StorageIO and UnlimitedIO All Rights Reserved

    Server virtualization nested and tiered hypervisors

    Storage I/O trends

    Server virtualization nested and tiered hypervisors

    A few years ago I did a piece (click here) about the then emerging trend of tiered hypervisors, particularly using different products or technologies in the same environment.

    Tiered snow tools
    Tiered snow management tools and technologies

    Tiered hypervisors can be as simple as using different technologies such as VMware vSphere/ESXi, Microsoft Hyper-V, KVM or Xen in your environment on different physical machines (PMs) for various business and application purposes. This is similar to having different types or tiers of technology including servers, storage, networks or data protection to meet various needs.

    Another aspect is nesting hypervisors on top of each other for testing, development and other purposes.

    nested hypervisor

    I use nested VMware ESXi for testing various configurations as well as verifying new software when needed, or creating a larger virtual environment for functionality simulations. If you are new to nesting, which is running a hypervisor on top of another hypervisor such as ESXi on ESXi or Hyper-V on ESXi, here are a couple of links to get you up to speed. One is a VMware knowledge base piece, two are from William Lam's (@lamw) Virtual Ghetto site (getting started here and VSAN here) and the other is from Duncan Epping's (@DuncanYB) Yellow Bricks site.
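
    For those curious what enabling nesting actually involves, here is a minimal VMware PowerCLI sketch of exposing hardware virtualization (VT-x/EPT) to a guest so it can run ESXi, along the lines of the references above. The vCenter and VM names are hypothetical examples; this assumes vSphere 5.1 or later and a powered-off VM, and is a lab sketch rather than a definitive procedure.

        # Minimal sketch, assuming VMware PowerCLI and a powered-off guest that
        # will run nested ESXi; names below are examples only.
        Connect-VIServer -Server vcenter.example.local

        $vm = Get-VM -Name "nested-esxi01"

        # Expose hardware assisted virtualization (VT-x/EPT) to the guest
        New-AdvancedSetting -Entity $vm -Name "vhv.enable" -Value "TRUE" -Confirm:$false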

    Recently I did a piece over at FedTech titled 3 Tips for Maximizing Tiered Hypervisors that looks at using multiple virtualization tools for different applications and how they can give a number of benefits.

    Here is an excerpt:

    Tiered hypervisors can be run in different configurations. For example, an agency can run multiple server hypervisors on the same physical blade or server or on separate servers. Having different tiers or types of hypervisors for server and desktop virtualization is similar to using multiple kinds of servers or storage hardware to meet different needs. Lower-cost hypervisors may have lacked some functionality in the past, but developers often add powerful new capabilities, making them an excellent option.

    IT administrators who are considering the use of tiered or multiple hypervisors should know the answers to these questions:

    • How will the different hypervisors be managed?
    • Will the environment need new management tools for backup, monitoring, configuration, provisioning or other routine functions?
    • Do existing tools offer support for different hypervisors?
    • Will the hypervisors have dedicated PMs or be nested?
    • How will IT migrate virtual machines and their guests between different hypervisors? For example if using VMware and Hyper-V, will you use VMware vCenter Multi-Hypervisor Manager or something similar?

    So how about it, how are you using and managing tiered hypervisors?
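
    As one small example of living with tiered hypervisors, here is a minimal PowerShell sketch that inventories guests across both a local Hyper-V host and a VMware vCenter from a single session. It assumes both the Hyper-V module and VMware PowerCLI are installed, and uses module-qualified names to avoid the Get-VM cmdlet name collision; server names are hypothetical examples.

        # Minimal sketch, assuming the Hyper-V module and VMware PowerCLI are
        # both installed; names are examples only.
        Import-Module Hyper-V
        Import-Module VMware.VimAutomation.Core
        Connect-VIServer -Server vcenter.example.local

        # Hyper-V guests on the local host
        Hyper-V\Get-VM | Select-Object Name, State

        # VMware guests via vCenter
        VMware.VimAutomation.Core\Get-VM | Select-Object Name, PowerState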

    Ok, nuff said for now.

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved