Microsoft Azure Elastic SAN from Cloud to On-Prem

What is Azure Elastic SAN

Azure Elastic SAN (AES) is a new, now generally available (GA), cloud-native Azure storage service that provides scalable, resilient, cost-effective block storage with easy management, rapid provisioning, and high performance. AES (Figure 1) supports many workloads and compute resources. Workloads that benefit from AES include tier 1 and tier 2 applications such as mission-critical, database, and VDI, among others that traditionally rely on consolidated Storage Area Network (SAN) shared storage.

Compute resources that can use AES include bare metal (BM) physical machines (PM), virtual machines (VM), and containers, among others, all using iSCSI for access. AES is accessible by compute resources and services within the Azure cloud in various regions (check the Azure website for specific region availability) and from on-prem core and edge locations using iSCSI. The AES management experience and value proposition are similar to traditional hardware or software-defined shared SAN storage, combined with Azure cloud-based management capabilities.

Figure 1 General Concept and Use of Azure Elastic SAN (AES)

While Microsoft Azure describes AES as a cloud-native storage solution, that does not mean AES is only for containers and other cloud-native apps or DevOps. Rather, AES has been built for and is native to the cloud (i.e., software-defined) and can be accessed by various compute and other resources (e.g., VMs, containers, AKS) using iSCSI.

How Azure Elastic SAN differs from other Azure Storage

AES differs from traditional Azure block storage (e.g., Azure Disks) in that the storage is independent of the host compute server (BM, PM, VM, or container). With AES, as with a conventional software-defined or hardware-based shared SAN solution, storage is disaggregated from host servers for sharing and management, using iSCSI for connectivity. By comparison, traditional Azure VM-based storage is typically associated with a given virtual machine in a DAS (Direct Attached Storage) type configuration. Likewise, as in conventional on-prem environments, there is a mix of DAS and SAN, including some host servers that leverage both.

AES supports Azure VM, Azure Kubernetes Service (AKS), cloud-native, edge, and on-prem computing (BM, VM, etc.) via iSCSI. Support for Azure VMware Solution (AVS) is in preview; check the Microsoft Azure website for updates and new feature functionality enhancements.

Does this mean everything is moving to AES? No. As with traditional SANs, there are roles and needs for various storage options, including DAS, shared block, file, and object, among other storage offerings. Likewise, Microsoft Azure has expanded its storage offerings to include AES; DAS (Azure Disks, including Ultra, Premium, and Standard options); append, block, and page blobs (objects); files (including Azure File Sync); tables; and Data Box, among other storage services.

Azure Elastic SAN Feature Highlights

AES feature highlights include, among others:

    • Management via Azure Portal and associated tools
    • Azure cloud-based shared scalable block storage
    • Scalable capacity, low latency, and high performance (IOPs and throughput)
    • Space capacity-optimized without the need for data reduction
    • Accessible from within Azure cloud and from on-prem using iSCSI
    • Supports Azure compute  (VMs, Containers/AKS, Azure VMware Solution)
    • On-prem access via iSCSI from PM/BM, VM, and containers
    • Variable number of volumes and volume size per volume group
    • Flexible, easy-to-use Azure cloud-based management
    • Encryption and network private endpoint security
    • Local (LRS) and zone (ZRS) redundancy options for resiliency
    • Volume snapshots and cluster support

Who is Azure Elastic SAN for

AES is for those who need cost-effective, shared, resilient, high-capacity, high-performance (IOPs, bandwidth), low-latency block storage within Azure, including on-prem access. Others who can benefit from AES include those who need shared block storage for clustering app workloads, server and storage consolidation, and hybrid or migration scenarios. AES is also worth considering for those already familiar with traditional hardware or software-defined SANs as a way to facilitate hybrid and migration strategies.

How Azure Elastic SAN works

Azure Elastic SAN is a software-defined (cloud-native if you prefer) block storage offering that presents a virtual SAN accessible within the Azure cloud and to on-prem core and edge locations, currently via iSCSI. Using iSCSI, Azure VMs, clusters, containers, and Azure VMware Solution, among other compute resources and services, as well as on-prem BM/PM, VMs, and containers, can access AES storage volumes.

From the Azure Portal or associated tools (Azure CLI or PowerShell), create an AES SAN, giving it a 3 to 24-character name and specifying storage capacity (base units with performance, plus any additional space capacity). Next, create a volume group, assigning it to a specific subscription and resource group (new or existing), then specify which Azure region to use, the type of redundancy (LRS or ZRS), and the zone to use. LRS provides local redundancy, while ZRS provides enhanced zone resiliency with high-speed synchronous replication, without your having to set up multiple SAN systems along with their associated replication configurations and networking considerations (Azure takes care of that for you within the service).
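As a rough sketch of what that looks like with the Azure CLI elastic-san extension (the resource names, region, and sizes below are placeholders, and the exact parameter names are from my recollection of the extension, so verify them against the current Azure documentation):

    az extension add --name elastic-san
    # Create the Elastic SAN with 1 TiB of base (performance) capacity using local redundancy (LRS)
    az elastic-san create --resource-group myRG --name mysan01 --location eastus --base-size-tib 1 --extended-capacity-size-tib 0 --sku "{name:Premium_LRS,tier:Premium}"
    # Create a volume group within the Elastic SAN
    az elastic-san volume-group create --resource-group myRG --elastic-san-name mysan01 --name myvolgroup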

The next step is to create volumes by specifying the volume name, the volume group to use, the volume size in GB, and the maximum IOPs and bandwidth. Once you have made your AES volume group and volumes, you can create private endpoints, change security and access controls, and access the volumes from Azure or on-prem resources using iSCSI. Note that AES currently needs to be LRS (not ZRS) for clustered shared storage, and that key management includes the option of using your own keys with Azure Key Vault.
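Continuing the hedged CLI sketch above, creating a volume could look like the following (again, placeholder names and sizes; check the documentation for current parameters):

    # Create a 512 GiB volume in the volume group; its performance is drawn from the Elastic SAN's shared base capacity
    az elastic-san volume create --resource-group myRG --elastic-san-name mysan01 --volume-group-name myvolgroup --name myvolume01 --size-gib 512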

Using Azure Elastic SAN

Using AES is straightforward, and Microsoft Azure provides good, easy-to-follow deployment guides (see the Additional Resources section below).

The following images show what AES looks like from the Azure Portal, as well as from an Azure Windows Server VM and an on-prem physical machine (e.g., a Windows 10 laptop).

Figure 2 AES Azure Portal Big Picture

Figure 3 AES Volume Groups Portal View

Figure 4  AES Volumes Portal View

Figure 5 AES Volume Snapshot Views

Figure 6 AES Connected Volume Portal View

Figure 7 AES Volume iSCSI view from on-prem Windows Laptop

Figure 8 AES iSCSI Volume attached to Azure VM
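For reference, attaching an AES volume to a Linux host (in Azure or on-prem) follows the standard iSCSI workflow. Here is a minimal sketch using the stock open-iscsi tools, where the portal IP and target IQN are placeholders you get from the volume's connect details in the Azure Portal:

    # Discover the Elastic SAN target (portal IP and IQN come from the volume's iSCSI connection details)
    sudo iscsiadm -m discovery -t sendtargets -p <storage-target-portal-ip>:3260
    # Log in so the volume appears as a local block device (e.g., /dev/sdX)
    sudo iscsiadm -m node --targetname <target-iqn> -p <storage-target-portal-ip>:3260 --login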

Azure Elastic SAN Cost Pricing

The cost of AES is elastic, depending on whether you scale capacity with performance (e.g., base unit) or add more space capacity. If you need more performance, add base unit capacity, increasing IOPS, bandwidth, and space. In other words, base capacity includes storage space and performance, which you can grow in various increments. Remember that AES storage resources get shared across volumes within a volume group.

Azure Elastic SAN is billed hourly based on a monthly per-capacity base unit rate, with a minimum of 1TB provisioned capacity and minimum performance (e.g., 5,000 IOPs, 200MBps bandwidth). The base unit rate varies by region and type of redundancy, aka resiliency. For example, at the time of this writing, looking at US East, the Local Redundant Storage (LRS) base unit rate is 1TB with 5,000 IOPs and 200MBps bandwidth, costing $81.92 per unit per month.

The above example breaks down to a rate of $0.08 per GB per month, or $0.000110 per GB per hour (assuming 730 hours per month). Simply adding storage capacity without increasing the base unit (e.g., performance) for US East costs $61.44 per TB per month. That works out to $0.06 per GB per month (no additional provisioned IOPs or bandwidth) or $0.000083 per GB per hour.
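Breaking that out as a quick back-of-the-envelope check using the US East LRS figures above:

    $81.92 per 1TB base unit / 1,024 GB = $0.08 per GB per month; $0.08 / 730 hours = ~$0.000110 per GB per hour
    $61.44 per additional capacity-only TB / 1,024 GB = $0.06 per GB per month; $0.06 / 730 hours = ~$0.000083 per GB per hour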

Note that there are extra fees for Zone Redundant Storage (ZRS). Learn more via the Azure Elastic SAN pricing page and cost calculator (links in the Additional Resources section below).

Azure Elastic SAN Performance

Performance for Azure Elastic SAN includes IOPs, Bandwidth, and Latency. AES IOPs get increased in increments of 5,000 per base TB. Thus, an AES with a base of 10TB would have 50,000 IOPs distributed (shared) across all of its volumes (e.g., volumes are not restricted). For example, if the base TB is increased from 10TB to 20TB, then the IOPs would increase from 50,000 to 100,000 IOPs.

On the other hand, if only the additional storage capacity is increased from 10TB to 20TB without adding base capacity, the AES would have more space but still only 50,000 IOPs. AES bandwidth (throughput) increases by 200MBps per base TB. For example, a 5TB AES would have 5 x 200MBps (1,000 MBps) of throughput bandwidth shared across the volume group's volumes.

Note that while the performance gets shared across volumes, individual volume performance is determined by its capacity with a maximum of 80,000 IOPs and up to 1,024 MBps. Thus, to reach 80,000 IOPS and 1,024 MBps, an AES volume would have to be at least 107GB in space capacity. Also, note that the aggregate performance of all volumes cannot exceed the total of the AES. If you need more performance, then create another AES.
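Put another way, working only from the numbers above (a derived rule of thumb, not an official Azure formula):

    Base capacity: 5,000 IOPs and 200 MBps per provisioned base TB, shared across a volume group's volumes
    Per-volume ceiling: 80,000 IOPs and 1,024 MBps, reached at roughly 107 GB (80,000 IOPs / 107 GB ~= 750 IOPs per GB of volume capacity)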

Will all VMs or compute resources see performance improvements with AES? Traditional Azure Disks associated with VMs have per-disk performance resource limits, including IOPs and Bandwidth. Likewise, VMs have storage limits based on their instance type and size, including the number of disks (HDD or SSD), performance (IOPS and bandwidth), and the number of CPUs and memory.

What this means is that an AES volume could have more performance than what a given VM is limited to. Refer to your VM instance sizing and configuration to determine its IOP and bandwidth limits; if needed, explore changing the size of your VM instance to leverage the performance of Azure Elastic SAN storage.
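One way to check those VM limits is to query the VM SKU capabilities with the Azure CLI. A hedged sketch (the SKU name is a placeholder, and the capability field names in the output, such as uncached disk IOPs and throughput, are as I recall them, so confirm against what your CLI actually returns):

    # List the capabilities (vCPUs, memory, max data disks, uncached disk IOPs/throughput) for a given VM size
    az vm list-skus --location eastus --size Standard_D8s_v5 --output json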

Additional Resources and Where to Learn More

The following links are additional resources to learn about Microsoft Azure Elastic SAN and related data infrastructures and tradecraft topics.

Azure AKS Storage Concepts 
Azure Elastic SAN (AES) Documentation and Deployment Guides
Azure Elastic SAN Microsoft Blog
Azure Elastic SAN Overview
Azure Elastic SAN Performance topics
Azure Elastic SAN Pricing calculator
Azure Products by Region (see where AES is currently available)
Azure Storage Offerings 
Azure Virtual Machine (VM) sizes
Azure Virtual Machine (VM) types
Azure Elastic SAN General Pricing
Azure Storage redundancy 
Azure Service Level Agreements (SLA) 
StorageIOBlog.com Data Box Family 
StorageIOBlog.com Data Box Review
StorageIOBlog.com Data Box Test Drive 
StorageIOblog.com Microsoft Hyper-V Alive Enhanced with Win Server 2025
StorageIOblog.com If NVMe is the answer, what are the questions?
StorageIOblog.com NVMe Primer (or refresh)

Additional learning experiences along with common questions (and answers), are found in my Software Defined Data Infrastructure Essentials book.


What this all means

Azure Elastic SAN (AES) is a new and now generally available shared block storage offering that is accessible using iSCSI from within the Azure cloud and from on-prem environments. Even with iSCSI, AES is relatively easy to set up and use for shared storage, especially if you are used to or currently working with hardware or software-defined SAN storage solutions.

With NVMe over TCP fabrics gaining industry and customer traction, I am hoping Microsoft adds that in the future. Currently, AES supports LRS and ZRS for redundancy, and an excellent future enhancement would be to add Geo Redundant Storage (GRS) capabilities for those who need it.

I like the option of elastic shared storage across performance, availability, capacity, and economics (PACE). If you understand the value proposition of evolving from dedicated DAS to shared SAN (independent of the underlying fabric network), or are currently using some form of on-prem shared block storage, you will find AES familiar and easy to use. Granted, AES is not a solution for everything; there are roles for other block storage, including DAS such as Azure Disks attached to VMs within Azure, along with on-prem DAS, as well as file, object/blob, and table storage, among others.

Wrap up

The notion that all cloud storage must be objects or blobs is tied to those who only need, provide, or prefer those solutions. The reality is that everything is not the same. Thus, there is a need for various storage mediums, devices, tiers, access methods, and types of services, and Microsoft Azure has done an excellent job of providing them. I like what Microsoft Azure is doing with Azure Elastic SAN.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Nine time Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of UnlimitedIO LLC.

Part II – EMC DSSD D5 Direct Attached Shared AFA


This is the second post in a two-part series on the EMC DSSD D5 announcement; you can read part one here.

Let's take a closer look at how the EMC DSSD D5 works, its hardware and software components, how it compares to alternatives, and other considerations.

How Does DSSD D5 Work

Up to 48 Linux servers attach via dual-port PCIe Gen 3 x8 cards that are stateless. Stateless simply means the cards do not contain any flash and are not used as storage cards; rather, they are essentially just NVMe adapter cards. With the first release, block and HDFS file access, along with object and other APIs, are available for Linux systems. These drivers enable the shared NVMe storage to be accessed by applications using streamlined server and storage I/O driver software stacks that cut latency. DSSD D5 is meant to be a rack-scale solution, so distance is measured as inside a rack (e.g., a couple of meters).

The 5U tall DSSD D5 supports 48 servers via a pair of I/O Modules (IOM), each with 48 ports, that in turn attach to the data plane and on to the Flash Modules (FM). Also attached to the data plane are a pair of controllers that are active/active for performing management tasks; however, they do not sit in the data path. This means that host clients directly access the FMs without having to go through a controller, which is the case in traditional storage systems and AFAs. The controllers only get involved when there is some setup, configuration, or other management activity; otherwise they get out of the way, kind of like how management should function: there when you need them to help, then out of the way so productive work can be done.

Pardon the following hand-drawn sketches; you can see some nicer diagrams, videos, and other content via the EMC Pulse Blog as well as elsewhere.

Note that the host client servers take on the responsibility for managing and coordinating data consistency, meaning data can be shared between servers assuming applicable software is used to implement integrity. This means that clustering and other software that supports shared storage can drive low-latency, high-performance read and write activity to the DSSD D5, as opposed to relying on the underlying storage system to handle shared storage coordination as a NAS would. Another note is that the DSSD D5 is optimized for concurrent multi-threaded and asynchronous I/O operations, along with atomic writes for data integrity, enabling the multiple cores in today's faster processors to be leveraged more effectively.

The data plane is a mesh, switch, or expander-based backplane enabling any of the northbound (host client-server) 96 (2 x 48) PCIe Gen 3 x4 ports to reach the up to 36 (or as few as 18) FMs, which are also dual-pathed. Note that the host client-server PCIe dual-port cards are Gen 3 x8 while the DSSD D5 ports are Gen 3 x4. Simple math should tell you that if you are going to have 2 x PCIe Gen 3 x4 ports running at full speed, you want a Gen 3 x8 connection inside the server to get full performance.

Think of the data plane similar to how a SAS expander works in an enclosure or a SAS switch, the difference being it is PCIe and not SAS or other protocol. Note that even though the terms mesh, fabric, switch, network are used, these are NOT attached to traditional LAN, SAN, NAS or other networks. Instead, this is a private “networked back plane” between the server and storage devices (e.g. FM).

EMC DSSD D5 details

The dual controllers (i.e., the control plane) oversee flash management, including garbage collection among other tasks; storage is also thin provisioned.

The dual controllers (active/active) are connected to each other (i.e., the control plane) as well as to the data plane; however, they do not sit in the data path. Thus this is a fast-path/control-path approach, meaning the controllers get involved to perform management functions when needed and get out of the way of work when not needed. The controllers are hot-swap and provide global management functions, including setting up and tearing down host client/server I/O paths, mappings, and affinities. Controllers also support the management of CUBIC RAID data protection functions performed by the Flash Modules (FM).

Other functions the controllers implement, leveraging their CPUs and DRAM, include flash translation layer (FTL) functions normally handled by SSD cards, drives, or other devices. These FTL functions include wear-leveling for durability, garbage collection, and voltage/power management, among other tasks. The result is that the flash modules can spend more of their time and resources handling I/O operations rather than management tasks, compared to traditional off-the-shelf SSD drives, cards, or devices.

The FMs insert from the front and come in two sizes of 2TB and 4TB of raw NAND capacity. What’s different about the FMs vs. some other vendors approach is that these are not your traditional PCIe flash cards, instead they are custom cards with a proprietary ASIC and raw nand dies. DRAM is used in the FM as a buffer to hold data for write optimization as well as enhance wear-leveling to increase flash endurance.

The result is up to thousands of NAND dies spread across up to 36 FMs and, more importantly, more performance derived from those resources. The increased performance comes from DSSD implementing its own flash translation layer, garbage collection, and power/voltage management, among other techniques, to derive more useful work per watt of energy consumed.

EMC DSSD performance claims:

  • 100 microsecond latency for small IOs
  • 100GB per second of bandwidth for large IOs
  • 10 Million small IO IOPs
  • Up to 144TB raw capacity

How Does It Compare To Other AFA and SSD solutions

There will be many apples to oranges comparisons as is often the case with new technologies or at least until others arrive in the market.

Some general comparisons that may be apples to oranges as opposed to apples to apples include:

  • Shared and dense fast nand flash (eMLC) SSD storage
  • disaggregated flash SSD storage from server while enabling high performance, low latency
  • Eliminate pools or ponds of dedicated SSD storage capacity and performance
  • Not a SAN yet more than server-side flash or flash SSD JBOD
  • Underlying Flash Translation Layer (FTL) is disaggregated from SSD devices
  • Optimized hardware and software data path
  • Requires special server-side stateless adapter for accessing shared storage

Some other comparisons include:

  • Hybrid and AFA shared via some server storage I/O network (good sharing, feature rich, resilient, slower performance and higher latency due to hardware, network and server I/O software stacks). For example EMC VMAX, VNX, XtremIO among others.
  • Server attached flash SSD aka server SAN (flash SSD creates islands of technology, lower resource sharing, data shuffling between servers, limited or no data services, management complexity). For example PCIe flash SSD state full (persistent) cards where data is stored or used as a cache along with associated management tools and drivers.
  • DSSD D5 is a rack-scale hybrid approach combining direct attached shared flash with lower latency and higher performance vs. traditional AFA or hybrid storage arrays, along with better resource usage, sharing, management and performance vs. traditional dedicated server flash. It complements server-side data infrastructure and application scale-out software. Server applications can reach the NVMe storage via user space with block, HDFS, Flood and other APIs.

Using EMC DSSD D5 in possible hybrid ways

What Happened to Server PCIe cards and Server SANs

If you recall, a few years ago the industry rage was flash SSD PCIe server cards from vendors such as EMC, FusionIO (now part of SANdisk), Intel (still Intel), LSI (now part of Seagate), Micron (still Micron) and STEC (now part of Western Digital) among others. Server-side flash SSD PCIe cards are still popular, particularly newer NVMe controller-based models that use the NVMe protocol stack instead of AHCI/SATA or others.

However, as is often the case, things evolve, and while there is still a place for server-side stateful PCIe flash cards either for data or as cache, there is also the need to combine and simplify management, as well as streamline the software I/O stacks, which is where EMC DSSD D5 comes into play. It enables consolidation of server-side SSD cards into a shared 5U chassis, giving up to 48 dual-pathed servers access to the flash pools while using streamlined server software stacks and drivers that leverage NVMe over PCIe.

Where to learn more

Continue reading with the following links about NVMe, flash SSD and EMC DSSD.

  • Part one of this series here and part two here.
  • Performance Redefined! Introducing DSSD D5 Rack-Scale Flash Solution (EMC Pulse Blog)
  • EMC Unveils DSSD D5: A Quantum Leap In Flash Storage (EMC Press Release)
  • EMC Declares 2016 The “Year of All-Flash” For Primary Storage (EMC Press Release)
  • EMC DSSD D5 Rack-Scale Flash (EMC PDF Overview)
  • EMC DSSD and Cloudera Evolve Hadoop (EMC White Paper Overview)
  • Software Aspects of The EMC DSSD D5 Rack-Scale Flash Storage Platform (EMC PDF White Paper)
  • EMC DSSD D5 (EMC PDF Architecture and Product Specification)
  • EMC VFCache respinning SSD and intelligent caching (Part II)
  • EMC To Acquire DSSD, Inc., Extends Flash Storage Leadership
  • Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined
  • Learn more about flash SSD here and NVMe here at thenvmeplace.com
What this all means

    EMC with DSSD D5 now has another solution to offer clients; granted, their challenge, as it has been over the past couple of decades, will be to educate and compensate their sales force and partners on which technology solution to position for different needs.

    On one hand, life could be simpler for EMC if they only had one platform solution that would then be the answer to every problem, something that some other vendors and startups face. Likewise, if all you have is one solution, then while you can try to make that solution fit different environments, or, get the environment to adapt to the solution, having options is a good thing if those options can remove complexity along with cost while boosting productivity.

    I would like to see support for other operating systems such as Windows, particularly with the future Windows 2016 based Nano, as well as hypervisors including VMware and Hyper-V among others. On the other hand, I also would like to see a Sharp Aquos Quattron 80" 1080p 240Hz 3D TV on my wall to watch HD videos from my DJI Phantom Drone. For now, focusing on Linux makes sense; however, it would be nice to see some more platforms supported.

    Keep an eye on the NVMe space as we are seeing NVMe solutions appearing inside servers, storage system, external dedicated and shared, as well as some other emerging things including NVMe over Fabric. Learn more about EMC DSSD D5 here.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

    Supermicro CSE-M14TQC Use your media bay to add 12 Gbps SAS SSD drives to your server


    Do you have a computer server, workstation or mini-tower PC that needs to have more 2.5" form factor hard disk drive (HDD), solid state device (SSD) or hybrid flash drives added yet no expansion space?

    Do you also want or need the HDD or SSD drive expansion slots to be hot swappable, 6 Gbps SATA3 along with up to 12 Gbps SAS devices?

    Do you have an available 5.25" media bay slot (e.g. where you can add an optional CD or DVD drive) or can you remove your existing CD or DVD drive using USB for software loading?

    Do you need to carry out the above without swapping out your existing server or workstation on a reasonable budget, say around $100 USD plus tax, handling, shipping (your prices may vary)?

    If you need to implement the above, then here is a possible solution, or in my case, a real solution.

    Supermicro CSE-M14TQC with hot swap canister before installing in one of my servers

    In the past I have used a solution from Startech that supports up to 4 x 2.5" 6 Gbps SAS and SATA drives in a 5.25" media bay form factor installing these in my various HP, Dell and Lenovo servers to increase internal storage bays (slots).

    Via Amazon.com StarTech 4 x 2.5" SAS and SATA internal enclosure

    I still use the StarTech device shown (read earlier reviews and experiences here, here and here) above in some of my servers which continue to be great for 6Gbps SAS and SATA 2.5" HDDs and SSDs. However for 12 Gbps SAS devices, I have used other approaches including external 12 Gbps SAS enclosures.

    Recently while talking with the folks over at Servers Direct, I mentioned how I was using StarTech 4 x 2.5" 6Gbps SAS/SATA media bay enclosure as a means of boosting the number of internal drives that could be put into some smaller servers. The Servers Direct folks told me about the Supermicro CSE-M14TQC which after doing some research, I decided to buy one to complement the StarTech 6Gbps enclosures, as well as external 12 Gbps SAS enclosures or other internal options.

    What is the Supermicro CSE-M14TQC?

    The CSE-M14TQC is a 5.25" form factor enclosure that enables four (4) 2.5" hot swappable (if your adapter and OS supports hot swap) 12 Gbps SAS or 6 Gbps SATA devices (HDD and SSD) to fit into the media bay slot normally used by CD/DVD devices in servers or workstations. There is a single Molex male power connector on the rear of the enclosure that can be used to attach to your servers available power using applicable connector adapters. In addition there are four seperate drive connectors (e.g. SATA type connectors) that support up to 12 Gbps SAS per drive which you can attach to your servers motherboard (note SAS devices need a SAS controller), HBA or RAID adapters internal ports.

    Cooling is provided via a rear mounted 12,500 RPM 16 cubic feet per minute fan, each of the four drives are hot swappable (requires operating system or hypervisor support) contained in a small canister (provided with the enclosure). Drives easily mount to the canister via screws that are also supplied as part of the enclosure kit. There is also a drive activity and failure notification LED for the devices. If you do not have any available SAS or SATA ports on your servers motherboard, you can use an available PCIe slot and add a HBA or RAID card for attaching the CSE-M14TQC to the drives. For example, a 12 Gbps SAS (6 Gbps SATA) Avago/LSI RAID card, or a 6 Gbps SAS/SATA RAID card.

    Via Supermicro CSE-M14TQC rear details (4 x SATA and 1 Molex power connector)

    CSE-M14TQC rear view before installation

    CSE-M14TQC ready for installation with 4 x SATA (12 Gbps SAS) drive connectors and Molex power connector

    Tip: In the case of the Lenovo TS140 that I initially installed the CSE-M14TQC into, there is not a lot of space for installing the drive connectors or Molex power connector to the enclosure. Instead, attach the cables to the CSE-M14TQC as shown above before installing the enclosure into the media bay slot. Simply attach the connectors as shown and feed them through the media bay opening as you install the CSE-M14TQC enclosure. Then attach the drive connectors to your HBA, RAID card or server motherboard and the power connector to your power source inside the server.
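    Once installed and cabled, a quick way to confirm the operating system actually sees the new drives is to list the attached block devices. Here is a Linux sketch (on Windows you would check Disk Management, and on ESXi the vSphere client or esxcli), assuming the lsscsi package is installed:

    lsscsi                                  # lists SCSI/SAS/SATA devices along with the controller they hang off
    lsblk -o NAME,SIZE,MODEL,TRAN           # shows each block device with its transport (sas or sata)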

    Note and disclaimer: pay attention to your server manufacturer's power loading and specifications, along with how much power will be used by the HDDs or SSDs to be installed, to avoid electrical power or fire issues due to overloading!

    CSE-M14TQC installed into Lenovo TS140 empty media bay


    CSE-M14TQC installed with front face plate installed on Lenovo TS140

    Where to read, watch and learn more


    What this all means and wrap up

    If you have a server that simply needs some extra storage capacity by adding some 2.5" HDDs, or a performance boost from fast SSDs, yet does not have any more internal drive slots or expansion bays, leverage your media bay. This applies to smaller environments where you might have one or two servers, as well as to environments where you want or need to create a scale-out software-defined storage or hyper-converged platform using your own hardware. Another option: if you have a lab or test environment for VMware vSphere ESXi, Windows, Linux, OpenStack or other things, this can be a cost-effective approach to adding both storage space capacity and performance while leveraging newer 12Gbps SAS technologies.

    For example, create a VMware VSAN cluster using smaller servers such as Lenovo TS140 or equivalent where you can install a couple of 6TB or 8TB higher capacity 3.5" drive in the internal drive bays, then adding a couple of 12 Gbps SAS SSDs along with a couple of 2.5" 2TB (or larger) HDDs along with a RAID card, and high-speed networking card. If VMware VSAN is not your thing, how about setting up a Windows Server 2012 R2 failover cluster including Scale Out File Server (SOFS) with Hyper-V, or perhaps OpenStack or one of many other virtual storage appliances (VSA) or software defined storage, networking or other solutions. Perhaps you need to deploy more storage for a big data Hadoop based analytics system, or cloud or object storage solution? On the other hand, if you simply need to add some storage to your storage or media or gaming server or general purpose server, the CSE-M14TQC can be an option along with other external solutions.

    Ok, nuff said

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Cisco buys Whiptail continuing the SSD storage I/O flash cash cache dash


    Congratulations to Virident for being bought for $645 Million USD by Western Digital (WD). Virident, a NAND flash PCIe card startup vendor, has been around for several years and in the last year or two has gained more industry awareness as a competitor to FusionIO among others.

    There is a NAND flash solid state device (SSD) cash dash occurring, not to mention a fast cache dance, in the IT and data infrastructure (e.g., storage and I/O) sector specifically.

    Why the nand flash SSD cash dash and cache dance?

    Yesterday hard disk drive (HDD) vendor Western Digital (WD) bought Virident a nand flash PCIe Solid State Device (SSD) card vendor for $650M, and today networking and server vendor Cisco bought Whiptail a SSD based storage system startup for a little over $400M. Here is an industry trends perspective post that I did yesterday on WD and Virident.

    Obviously this begs a couple of questions, some of which I raised in my post yesterday about WD, Virident, Seagate, FusionIO and others.

    Questions include

    Does this mean Cisco is getting ready to take on EMC, NetApp, HDS and its other storage partners who leverage the Cisco UCS server?

    IMHO at least near term no more than they have in the past, nor any more than EMCs partnership with Lenovo indicates a shift in what is done with vBlocks. On the other hand, some partners or customers may be as nervous as a long-tailed cat next to a rocking chair (Google it if you don’t know what it means ;).

    Is Cisco going to continue to offer Whiptail SSD storage solutions on a standalone basis, or pull them in as part of solutions similar to what it has done on other acquisitions?

    Storage I/O trends

    IMHO this is one of the most fundamental questions, and despite the press release and statements about this being a UCS focus, a clear sign of proof for Cisco will be whether they rein in (if they go that route) Whiptail from being sold as a general storage solution (with SSD) as opposed to being part of a solution bundle.

    How will Cisco manage its relationships in a coopetition manner, cooperating with the likes of EMC in the joint VCE initiative along with FlexPod partner NetApp among others? Again, time will tell.

    Also while most of the discussions about NetApp have been around the UCS based FlexPod business, there is the other side of the discussion which is what about NetApp E Series storage including the SSD based EF540 that competes with Whiptail (among others).

    Many people may not realize how much DAS storage including fast SAS, high-capacity SAS and SATA or PCIe SSD cards Cisco sells as part of UCS solutions that are not vBlock, FlexPod or other partner systems.

    NetApp and Cisco have partnerships that go beyond the FlexPod (UCS and ONTAP based FAS) so will be interesting to see what happens in that space (if anything). This is where Cisco and their UCS acquiring Whiptail is not that different from IBM buying TMS to complement their servers (and storage) while also partnering with other suppliers, same holds true for server vendors Dell, HP, IBM and Oracle among others.

    Can Cisco articulate and convince their partners, customers, prospects and others that the Whiptail acquisition is more about direct attached storage (DAS), which includes both internal dedicated and external shared devices?

    Keep in mind that DAS does not have to mean Dumb A$$ Storage as some might have you believe.

    Then there are the more popular questions of who is going to get bought next, what will NetApp, Dell, Seagate, Huawei and a few others do?

    Oh, btw, funny how I have not seen any of the pubs mention that Whiptail CEO Dan Crain is a former Brocadian (e.g., former CTO of Brocade, a Cisco competitor), just saying.

    Congratulations to Dan and his crew and enjoy life at Cisco.

    Stay tuned as the fall 2013 nand flash SSD cache dash and cash dance activities are well underway.

    Ok, nuff said (for now).

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Putting some VMware ESX storage tips together: (Part II)

    In the first part of this post I showed how to use a tip from Duncan Epping to fake VMware into thinking that an HHDD (Hybrid Hard Disk Drive) was an SSD.

    Now let's look at using a tip from Dave Warburton to make an internal SATA HDD into an RDM for one of my Windows-based VMs.

    My challenge was that I had a VM guest that I wanted to give a Raw Device Mapping (RDM) HDD, except the device was an internal SATA device. Going only by the standard tools and some of the material available, it would have been easy to give up and quit, since the SATA device was not attached to an FC or iSCSI SAN (such as my Iomega IX4 that I bought from Amazon.com).

    Internal SATA drive being added as an RDM with the vSphere Client

    Thanks to Dave's great post that I found, I was able to create an RDM of an internal SATA drive and present it to the existing VM running Windows 7 Ultimate, and it is now happy, as am I.

    Pay close attention to make sure that you get the correct device name for the steps in Dave’s post (link is here).

    From the ESX command line, I found that the device name for the drive I wanted to use was:

    t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5

    Then I used the following ESX shell command per Dave’s tip to create an RDM of an internal SATA HDD:

    vmkfstools -z /vmfs/devices/disks/t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5 /vmfs/volumes/dat1/rdm_ST1500L.vmdk
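    If you are not sure of the device name, you can list the raw device identifiers from the ESX shell before creating the mapping, and afterwards query the new RDM to confirm where it points (a quick sketch; my dat1 datastore path and file name are just examples, and I believe -q is the vmkfstools query-RDM option, so double-check against your ESX version's help output):

    ls /vmfs/devices/disks/                              # lists raw device identifiers such as the t10.ATA_____ name above
    vmkfstools -q /vmfs/volumes/dat1/rdm_ST1500L.vmdk    # reports the RDM type and the raw device it maps to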

    Then the next steps were to update an existing VM using vSphere client to use the newly created RDM.

    Hint: pay very close attention to your device naming, along with what you name the RDM and where you put it. Also, I recommend trying or practicing on a spare or scratch device first in case something gets messed up. I practiced on an HDD used for moving files around and, after doing the steps in Dave's post, added the RDM to an existing VM, started the VM, and accessed the HDD to verify all was fine (it was). After shutting down the VM, I removed the RDM from it as well as from ESX, and then created the real RDM.

    As per Dave's tip, the vSphere Client did not recognize the RDM per se; however, after telling it to look at existing virtual disks and browsing the datastores, lo and behold, the RDM I was looking for was there. The following shows an example of using vSphere to add the new RDM to one of my existing VMs.

    In case you are wondering why I wanted to make a non-SAN HDD an RDM vs. doing something else: simple, the HDD in question is a 1.5TB HDD that has backups on it that I want to use as is. The HDD is also BitLocker protected, and I want the flexibility to remove the device if I have to and access it from a non-VM-based Windows system.


    Image of my VMware server with internal RDM and other items

    Could I have accomplished the same thing using a USB-attached device accessible to the VM?

    Yes, and in fact that is how I do periodic updates to removable media (HDD using Seagate Goflex drives) where I am not as concerned about performance.

    While I back up off-site to Rackspace and AWS clouds, I also have a local disk based backup, along with creating periodic full Gold or master off-site copies. The off-site copies are made to removable Seagate Goflex SATA drives using a USB to SATA Goflex cable. I also have the Goflex eSATA to SATA cable that comes in handy to quickly attach a SATA device to anything with an eSATA port including my Lenovo X1.

    As a precaution, I used a different HDD that contained data I was not concerned about if something went wrong to test to the process before doing it with the drive containing backup data. Also as a precaution, the data on the backup drive is also backed up to removable media and to my cloud provider.

    Thanks again to both Dave and Duncan for their great tips; I hope that you find these and other material on their sites as useful as I do.

    Meanwhile, time to get some other things done, as well as continue looking for and finding good workarounds and tricks to use in my various projects; drop me a note if you see something interesting.

    Ok, nuff said for now.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    How can direct attached storage (DAS) make a comeback if it never left?


    Have you seen or heard the theme that Direct Attached Storage (DAS), either dedicated or shared, internal or external is making a comeback?

    Wait, if something did not go away, how can it make a comeback?

    IMHO it is as simple as this: for the past decade or so, DAS has been overshadowed by shared networked storage, including switched SAS, iSCSI, Fibre Channel (FC) and FC over Ethernet (FCoE) based block storage area networks (SAN) and file-based (NFS and Windows SMB/CIFS) network attached storage (NAS) using IP and Ethernet networks. This has been particularly true of most of the independent storage vendors, who have become focused on networked storage (SAN or NAS) solutions.

    However some of the server vendors have also jumped into the deep end of the storage pool with their enthusiasm for networked storage, even though they still sell a lot of DAS including internal dedicated, along with external dedicated and shared storage.


    The trend for DAS storage has evolved with the interfaces and storage mediums including from parallel SCSI and IDE to SATA and more recently 3Gbs and 6Gbs SAS (with 12Gbs in first lab trials). Similarly the storage mediums include a mix of fast 10K and 15K hard disk drives (HDD) along with high-capacity HDDs and ultra-high performance solid state devices (SSD) moving from 3.5 to 2.5 inch form factors.

    While there has been a lot of industry and vendor marketing effort around networked storage (e.g., SAN and NAS), DAS-based storage was overshadowed, so it should not be a surprise that those focused on SAN and NAS are surprised to hear DAS is alive and well. Not only is DAS alive and well, it is also becoming an important scaling and convergence topic for adding extra storage to appliances as well as servers, including those for scale-out, big data, cloud and high-density, not to mention high-performance and high-productivity computing.


    Consequently it is becoming ok to talk about DAS again. Granted, you might get some peer pressure from your trend-setting or trend-following friends to get back on the networked storage bandwagon. Keep this in mind: take a look at some of the cool trend-setting big data and little data (database) appliances, backup, dedupe and archive appliances, cloud and scale-out NAS and object storage systems, among others, and you will likely find DAS on the back-end. On a smaller scale, or in high-density rack deployments in large cloud or similar environments, you may also find DAS, including switched shared SAS.

    Does that mean SANs are dead?
    No, not IMHO, despite what some vendors' marketers and their followers will claim, which is ironic given how some of them were leading the "DAS is dead" campaign in favor of iSCSI or FC or NAS a few years ago. However, simply comparing DAS to SAN or NAS in a competing way is like comparing apples to oranges; instead, look at how and where they can complement and enable each other. In other words, different tools for various tasks, various storage and interfaces for different needs.

    Thus IMHO DAS never left or went anywhere per se; it just was not fashionable or cool to talk about until now, when it is cool and trendy to discuss it again.

    Ok, nuff said for now.

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    Getting SASy, the other shared storage option for disk and SSD systems

    Here is a link to a recent guest post that I was invited to do over at The Virtualization Practice (TVP) pertaining to Getting SASsy, the other shared server to storage interconnect for disk and SSD systems. Serial Attached SCSI (SAS) is better known as an interface for connecting hard disk drives (HDD) to servers and storage systems; however it is also widely used for attaching storage systems to physical as well as virtual servers. An important storage requirement for virtual machine (VM) environments with more than one physical machine (PM) server is shared storage. SAS has become a viable interconnect along with other Storage Area Network (SAN) interfaces including Fibre Channel (FC), Fibre Channel over Ethernet (FCoE) and iSCSI for block access.

    Read more here.

    Ok, nuff said for now.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2011 StorageIO and UnlimitedIO All Rights Reserved

    Industry Trends and Perspectives: 6GB SAS and DAS are not Dumb A$$ Storage


    This is part of an ongoing series of short industry trends and perspectives blog posts briefs.

    These short posts complement other longer posts along with traditional industry trends and perspective white papers, research reports, and solution brief content found at www.storageio.com/reports.

    With 6Gbps SAS increasing performance as well as connectivity flexibility, more servers are supporting SAS natively, while storage systems continue to add support for 3.5" and 2.5" small form factor high-performance and large-capacity SAS drives. Shared SAS DAS storage systems are being deployed for consolidation, attached to two or more servers, as well as for clustered solutions.

    Another area where shared SAS DAS storage is being deployed is in cloud, scale out NAS and bulk storage environments as a price performance alternative to iSCSI or Fibre Channel solutions.

    Keep an eye on these and other trends including converged systems, server, storage and networking management along with associated tools.

    Related and companion material:
    Article: Green and SASy = Energy and Economic, Effective Storage
    Article: The Many Faces of SAS – Beyond the DAS Factor

    That is all for now, hope you find this ongoing series of current and emerging Industry Trends and Perspectives interesting.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved