Microsoft Azure Elastic SAN from Cloud to On-Prem

What is Azure Elastic SAN

Azure Elastic SAN (AES) is a new, now generally available (GA), cloud-native Azure storage service that provides scalable, resilient, cost-effective storage with easy management, rapid provisioning, and high performance. AES (figure 1) supports many workloads and compute resources. Workloads that benefit from AES include tier 1 and tier 2 applications, such as mission-critical, database, and VDI workloads, among others that traditionally rely on consolidated Storage Area Network (SAN) shared storage.

Compute resources that can use AES include bare metal (BM) physical machines (PM), virtual machines (VM), and containers, among others, all using iSCSI for access. AES is accessible by compute resources and services within the Azure cloud in various regions (check the Azure website for specific region availability) and from on-prem core and edge locations using iSCSI. The AES management experience and value proposition are similar to traditional hardware or software-defined shared SAN storage, combined with Azure cloud-based management capabilities.

Figure 1 General Concept and Use of Azure Elastic SAN (AES)

While Microsoft Azure describes AES as a cloud-native storage solution, that does not mean AES is only for containers and other cloud-native apps or DevOps. Rather, AES has been built for and is native to the cloud (e.g., software-defined), and it can be accessed by various compute and other resources (e.g., VMs, containers, AKS) using iSCSI.

How Azure Elastic SAN differs from other Azure Storage

AES differs from traditional Azure block storage (e.g., Azure Disks) in that the storage is independent of the host compute server (e.g., BM, PM, VM, containers). With AES, similar to a conventional software-defined or hardware-based shared SAN solution, storage is disaggregated from host servers for sharing and management using iSCSI for connectivity. By comparison, AES differs from traditional Azure VM-based storage typically associated with a given virtual machine in a DAS (Direct Attached Storage) type configuration. Likewise, similar to conventional on-prem environments, there is a mix of DAS and SAN, including some host servers that leverage both.

AES supports Azure VM, Azure Kubernetes Service (AKS), cloud-native, edge, and on-prem computing (BM, VM, etc.) via iSCSI. Support for Azure VMware Solution (AVS) is in preview; check the Microsoft Azure website for updates and new feature functionality enhancements.

Does this mean everything is moving to AES? Similar to traditional SANs, there are roles and needs for various storage options, including DAS, shared block, file, and object storage, among other offerings. Likewise, Microsoft Azure has expanded its storage offerings to include AES; DAS (Azure Disks, including Ultra, Premium, and Standard options); append, block, and page blobs (objects); files, including Azure File Sync; tables; and Data Box, among other storage services.

Azure Elastic SAN Feature Highlights

AES feature highlights include, among others:

    • Management via Azure Portal and associated tools
    • Azure cloud-based shared scalable block storage
    • Scalable capacity, low latency, and high performance (IOPS and throughput)
    • Space capacity-optimized without the need for data reduction
    • Accessible from within the Azure cloud and from on-prem using iSCSI
    • Supports Azure compute (VMs, containers/AKS, Azure VMware Solution)
    • On-prem access via iSCSI from PM/BM, VM, and containers
    • Variable number of volumes and volume size per volume group
    • Flexible, easy-to-use Azure cloud-based management
    • Encryption and private endpoint network security
    • Local (LRS) and zone (ZRS) redundant storage resiliency
    • Volume snapshots and cluster support

Who is Azure Elastic SAN for

AES is for those who need cost-effective, shared, resilient, high-capacity, high-performance (IOPS, bandwidth), low-latency block storage within Azure and from on-prem. Others who can benefit from AES include those who need shared block storage for clustered app workloads, server and storage consolidation, and hybrid cloud and migration scenarios. It is also worth considering for those familiar with traditional hardware or software-defined SANs as part of a hybrid or migration strategy.

How Azure Elastic SAN works

Azure Elastic SAN is a software-defined (cloud-native if you prefer) block storage offering that presents a virtual SAN accessible within the Azure cloud and to on-prem core and edge locations, currently via iSCSI. Using iSCSI, Azure VMs, clusters, containers, and Azure VMware Solution, among other compute resources and services, as well as on-prem BM/PM, VM, and containers, can access AES storage volumes.

From the Azure Portal or associated tools (Azure CLI or PowerShell), create an AES SAN, giving it a 3 to 24-character name, and specify storage capacity (base units with performance, plus any additional space capacity). Next, create a volume group, assigning it to a specific subscription and resource group (new or existing), then specify which Azure region to use, the type of redundancy (LRS or ZRS), and the zone to use. LRS provides local redundancy, while ZRS provides enhanced zone resiliency with high-speed synchronous replication, without setting up multiple SAN systems and their associated replication and networking configurations (e.g., Azure takes care of that for you within the service).

The next step is to create volumes by specifying the volume name, the volume group to use, volume size in GB, maximum IOPS, and bandwidth. Once you have created your AES volume group and volumes, you can create private endpoints, change security and access controls, and access the volumes from Azure or on-prem resources using iSCSI. Note that AES currently needs to be LRS (not ZRS) for clustered shared storage, and that key management includes using your own keys with Azure Key Vault.
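The provisioning constraints described above (a 3 to 24-character name, LRS or ZRS redundancy, and LRS required for clustered shared storage) can be sketched as a small pre-flight check. This is a hypothetical helper for illustration only, not part of the Azure SDK or CLI:

```python
# Hypothetical pre-flight check for an Azure Elastic SAN plan, based on
# the constraints described in the text (sketch, not the Azure SDK/CLI).

def validate_aes_plan(san_name: str, redundancy: str,
                      clustered_shared: bool) -> list[str]:
    """Return a list of problems with a proposed Elastic SAN configuration."""
    problems = []
    # The SAN name must be 3 to 24 characters long.
    if not (3 <= len(san_name) <= 24):
        problems.append("SAN name must be 3 to 24 characters")
    # Redundancy is either locally redundant (LRS) or zone redundant (ZRS).
    if redundancy not in ("LRS", "ZRS"):
        problems.append("redundancy must be LRS or ZRS")
    # Clustered shared storage currently requires LRS (not ZRS).
    if clustered_shared and redundancy != "LRS":
        problems.append("clustered shared storage currently requires LRS")
    return problems
```

An empty list means the plan passes these checks; the real provisioning is still done via the Azure Portal, CLI, or PowerShell.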

Using Azure Elastic SAN

Using AES is straightforward, and there are good, easy-to-follow guides from Microsoft Azure (see the additional resources below).

The following images show what AES looks like from the Azure Portal, as well as from an Azure Windows Server VM and an on-prem physical machine (e.g., a Windows 10 laptop).

Figure 2 AES Azure Portal Big Picture

Figure 3 AES Volume Groups Portal View

Figure 4  AES Volumes Portal View

Figure 5 AES Volume Snapshot Views

Figure 6 AES Connected Volume Portal View

Figure 7 AES Volume iSCSI view from on-prem Windows Laptop

Figure 8 AES iSCSI Volume attached to Azure VM

Azure Elastic SAN Cost Pricing

The cost of AES is elastic, depending on whether you scale capacity with performance (e.g., base unit) or add more space capacity. If you need more performance, add base unit capacity, increasing IOPS, bandwidth, and space. In other words, base capacity includes storage space and performance, which you can grow in various increments. Remember that AES storage resources get shared across volumes within a volume group.

Azure Elastic SAN is billed hourly based on a monthly per-capacity base unit rate, with a minimum of 1TB provisioned capacity and minimum performance (e.g., 5,000 IOPS, 200MBps bandwidth). The base unit rate varies by region and type of redundancy (aka resiliency). For example, at the time of this writing, looking at US East, the Locally Redundant Storage (LRS) base unit rate is 1TB with 5,000 IOPS and 200MBps bandwidth, costing $81.92 per unit per month.

The above example breaks down to a rate of $0.08 per GB per month, or $0.000110 per GB per hour (assuming 730 hours per month). An example of simply adding storage capacity without increasing the base unit (e.g., performance) for US East is $61.44 per TB per month. That works out to $0.06 per GB per month (no additional provisioned IOPS or bandwidth), or about $0.000082 per GB per hour.
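The per-GB arithmetic above can be reproduced in a few lines; the dollar figures are the US East examples cited in the text and will vary by region and over time:

```python
# Reproduce the US East example rates cited above (illustrative only;
# check the Azure pricing page and calculator for current numbers).
GB_PER_TB = 1024          # base units are billed per 1024 GB
HOURS_PER_MONTH = 730     # Azure's hourly-billing convention

def per_gb_rates(monthly_per_tb: float) -> tuple[float, float]:
    """Return (per GB per month, per GB per hour) for a per-TB monthly rate."""
    per_gb_month = monthly_per_tb / GB_PER_TB
    return per_gb_month, per_gb_month / HOURS_PER_MONTH

# LRS base unit (capacity with performance) vs. capacity-only add-on.
base_month, base_hour = per_gb_rates(81.92)
cap_month, cap_hour = per_gb_rates(61.44)
```

Running this yields $0.08/GB/month (about $0.000110/GB/hour) for the base unit and $0.06/GB/month for capacity-only, matching the figures above.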

Note that there are extra fees for Zone Redundant Storage (ZRS). Learn more about Azure Elastic SAN pricing here, as well as via a cost calculator here.

Azure Elastic SAN Performance

Performance for Azure Elastic SAN includes IOPS, bandwidth, and latency. AES IOPS increase in increments of 5,000 per base TB. Thus, an AES with a base of 10TB would have 50,000 IOPS distributed (shared) across all of its volumes (e.g., individual volumes are not artificially restricted). For example, if the base capacity is increased from 10TB to 20TB, then the IOPS would increase from 50,000 to 100,000.

On the other hand, if the base capacity (10TB) is not increased and only the storage capacity grows from 10TB to 20TB, the AES would have more capacity but still only the 50,000 IOPS. AES bandwidth (throughput) increases by 200MBps per base TB. For example, a 5TB AES would have 5 x 200MBps (1,000 MBps) of throughput shared across the volume group's volumes.

Note that while performance gets shared across volumes, individual volume performance is determined by its capacity, with a maximum of 80,000 IOPS and up to 1,024 MBps per volume. Thus, to reach 80,000 IOPS and 1,024 MBps, an AES volume would have to be at least 107GB in space capacity. Also note that the aggregate performance of all volumes cannot exceed the total of the AES. If you need more performance, create another AES.
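The scaling rules above can be sketched as follows. The 5,000 IOPS and 200MBps per base TB figures come from the text; the 750 IOPS-per-GB volume figure is an inference from the "at least 107GB for 80,000 IOPS" example and should be verified against current Azure documentation:

```python
import math

# Appliance-level provisioned performance scales with base capacity.
IOPS_PER_BASE_TB = 5_000
MBPS_PER_BASE_TB = 200

# Per-volume ceilings cited above; the IOPS-per-GB rate is an assumption
# inferred from the ~107GB example (verify against Azure docs).
VOLUME_MAX_IOPS = 80_000
VOLUME_MAX_MBPS = 1_024
ASSUMED_IOPS_PER_GB = 750

def san_performance(base_tb: int) -> tuple[int, int]:
    """Total (IOPS, MBps) shared across all volumes for a given base capacity."""
    return base_tb * IOPS_PER_BASE_TB, base_tb * MBPS_PER_BASE_TB

def min_gb_for_max_iops() -> int:
    """Smallest volume size (GB) that can reach the per-volume IOPS ceiling."""
    return math.ceil(VOLUME_MAX_IOPS / ASSUMED_IOPS_PER_GB)
```

For example, `san_performance(10)` gives 50,000 IOPS and 2,000 MBps shared across the volume group, and `min_gb_for_max_iops()` recovers the 107GB figure from the text.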

Will all VMs or compute resources see performance improvements with AES? Traditional Azure Disks associated with VMs have per-disk performance resource limits, including IOPs and Bandwidth. Likewise, VMs have storage limits based on their instance type and size, including the number of disks (HDD or SSD), performance (IOPS and bandwidth), and the number of CPUs and memory.

What this means is that an AES volume could have more performance than a given VM is allowed to use. Refer to your VM instance sizing and configuration to determine its IOPS and bandwidth limits; if needed, explore changing your VM instance size to leverage the performance of Azure Elastic SAN storage.
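In other words, effective performance is bounded by whichever limit is smaller, the VM's or the volume's. A minimal sketch (the VM limit numbers in the example are made up for illustration; check your instance type's documented limits):

```python
def effective_limits(vm_iops: int, vm_mbps: int,
                     vol_iops: int, vol_mbps: int) -> tuple[int, int]:
    """Achievable (IOPS, MBps) is capped by whichever side is smaller."""
    return min(vm_iops, vol_iops), min(vm_mbps, vol_mbps)

# Hypothetical example: a VM capped at 12,800 IOPS / 192 MBps attached to
# an AES volume provisioned at 80,000 IOPS / 1,024 MBps; the VM, not the
# volume, is the bottleneck, so a larger instance size would be needed.
iops, mbps = effective_limits(12_800, 192, 80_000, 1_024)
```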

Additional Resources and Where to Learn More

The following links are additional resources to learn about Microsoft Azure Elastic SAN and related data infrastructures and tradecraft topics.

Azure AKS Storage Concepts 
Azure Elastic SAN (AES) Documentation and Deployment Guides
Azure Elastic SAN Microsoft Blog
Azure Elastic SAN Overview
Azure Elastic SAN Performance topics
Azure Elastic SAN Pricing calculator
Azure Products by Region (see where AES is currently available)
Azure Storage Offerings 
Azure Virtual Machine (VM) sizes
Azure Virtual Machine (VM) types
Azure Elastic SAN General Pricing
Azure Storage redundancy 
Azure Service Level Agreements (SLA) 
StorageIOBlog.com Data Box Family 
StorageIOBlog.com Data Box Review
StorageIOBlog.com Data Box Test Drive 
StorageIOblog.com Microsoft Hyper-V Alive Enhanced with Win Server 2025
StorageIOblog.com If NVMe is the answer, what are the questions?
StorageIOblog.com NVMe Primer (or refresh)

Additional learning experiences, along with common questions (and answers), are found in my Software Defined Data Infrastructure Essentials book.


What this all means

Azure Elastic SAN (AES) is a new and now generally available shared block storage offering that is accessible using iSCSI from within the Azure cloud and from on-prem environments. Even with iSCSI, AES is relatively easy to set up and use for shared storage, particularly if you are used to or currently working with hardware or software-defined SAN storage solutions.

With NVMe over TCP fabrics gaining industry and customer traction, I'm hoping Microsoft adds that in the future. Currently, AES supports LRS and ZRS for redundancy; an excellent future enhancement would be adding Geo Redundant Storage (GRS) capabilities for those who need it.

I like the option of elastic shared storage regarding performance, availability, capacity, and economics (PACE). If you understand the value proposition of evolving from dedicated DAS to shared SAN (independent of the underlying fabric network), or are currently using some form of on-prem shared block storage, you will find AES familiar and easy to use. Granted, AES is not a solution for everything, as there are roles for other block storage, including DAS (such as Azure Disks within Azure, along with on-prem DAS), as well as file, object/blob, and table storage, among others.

Wrap up

The notion that all cloud storage must be objects or blobs is tied to those who only need, provide, or prefer those solutions. The reality is that everything is not the same. Thus, there is a need for various storage mediums, devices, tiers, access methods, and types of services, and Microsoft Azure has done an excellent job of providing them. I like what Microsoft Azure is doing with Azure Elastic SAN.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Nine time Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of UnlimitedIO LLC.

Who Will Be At Top Of Storage World Next Decade?


Data storage, regardless of whether hardware, legacy, new, emerging, cloud service, or various software-defined storage (SDS) approaches, is a fundamental resource component of data infrastructures, along with compute servers, I/O networking, and management tools, techniques, processes, and procedures.

Fundamental Data Infrastructure resources

Data infrastructures include legacy environments along with software-defined data infrastructures (SDDI), software-defined data centers (SDDC), cloud, and other environments that support expanding workloads more efficiently as well as effectively (e.g., boosting productivity).

Data Infrastructure and other IT Layers (stacks and altitude levels)

Various data infrastructure resource components spanning server, storage, I/O networking, and tools, along with hardware, software, and services, get defined and composed into solutions or services, which may in turn be further aggregated into more extensive, higher-altitude offerings (e.g., further up the stack).

Various IT and Data Infrastructure Stack Layers (Altitude Levels)

Focus on Data Storage Present and Future Predictions

Drew Robb (@Robbdrew) has a good piece over at Enterprise Storage Forum looking at the past, present, and future of who will rule the data storage world, which includes perspectives and prediction comments from myself as well as others. Some of the perspectives and predictions by others are more generic, focused on technology trends and buzzword bingo, which should not be a surprise: for example, the usual performance, cloud and object storage, DPDK, RDMA/RoCE, software-defined, NVM/flash/SSD, CI/HCI, and NVMe, among others.

Here are some excerpts from Drew's piece, along with my perspective and prediction comments on who may rule the data storage roost in a decade:

Amazon Web Services (AWS) – AWS includes cloud and object storage in the form of S3. However, there is more to storage than object and S3, with AWS also having Elastic File System (EFS), Elastic Block Storage (EBS), database, message queue, and on-instance storage, among others, for traditional and emerging workloads as well as storage for the Internet of Things (IoT).

It is difficult to think of AWS not being a major player in a decade unless they totally screw up their execution in the future. Granted, some of their competitors might be working overtime putting pins and needles into Voodoo Dolls (perhaps bought via Amazon.com) while wishing for the demise of Amazon Web Services, just saying.

Voodoo Dolls and image via Amazon.com

Of course, Amazon and AWS could follow the likes of Sears (e.g., some may remember their catalog) and ignore the future, ending up on the where-are-they-now list. While talking about Amazon and AWS, one has to wonder where Walmart will end up in a decade, with or without a cloud of its own?

Microsoft – With Windows, Hyper-V and Azure (including Azure Stack), if there is any company in the industry outside of AWS or VMware that has quietly expanded its reach and positioning into storage, it is Microsoft, said Schulz.

Microsoft IMHO has many offerings and capabilities across different dimensions as well as playing fields. There is the installed base of Windows Servers (and desktops) that have the ability to leverage Software Defined Storage including Storage Spaces Direct (S2D), ReFS, cache and tiering among other features. In some ways I’m surprised by the number of people in the industry who are not aware of Microsoft’s capabilities from S2D and the ability to configure CI as well as HCI (Hyper Converged Infrastructure) deployments, or of Hyper-V abilities, Azure Stack to Azure among others. On the other hand, I run into Microsoft people who are not aware of the full portfolio offerings or are just focused on Azure. Needless to say, there is a lot in the Microsoft storage related portfolio as well as bigger broader data infrastructure offerings.

NetApp – Schulz thinks NetApp has the staying power to stay among the leading lights of data storage. Assuming it remains as a freestanding company and does not get acquired, he said, NetApp has the potential of expanding its portfolio with some new acquisitions. “NetApp can continue their transformation from a company with a strong focus on selling one or two products to learning how to sell the complete portfolio with diversity,” said Schulz.

NetApp has been around and survived up to now including via various acquisitions, some of which have had mixed results vs. others. However assuming NetApp can continue to reinvent themselves, focusing on selling the entire solution portfolio vs. focus on specific products, along with good execution and some more acquisitions, they have the potential for being a top player through the next decade.

Dell EMC – Dell EMC is another stalwart Schulz thinks will manage to stay on top. “Given their size and focus, Dell EMC should continue to grow, assuming execution goes well,” he said.

There are some who I hear are or have predicted the demise of Dell EMC, granted some of those predicted the demise of Dell and or EMC years ago as well. Top companies can and have faded away over time, and while it is possible Dell EMC could be added to the where are they now list in the future, my bet is that at least while Michael Dell is still involved, they will be a top player through the next decade, unless they mess up on execution.

Various Data Infrastructures and Resources involving Data Storage

Huawei – Huawei is one of the emerging giants from China that are steadily gobbling up market share. It is now a top provider in many categories of storage, and its rapid ascendancy is unlikely to stop anytime soon. “Keep an eye on Huawei, particularly outside of the U.S. where they are starting to hit their stride,” said Schulz.

In the US, you have to look or pay attention to see or hear what Huawei is doing involving data storage, however that is different in other parts of the world. For example, I see and hear more about them in Europe than in the US. Will Huawei do more in the US in the future? Good question, keep an eye on them.

VMware – A decade ago, Storage Networking World (SNW) was by far the biggest event in data storage. Everyone who was anyone attended this twice yearly event. And then suddenly, it lost its luster. A new forum known as VMworld had emerged and took precedence. That was just one of the indicators of the disruption caused by VMware. And Schulz expects the company to continue to be a major force in storage. “VMware will remain a dominant player, expanding its role with software-defined storage,” said Schulz.

VMware has a dominant role in data storage not just because of the relationship with Dell EMC, or because of vSAN, which continues to gain in popularity, or the soon-to-be-released VMware on AWS solution options, among others. Sure, all of those matter; however, keep in mind that VMware solutions also tie into and work with other legacy as well as software-defined storage solutions, services, and tools spanning block, file, and object for virtual machines as well as containers.

"Someday soon, people are going to wake up like they did with VMware and AWS," said Schulz. "That’s when they will be asking ‘When did Microsoft get into storage like this in such a big way.’"

What the above means is that some environments may not be paying attention to what AWS, Microsoft, and VMware, among others, are doing, perhaps discounting them as the old or existing while focusing on whatever new or emerging thing is trendy in the news this week. On the other hand, some environments may see the solution offerings from those mentioned as not relevant to their specific needs, or not capable of scaling to their requirements.

Keep in mind that it was not that long ago, just a few years, that VMware entered the market with what by today's standards (e.g., vSAN and others) was a relatively small virtual storage appliance offering, not to mention that many people discounted and ignored VMware as a practical storage solution provider. Things and technology change, not to mention there are different needs and solution requirements for various environments. While a solution may not be applicable today, give it some time and keep an eye on it to avoid being surprised, asking how and when a particular vendor got into storage in such a big way.

Is Future Data Storage World All Cloud?

Perhaps someday everything involving data storage will be in or part of the cloud.

Does this mean everything is going to the cloud, or at least in the next ten years? IMHO the simple answer is no, even though I see more workloads, applications, and data residing in the cloud, there will also be an increase in hybrid deployments.

Note that those hybrids will span local and on-premises (or on-site if you prefer) environments, as well as different clouds or service providers. Granted, some environments are or will become all-in on clouds, while others are or will become hybrid or some variation. Also, when it comes to clouds, do not be scared; be prepared. Keep an eye on what is going on with containers, orchestration, and management, among other related areas involving persistent storage; a good example is Dell EMCcode RexRay, among others.

Various data storage focus areas along with data infrastructures.

What About Other Vendors, Solutions or Services?

In addition to those mentioned above, there are plenty of other existing, new and emerging vendors, solutions, and services to keep an eye on, look into, test and conduct a proof of concept (PoC) trial as part of being an informed data infrastructure and data storage shopper (or seller).

Keep in mind the component suppliers, some of whom, like Cisco, also provide turnkey solutions that are part of other vendors' offerings (e.g., Dell EMC VxBlock, NetApp FlexPod, among others): Broadcom (which includes Avago/LSI and Brocade Fibre Channel, among others), Intel (servers, I/O adapters, memory, and SSDs), Mellanox, Micron, Samsung, Seagate, and many others.

Others to keep an eye on include E8, Excelero, Elastifile (software-defined storage), Enmotus (micro-tiering; read the Server StorageIOlab report here), Everspin (persistent and storage-class memories, including NVDIMM), Hedvig (software-defined storage), NooBaa, Nutanix, Pivot3, Rozo (software-defined storage), and WekaIO (scale-out elastic software-defined storage; read the Server StorageIO report here).

Some other software-defined management tools, services, solutions, and components I'm keeping an eye on and exploring or digging deeper into (or plan to) include Blue Medora, Datadog, Dell EMCcode and RexRay Docker container storage volume management, Google, HPE, IBM Bluemix Cloud (aka IBM Softlayer), Kubernetes, Mangstor, OpenStack, Oracle, Retrospect, Rubrik, Quest, Starwind, Solarwinds, Storpool, Turbonomic, and Virtuozzo (software-defined storage), among many others.

What about those not mentioned? Good question. Some of those I have mentioned in earlier Server StorageIO Update newsletters, and many others are mentioned in my new book "Software Defined Data Infrastructure Essentials" (CRC Press). Then there are those that, once I hear something interesting from them on a regular basis, will get more frequent mentions as well. Of course, there is also a list to be done someday that is basically where-are-they-now, e.g., those that have disappeared or never lived up to their full hype and marketing (or technology) promises; let's leave that for another day.

Additional learning experiences, along with common questions (and answers) as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

Where To Learn More

Learn more about related technology, trends, tools, techniques, and tips with the following links.

Data Infrastructures Resources (Servers, Storage, I/O Networks) enabling various services


What This All Means

It is safe to say that each new year brings new trends, techniques, technologies, tools, features, functionality, and solutions involving data storage as well as data infrastructures. This means a usual safe bet is to say that the current year is the most exciting and has more new things than the past when it comes to data infrastructures, along with resources such as data storage. Keep in mind that there are many aspects to data infrastructures as well as storage, all of which are evolving. Who will be at the top of the storage world next decade? What say you?

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book "Software-Defined Data Infrastructure Essentials" (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

Cisco Next Gen 32Gb Fibre Channel NVMe SAN Updates


Cisco announced today next generation MDS storage area networking (SAN) Fibre Channel (FC) switches with 32Gb, along with NVMe over FC support.

Cisco Fibre Channel (FC) Directors (Left) and Switches (Right)

Highlights of the Cisco announcement include:

  • MDS 9700 48 port 32Gbps FC switching module
  • High density 768 port 32Gbps FC directors
  • NVMe over FC for attaching fast flash SSD devices (current MDS 9700, 9396S, 9250i and 9148S)
  • Integrated analytics engine for management insight awareness
  • Multiple concurrent protocols including NVMe, SCSI (e.g. SCSI_FCP aka FCP) and FCoE

Where to Learn More

The following are additional resources to learn more.

What this all means, wrap up and summary

Fibre Channel remains relevant for many environments, and it makes sense that Cisco, known for Ethernet and IP networking, enhances its offerings. Having 32Gb Fibre Channel along with NVMe over Fabric enables existing (and new) Cisco customers to support their legacy (e.g., FC) and emerging (NVMe) workloads as well as devices. For those environments that still need some mix of Fibre Channel as well as NVMe over Fabric, this is a good announcement. Keep an eye and ear open for which storage vendors jump on the NVMe over Fabric bandwagon now that Cisco, as well as Brocade, have made switch support announcements.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Watch for the spring 2017 release of his new book “Software-Defined Data Infrastructure Essentials” (CRC Press).

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

CompTIA needs input for their Storage+ certification, can you help?


The CompTIA folks are looking for some comments and feedback from those who are involved with data storage in various ways as part of planning for their upcoming enhancements to the Storage+ certification testing.

As a point of disclosure, I am a member of the CompTIA Storage+ certification advisory committee (CAC); however, I don't get paid or receive any other type of remuneration for contributing my time to give them feedback and guidance, other than a thanks and attaboy for giving back and paying it forward to help others in the IT community, similar to what my predecessors did.

I have been asked to pass this along to others (e.g. you or who ever forwards it on to you).

Please take a few moments and feel free to share with others this link here to the survey for CompTIA Storage+.

What they are looking for is to validate the exam blueprint generated from a recent Job Task Analysis (JTA) process.

In other words, does the certification exam show real-world relevance to what you and your associates may be doing with data storage?

This is as opposed to being aligned with those whose job it is to create test questions and who may not understand what it is you, the IT pro involved with storage, do or do not do.

If you have ever taken a certification exam and scratched your head, wondering why some questions that seem to lack real-world relevance were included while ones reflecting practical on-the-job experience were missing, here's your chance to give feedback.

Note that you will not be rewarded with an Amex or Amazon gift card, Starbucks or Dunkin' Donuts certificates, a free software download, or some other incentive to play and win; however, if you take the survey, let me know and I will be sure to tweet you an attaboy or attagirl! That said, they are giving away a free T-shirt to every tenth survey taker.

Btw, if you really need something for free, send me a note (I'm not that difficult to find) as I have some free copies of Resilient Storage Networks (RSN): Designing Flexible Scalable Data Infrastructures (Elsevier); you simply pay shipping and handling. RSN can be used to help prepare you for various storage testing as well as other day-to-day activities.

CompTIA is looking for survey takers who have some hands-on experience with or involvement in data storage (e.g. if you can spell SAN, NAS, Disk or SSD and work with them hands-on, then you are a candidate ;).

Welcome to the CompTIA Storage+ Certification Job Task Analysis (JTA) Survey

  • Your input will help CompTIA evaluate which test objectives are most important to include in the CompTIA Storage+ Certification Exam
  • Your responses are completely confidential.
  • The results will only be viewed in the aggregate.
  • Here is whom CompTIA is looking for feedback from:

  • Has at least 12 to 18 months of experience with storage-related technologies.
  • Makes recommendations and decisions regarding storage configuration.
  • Facilitates data security and data integrity.
  • Supports a multiplatform and multiprotocol storage environment with little assistance.
  • Has basic knowledge of cloud technologies and object storage concepts.
  • As a small token of CompTIA appreciation for your participation, they will provide an official CompTIA T-shirt to every tenth (1 of every 10) person who completes this survey. Go here for official rules.

    Click here to complete the CompTIA Storage+ survey

    Contact CompTIA with any survey issues, research@comptia.org

    What say you? Take a few minutes like I did and give some feedback; you will not be on the hook for anything, and if you do get spammed by the CompTIA folks, let me know and I in turn will spam them back for spamming you as well as me.

    Ok, nuff said

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    Despite being declared dead, Fibre Channel continues to evolve with FC-BB-6


    Like many technologies that have been around for more than a decade or two, Fibre Channel (FC) for networking with your servers and storage often gets declared dead when something new appears. It seems like just yesterday, when iSCSI was appearing on the storage networking scene in the early 2000s, that FC was declared dead, yet it remains and continues to evolve, including moving over Ethernet with FC over Ethernet (FCoE).

    Recently the Fibre Channel Industry Association (FCIA) made an announcement on continued development and enhancements, including FC-BB-6, which applies both to "classic" or "legacy" FC and to the newer and emerging FCoE implementations. FCIA is not alone in this activity; as the name implies, it is the industry consortium that works with the T11 standards folks. T11 is a Technical Committee of the International Committee on Information Technology Standards (INCITS, pronounced "insights").

    FCIA Fibre Channel Industry Association

    Keep in mind that there are a couple of pieces to Fibre Channel: the upper level protocols and the lower level transports.

    With FCoE, the upper level portions get mapped natively onto Ethernet without having to be mapped on top of IP, as happens with distance extension using FCIP.

    Likewise FCoE is more than simply mapping one of the FC upper level protocols (ULPs), such as the SCSI command set (aka SCSI_FCP), onto IP (e.g. iSCSI). Think of ULPs almost as a guest that gets transported or carried across the network. However, let's also be careful not to play the software defined network (SDN), virtual network, network virtualization or IO virtualization (IOV) card, at least not yet; we will leave that up to some creative marketers ;).
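The layering difference described above can be sketched as follows. This is a simplified illustration (not a wire-accurate protocol model) of how the SCSI command set rides different transports; the layer names are informal labels, not standard identifiers.

```python
# Simplified illustration of how the SCSI command set (an FC upper level
# protocol, SCSI_FCP) is carried by different transports.
# Layers listed top (payload) to bottom (wire).
PROTOCOL_STACKS = {
    "FC":    ["SCSI", "FCP", "FC framing", "FC physical"],
    "FCoE":  ["SCSI", "FCP", "FC framing", "Ethernet (lossless/DCB)"],
    "FCIP":  ["SCSI", "FCP", "FC framing", "FCIP", "TCP", "IP", "Ethernet"],
    "iSCSI": ["SCSI", "iSCSI", "TCP", "IP", "Ethernet"],
}

def rides_on_ip(name):
    """Does this transport map its payload on top of IP?"""
    return "IP" in PROTOCOL_STACKS[name]

# FCoE maps FC frames natively onto Ethernet with no IP layer involved,
# while FCIP (distance extension) and iSCSI both run over TCP/IP.
assert not rides_on_ip("FCoE")
assert rides_on_ip("FCIP") and rides_on_ip("iSCSI")
```

The point of the sketch: FCoE keeps the FC upper levels intact and swaps only the lower transport, which is why it is not just "SCSI over Ethernet" the way iSCSI is.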

    At the heart of Fibre Channel, beyond the cable and encoding scheme, are a set of protocols and command sets, one in particular being FC Backbone, now in its 6th version (read more here at the T11 site, or here at the SNIA site).

    Some of the highlights of the FCIA announcement include:

    VN2VN connectivity support, enabling direct point to point virtual links (not to be confused with point to point physical cabling) between nodes in an FCoE network. This simplifies configurations for smaller SAN networks where zoning might not be needed (e.g. removes complexity and cost).

    Support for Domain ID scalability, including more efficient use by FCoE fabrics, enabling larger scalability of converged SANs. Also keep an eye on the emerging T11 FC-SW-6 distributed switch architecture for implementation over Ethernet, now in the final stages of development.

    Here are my perspectives on this announcement by the FCIA:

    "Fibre Channel is a proven protocol for networked data center storage that just got better," said Greg Schulz, founder StorageIO.  "The FC-BB-6 standard helps to unlock the full potential of the Fibre Channel protocol that can be implemented on traditional Fibre Channel as well as via Ethernet based networks. This means FC-BB-6 enabled Fibre Channel protocol based networks give flexibility, scalability and secure high-performance resilient storage networks to be implemented."

    Both "classic" or "legacy" Fibre Channel based cabling and networking are still alive with a road map that you can view here.

    However FCoE also continues to mature and evolve, and in some ways FC-BB-6 and its associated technologies and capabilities can be seen as the bridge between the past and the future. Thus while the role of both FC and FCoE, along with other ways of networking with your servers and storage, continues to evolve, so too does the technology. Also keep in mind that not everything is the same in the data center or information factory, which is why we have different types of server, storage and I/O networks to address different needs, requirements and preferences.

    Additional reading and viewing on FC, FCoE and storage networking:

  • Where has the FCoE hype and FUD gone? (with poll)
  • More storage and IO metrics that matter
  • Why FC and FCoE vendors get beat up over bandwidth?
  • Is FCoE Struggling to Gain Traction, or on a normal adoption course?
  • Has FCoE entered the trough of disillusionment?
  • More on Fibre Channel over Ethernet (FCoE) which points to this PDF
  • Will 6Gb SAS kill Fibre Channel?
  • From bits to bytes: Decoding Encoding
  • How many IOPS can a HDD, HHDD or SSD do?
  • Can we get a side of context with them IOPS and other storage metrics?
  • Server Storage I/O Network Benchmark Winter Olympic Games
  • SAN, LAN, MAN, WAN, POTS and PANs – Network servers and storage beyond the cable (BrightTalk webinar)
  • Networking With Your Servers and Storage: Cloud, Virtual and Physical Environments (BrightTalk webinar)
  • Cloud and Virtual Storage Networking (CRC Press), Resilient Storage Networks (Elsevier)
  • Also check out these tips and articles, industry trends perspectives commentary in the news along with past and upcoming event activities.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


  • Care to help Coraid with a Storage I/O Content Conversation?

    Storage I/O trends

    Blog post – Can you help Coraid with a Storage I/O Content Conversation?

    Over the past week or so I have had many email conversations with the Coraid marketing/public relations (PR) folks, who want to share some of their unique or custom content with you.

    Normally I (aka @StorageIO) do not accept unsolicited (placed) content (particularly product pitches/placements) from vendors or their VARs, PR firms or surrogates, including third or fourth party placement firms. Granted, StorageIOblog.com does have site sponsors; per our policies, that is all those are: advertisements, with no more or less influence than others. StorageIO does do commissioned or sponsored custom content, including white papers and solution briefs among other things, with applicable disclosures and retention of editorial tone and control.

    Who is Coraid and what do they do?

    However, wanting to experiment with things, not to mention given Coraid's persistence, let's try something to see how it works.

    Coraid, for those who are not aware, provides an alternative storage and I/O networking solution called ATA over Ethernet, or AoE (here is a link to Coraid's analyst supplied content page). AoE enables servers with applicable software to use storage equipped with AoE technology (or via an applicably equipped appliance), using Ethernet as an interconnect and transport. On the low-end, AoE is an alternative to USB, Thunderbolt or direct attached SATA or SAS, along with switched or shared SAS (keep in mind SATA can plug into SAS, not vice versa).

    In addition, AoE is an alternative to the industry standard iSCSI (the SCSI command set mapped onto IP), which can be found in various solutions including as a software stack. Another area where AoE is positioned by Coraid is as an alternative to Fibre Channel SCSI_FCP (FCP) and Fibre Channel over Ethernet (FCoE). Keep in mind that Coraid AoE is block based (granted they have other solutions) as opposed to NAS (file) such as NFS, CIFS/SMB/SAMBA, pNFS or HDFS among others, and it uses native Ethernet frames as opposed to being layered on top of TCP/IP as iSCSI is.
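The "native Ethernet" point is what makes AoE so lean compared with iSCSI. As a rough sketch based on the published AoE specification (EtherType 0x88A2), the common AoE header that follows the Ethernet header is only a handful of bytes, with no TCP/IP layers beneath it; the field layout below is taken from that spec and hedged accordingly.

```python
import struct

AOE_ETHERTYPE = 0x88A2  # EtherType registered for ATA over Ethernet

def aoe_header(major, minor, command, tag, version=1, flags=0):
    """Pack the common AoE header that follows the Ethernet header.

    Fields per the AoE specification: ver/flags (1 byte), error (1),
    shelf/major address (2 bytes, big-endian), slot/minor (1),
    command (1), and a 4 byte tag used to match responses to requests.
    """
    ver_flags = (version << 4) | (flags & 0x0F)
    return struct.pack(">BBHBBI", ver_flags, 0, major, minor, command, tag)

# Example: an ATA command (command 0) addressed to shelf 1, slot 2.
hdr = aoe_header(major=1, minor=2, command=0, tag=0xDEADBEEF)
assert len(hdr) == 10  # compact fixed header, no TCP/IP underneath
```

Contrast that 10 byte header riding directly on an Ethernet frame with iSCSI, where the same ATA/SCSI payload would also carry TCP, IP and iSCSI PDU overhead.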

    Storage I/O trends

    So here is the experiment

    Since Coraid wanted to get their unique content placed, either by them or via others, let's see what happens in the comments section here at StorageIOblog.com. The warning of course: keep it respectful and courteous, with no bashing or disparaging comments about others (vendors, products, technology).

    Thus the experiment is simple: let's see how the conversation evolves into the caveats, benefits, tradeoffs and experiences of those who have used or looked into the solution (pro or con), and why they hold a particular opinion. If you have a perspective or opinion, no worries; however, put it in context, including whether you are a Coraid employee, VAR, reseller or surrogate, and likewise for those with other views (state who you are, your affiliation and other disclosures). Likewise, if providing or supplying links to any content (white papers, videos, webinars), including via third parties, provide applicable disclosures (e.g. whether it was sponsored and by whom, etc.).

    Disclosure

    While I have mentioned or provided perspectives about them via different venues (online, print and in person) in the past, Coraid has never been a StorageIO client. Likewise this is not an endorsement for or against Coraid and their AoE or other solutions; it is simply an industry trends perspective.

    Ok, nuff said (for now).

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)


    Can we get a side of context with them IOPS server storage metrics?


    Storage I/O trends

    Updated 2/10/2018

    What's the best server storage I/O network metric or benchmark? It depends, as there needs to be some context with them IOPS and other server storage I/O metrics that matter.

    There is an old saying that the best I/O (Input/Output) is the one that you do not have to do.

    In the meantime, let’s get a side of some context with them IOPS from vendors, marketers and their pundits who are tossing them around for server, storage and IO metrics that matter.

    Expanding the conversation, the need for more context

    The good news is that people are beginning to discuss storage beyond space capacity and cost per GByte, TByte or PByte for DRAM or nand flash Solid State Devices (SSD) and Hard Disk Drives (HDD), along with Hybrid HDD (HHDD) and Solid State Hybrid Drive (SSHD) based solutions. This applies to traditional enterprise or SMB IT data centers with physical, virtual or cloud based infrastructures.

    hdd and ssd iops

    This is good because it expands the conversation beyond just cost for space capacity into other aspects, including performance (IOPS, latency, bandwidth) for various workload scenarios, along with availability, energy effectiveness and management.

    Adding a side of context

    The catch is that IOPS, while part of the equation, are just one aspect of performance, and by themselves, without context, may have little meaning, if not be misleading, in some situations.

    Granted, it can be entertaining, fun to talk about, or simply good press copy to claim a million IOPS. However, IOPS vary in size depending on the type of work being done, not to mention reads or writes, random or sequential, which also have a bearing on data throughput or bandwidth (Mbytes per second) along with response time.

    However, are those million IOPS applicable to your environment or needs?

    Likewise, what do those million or more IOPS represent about the type of work being done? For example, are they small 64 byte or large 64 Kbyte sized, random or sequential, cached reads or lazy writes (deferred or buffered), on an SSD or HDD?

    How about the response time or latency for achieving them IOPS?
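The IO size question above is not academic: the same IOPS headline implies wildly different throughput depending on IO size. A back-of-the-envelope sketch (the numbers are illustrative, not from any vendor benchmark):

```python
def bandwidth_mb_s(iops, io_size_bytes):
    """Bandwidth (decimal MB/s) implied by an IOPS rate at a given IO size."""
    return iops * io_size_bytes / 1_000_000

# The same "million IOPS" headline means very different data movement
# depending on IO size, which is why size context matters.
small = bandwidth_mb_s(1_000_000, 64)         # 64 byte IOs
large = bandwidth_mb_s(1_000_000, 64 * 1024)  # 64 KByte IOs
assert small == 64.0      # a million tiny IOs move only 64 MB/s
assert large == 65536.0   # the same IOPS at 64K is ~65.5 GB/s
```

A factor of 1,024 difference in bandwidth from the identical IOPS number, just by changing the IO size, is exactly the missing context this post is asking for.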

    In other words, what is the context of those metrics and why do they matter?

    storage i/o iops
    Click on image to view more metrics that matter including IOP’s for HDD and SSD’s

    Metrics that matter give context, for example IO sizes closer to what your real needs are, reads and writes, mixed workloads, random or sequential, sustained or bursty; in other words, real world reflective.

    As with any benchmark, take them with a grain (or more) of salt; the key is to use them as an indicator, then align to your needs. The tool or technology should work for you, not the other way around.

    Here are some examples of context that can be added to help make IOPS and other metrics matter:

    • What is the IO size; are they 512 bytes (or smaller) vs. 4K bytes (or larger)?
    • Are they reads, writes, random, sequential or mixed, and in what percentage?
    • How was the storage configured, including RAID, replication, erasure or dispersal codes?
    • Then there is the latency or response time and IO queue depth for the given number of IOPS.
    • Let us not forget whether the storage systems (and servers) were busy with other work or not.
    • If there is a cost per IOP, is that list price or discounted (hint: if discounted, start negotiations from there)?
    • What was the number of threads or workers, along with how many servers?
    • What tool was used, its configuration, as well as raw or cooked (aka file system) IO?
    • Was the IOPS number with one worker or multiple workers on a single server or multiple servers?
    • Did the IOPS number come from a single storage system or a total across multiple systems?
    • Fast storage needs fast servers and networks; what was their configuration?
    • Was the performance a short burst, or a long sustained period?
    • What was the size of the test data used; did it all fit into cache?
    • Were short stroking for IOPS or long stroking for bandwidth techniques used?
    • Were data footprint reduction (DFR) techniques (thin provisioning, compression or dedupe) used?
    • Were writes committed synchronously to storage, or deferred (aka lazy writes)?

    The above are just a sampling, and not all may be relevant to your particular needs; however, they help to put IOPS into more context. Another consideration around IOPS is the configuration of the environment: are they from an actual running application using some measurement tool, or are they generated from a workload tool such as IOmeter, IOrate or VDbench among others?

    Sure, there are more contexts and information that would be interesting as well, however learning to walk before running will help prevent falling down.
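The latency and queue depth bullet above has a simple relationship behind it, Little's Law: the achievable IOPS rate is roughly the number of outstanding IOs divided by the per-IO latency. A small sketch with illustrative numbers (not measurements from any specific device):

```python
def iops_from_littles_law(queue_depth, latency_s):
    """Little's Law: concurrency = arrival rate x time in system,
    so achievable IOPS ~= outstanding IOs / per-IO latency (seconds)."""
    return queue_depth / latency_s

# 32 outstanding IOs at 100 microseconds each is roughly 320K IOPS;
# the same latency at queue depth 1 yields only about 10K IOPS.
assert round(iops_from_littles_law(32, 100e-6)) == 320_000
assert round(iops_from_littles_law(1, 100e-6)) == 10_000
```

This is why a big IOPS number without the queue depth and latency behind it tells you little: deep queues can inflate IOPS while each individual IO waits longer.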

    Storage I/O trends

    Does size or age of vendors make a difference when it comes to context?

    Some vendors are doing a good job of going for out-of-this-world, record-setting marketing hero numbers.

    Meanwhile other vendors are doing a good job of adding context to their IOPS, response time, bandwidth and other metrics that matter. There is a mix of startups and established vendors that give context with their IOPS; likewise, size or age does not seem to matter for those who lack context.

    Some vendors may not offer metrics or information publicly, so fine, go under NDA to learn more and see if the results are applicable to your environments.

    Likewise, if they do not want to provide the context, then ask some tough yet fair questions to decide if their solution is applicable for your needs.

    Storage I/O trends

    Where To Learn More

    View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

    What this means is let us start putting forward, and asking for, metrics that matter, such as IOPS with context.

    If you have a great IOPS metric and you want it to matter, then include some context such as IO size (e.g. 4K, 8K, 16K, 32K, etc.), percentage of reads vs. writes, latency or response time, and random or sequential access.

    IMHO the most interesting or applicable metrics that matter are those relevant to your environment and application. For example, if your main application that needs SSD does about 75% reads (random) and 25% writes (sequential) with an average size of 32K, then while fun to hear about, how relevant is a million 64 byte read IOPS? Likewise, when looking at IOPS, pay attention to the latency, particularly if SSD or performance is your main concern.
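To make that example concrete, here is a small sketch that computes the throughput of a mixed read/write workload; the 10,000 IOPS figure is a hypothetical rate chosen for illustration, not from the post:

```python
def mixed_workload_mb_s(iops, read_pct, read_kb, write_kb):
    """Throughput (decimal MB/s) of a mixed read/write workload.
    read_pct is the fraction of IOs that are reads; sizes in KBytes."""
    read_bytes = iops * read_pct * read_kb * 1024
    write_bytes = iops * (1 - read_pct) * write_kb * 1024
    return (read_bytes + write_bytes) / 1_000_000

# The workload above: 75% reads, 25% writes, 32K average size.
# At a modest (hypothetical) 10,000 IOPS, that moves ~328 MB/s, which
# is more data than a million 64 byte reads (64 MB/s), despite the
# far smaller IOPS number.
assert round(mixed_workload_mb_s(10_000, 0.75, 32, 32)) == 328
```

Which again is the point: a workload-relevant 10K IOPS can matter more than an eye-catching million.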

    Get in the habit of asking or telling vendors or their surrogates to provide some context with them metrics if you want them to matter.

    So how about some context around them IOP’s (or latency and bandwidth or availability for that matter)?

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Web chat Thur May 30th: Hot Storage Trends for 2013 (and beyond)

    Storage I/O trends

    Join me on Thursday, May 30, 2013 at Noon ET (9AM PT) for a live web chat at the 21st Century IT (21cit) site (click here to register, sign up, or view earlier posts). This will be an online, interactive web chat format conversation, so if you are not able to attend, you can visit at your convenience to view and leave your questions along with comments. I have done several of these web chats with 21cit as well as other venues; they are a lot of fun and engaging (time flies by fast).

    For those not familiar, 21cit is part of the DeusM/UBM family of sites, including Internet Evolution, SMB Authority, and Enterprise Efficiency among others, that I do article posts, videos and live chats for.


    Sponsored by NetApp

    I like these types of sites in that while they have a sponsor, the content is generally kept separate between that of editors and contributors like myself and the vendor supplied material. In other words, I coordinate with the site editors on what topics I feel like writing (or doing videos) about that align with the given site's focus and themes, as opposed to following an advertorial calendar script.

    During this industry trends perspective web chat, one of the topics and themes planned for discussion is software defined storage (SDS). View a recent video blog post I did here about SDS. In addition to SDS, Solid State Devices (SSD) including nand flash, cloud, virtualization, object storage, backup and data protection, performance, and management tools among others are topics that will be put out on the virtual discussion table.

    Storage I/O trends

    Following are some examples of recent and earlier industry trends perspectives posts that I have done over at 21cit:

    Video: And Now, Software-Defined Storage!
    There are many different views on what is or is not “software-defined” with products, protocols, preferences and even press releases. Check out the video and comments here.

    Big Data and the Boston Marathon Investigation
    How the human face of big-data will help investigators piece together all the evidence in the Boston bombing tragedy and bring those responsible to justice. Check out the post and comments here.

    Don’t Use New Technologies in Old Ways
    You can add new technologies to your data center infrastructure, but you won’t get the full benefit unless you update your approach with people, processes, and policies. Check out the post and comments here.

    Don’t Let Clouds Scare You, Be Prepared
    The idea of moving to cloud computing and cloud services can be scary, but it doesn’t have to be so if you prepare as you would for implementing any other IT tool. Check out the post and comments here.

    Storage and IO trends for 2013 (& Beyond)
    Efficiency, new media, data protection, and management are some of the keywords for the storage sector in 2013. Check out these and other trends, predictions along with comments here.

    SSD and Real Estate: Location, Location, Location
    You might be surprised how many similarities between buying real estate and buying SSDs.
    Location matters and it’s not if, rather when, where, why and how you will be using SSD including nand flash in the future, read more and view comments here.

    Everything Is Not Equal in the Data center, Part 3
    Here are steps you can take to give the right type of backup and protection to data and solutions, depending on the risks and scenarios they face. The result? Savings and efficiencies. Read more and view comments here.

    Everything Is Not Equal in the Data center, Part 2
    Your data center’s operations can be affected at various levels, by multiple factors, in a number of degrees. And, therefore, each scenario requires different responses. Read more and view comments here.

    Everything Is Not Equal in the Data center, Part 1
    It pays to check your data center. Different components need different levels of security, storage, and availability. Read more and view comments here.

    Data Protection Modernizing: More Than Buzzword Bingo
    IT professionals and solution providers should put technologies such as disk based backup, dedupe, cloud, and data protection management tools as assets and resources to make sure they receive necessary funding and buy in. Read more and view comments here.

    Don’t Take Your Server & Storage IO Pathing Software for Granted
    Path managers are valuable resources. They will become even more useful as companies continue to carry out cloud and virtualization solutions. Read more and view comments here.

    SSD Is in Your Future: Where, When & With What Are the Questions
    During EMC World 2012, EMC (as have other vendors) made many announcements around flash solid-state devices (SSDs), underscoring the importance of SSDs to organizations future storage needs. Read more here about why SSD is in your future along with view comments.

    Changing Life cycles and Data Footprint Reduction (DFR), Part 2
    In the second part of this series, the ABCDs (Archive, Backup modernize, Compression, Dedupe and data management, storage tiering) of data footprint reduction, as well as SLOs, RTOs, and RPOs are discussed. Read more and view comments here.

    Changing Life cycles and Data Footprint Reduction (DFR), Part 1
    Web 2.0 and related data needs to stay online and readily accessible, creating storage challenges for many organizations that want to cut their data footprint. Read more and view comments here.

    No Such Thing as an Information Recession
    Data, even older information, must be protected and made accessible cost-effectively. Not to mention that people and data are living longer as well as getting larger. Read more and view comments here.

    Storage I/O trends

    These real-time, industry trends perspective interactive chats at 21cit are open forum format (however, be polite and civil), and are not vendor sales or marketing pitches. If you have specific questions you'd like to ask or points of view to express, click here and post them in the chat room at any time (before, during or after).

    Mark your calendar for this event live Thursday, May 30, at noon ET or visit after the fact.

    Ok, nuff said (for now)

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio


    SNIA’s new SPDEcon conference


    This is a new episode in the continuing StorageIO industry trends and perspectives pod cast series (you can view more episodes or shows along with other audio and video content here) as well as listening via iTunes or via your preferred means using this RSS feed (https://storageio.com/StorageIO_Podcast.xml)

    StorageIO industry trends cloud, virtualization and big data

    In this episode from SNW Spring 2013 in Orlando, Florida, Bruce Ravid (@BruceRave) and I visit with our guests, SNIA Chairman Wayne Adams (@wma01606) and SW Worth from SNIA Education. Wayne was one of our first podcast guests, in the episode titled Waynes World, SNIA and SNW, which you can listen to here.

    SNIA image logo

    Our conversation centers around the new SNIA SPDEcon conference that will occur June 10th in Santa Clara California.

    SNIA SPDEcon image

    The tag line of the event is "for experts, by experts and those who want to become experts." Listen to our conversation and check out the snia.org and snia.org/spdecon websites to sign up and take part in this new event.

    Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Wayne and SW.

    StorageIO podcast


    Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts and podcasts, along with other upcoming events. Also be sure to check out other related podcasts, videos, posts, tips and industry commentary at StorageIO.com and StorageIOblog.com.

    Enjoy this episode from SNW Spring 2013 with Wayne Adams and SW Worth of SNIA to learn about the new SPDEcon conference.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio


    Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO its off to work they go

    StorageIO industry trends and perspectives

    In case you missed it, VMware recently announced spending $1.05 billion USD to acquire startup Nicira for its virtualization and software technology that enables software defined networks (SDN). Also last week, Oracle was in the news getting its hands slapped for making misleading advertisement performance claims vs. IBM.

    On the heels of VMware buying Nicira for software defined networking (SDN), or what is also known as IO virtualization (IOV) and virtualized networking, Oracle is now claiming its own SDN capabilities with its announcement of intent to acquire Xsigo. Founded in 2004, Xsigo has a hardware platform combined with software that enables attachment of servers to different Fibre Channel (SAN) and Ethernet based (LAN) networks with its version of IOV.

    Oracle will be acquiring Xsigo, an IO, networking and virtualization hardware and software vendor, for an undisclosed amount. Xsigo has made its name in the IO virtualization (IOV) and converged networking space, along with server and storage virtualization, over the past several years, including partnerships with various vendors.

    Buzz word bingo

    Technology buzzwords and buzz terms can often be a gray area, leaving plenty of room for marketers and PR folks to run with. Case in point: AaaS, Big data, Cloud, Compliance, Green, IaaS, IOV, Orchestration, PaaS and Virtualization, among other buzzword bingo or XaaS topics. Since Xsigo has been out front in messaging and industry awareness around IO networking convergence of Ethernet based Local Area Networks (LANs) and Fibre Channel (FC) based Storage Area Networks (SANs), along with embracing InfiniBand, it made sense for them to play to their strength, which is IO virtualization (aka IOV).

    To me and others (here and here and here), it is interesting that Xsigo had not laid claim to being part of the software defined networking (SDN) movement or the affiliated OpenFlow networking initiatives, as happened with Nicira (and now Oracle, for that matter). In the press release that the Oracle marketing and PR folks put out on a Monday morning, some of the media and press, trade industry, financial and general news agencies alike, took the Oracle script hook, line and sinker and ran with it.

    What was telling is how many industry trade pubs and their analysts simply picked up the press release story and ran with it, in the all too common race to see who can get the news or story out first, or in some cases before it actually happens.

    Image of media, newspapers

    To be clear, not all pubs jumped in, including some of those mentioned by Greg Knieriemen (aka @knieriemen) over at the Speaking in Tech highlights. I know some who took the time to call, ask around, and leverage their journalistic training to dig, research and find out what this really meant, vs. simply taking and running with the script. One of those calls was with Beth Pariseau (aka @pariseautt); you can read her stories here and here.

    Interestingly enough, the Xsigo marketers had not embraced the SDN term, sticking with the more known (at least in some circles) IOV description. What is also interesting is that just last week Oracle marketing had its hands slapped by the Better Business Bureau (BBB) NAD after IBM complained about unfair performance based advertisements for ExaData.


    Hmm, I wonder if the SDN police or somebody else will lodge a similar complaint with the BBB on behalf of those doing SDN?

    Both Oracle and Xsigo, along with other InfiniBand (and some Ethernet and PCIe) focused vendors, are members of the OpenFabrics Alliance, not to be confused with the group working on OpenFlow.


    Here are some other things to think about:

    Oracle has a history of doing acquisitions without disclosing terms, as well as doing them based on earn-outs, such as was the case with Pillar.

    Oracle uses Ethernet in its servers and appliances and has been an adopter of InfiniBand, primarily for node-to-node communication, however also for server-to-application traffic.

    Oracle is also an investor in Mellanox, the folks that make InfiniBand and Ethernet products.

    Oracle has built various stacks including Exadata (the database machine), Exalogic, Exalytics and the Oracle Database Appliance, in addition to their 7000 series of storage systems.

    Oracle has done earlier virtualization-related acquisitions, including Virtual Iron.

    Oracle has a reputation among some of their customers, who love to hate them for various reasons.

    Oracle has a reputation of being aggressive, even by the standards of other aggressive market leaders.

    Integrated solution stacks (aka stack wars), or what some remember as bundles, continue, and Oracle has many such solutions.

    What will happen to Xsigo as you know it today remains to be seen (besides what the press releases are saying).

    While Xsigo was not a member of the Open Networking Foundation (ONF), Oracle is.

    Xsigo is a member of the OpenFabrics Alliance along with Oracle, Mellanox and others interested in servers, PCIe, InfiniBand, Ethernet, networking and storage.


    What’s my take?

    While there are similarities in that both Nicira and Xsigo are involved with IO virtualization, what they are doing, how they are doing it, who they are doing it with, and where they can play all vary.

    Not sure what Oracle paid; however, assuming it was in the couple of million dollars or less, in cash or a combination of stock, both they and the investors, as well as some of the employees, friends and families, did OK.

    Oracle also gets some intellectual property that they can combine with other earlier acquisitions via Sun and Virtual Iron, along with their investment in InfiniBand (and now also Ethernet) vendor Mellanox.

    Likewise, Oracle gets some extra technology that they can leverage in their various stacked or integrated (aka bundled) solutions for both virtual and physical environments.

    For Xsigo customers the good news is that you now know who will be buying the company; however, there should be questions about the future beyond what is being said in press releases.

    Does this acquisition give Oracle a play in the software defined networking space like Nicira gives VMware? I would say no, given Xsigo's hardware dependency; however, it does give Oracle some extra technology to play with.

    Likewise, while SDN is important and a popular buzzword topic, since OpenFlow comes up in conversations, perhaps that should be more of the focus vs. whether a solution is all software, or hardware and software.


    I also find it entertaining how last week the Better Business Bureau (BBB) and its National Advertising Division (NAD) slapped Oracle's hands after IBM complained of misleading performance claims about Oracle Exadata vs. IBM. The reason I find it entertaining is not that Oracle had its hands slapped or that IBM complained to the BBB, rather how the Oracle marketers and PR folks came up with a spin around what could be called a proprietary SDN (hmm, pSDN?) story, fed it to the press and media, who then ran with it.

    I'm not convinced that this is an all-out launch of a war by Oracle vs. Cisco, let alone any of the other networking vendors, as some have speculated (it makes for good headlines though). Instead, I'm seeing it as more of an opportunistic acquisition by Oracle, most likely at a good middle-of-summer price. Now if Oracle really wanted to go to battle with Cisco (and others), then there are others to buy, such as Brocade or Juniper. However, there are other opportunities for Oracle to be focused on (or sidetracked by) right now.

    Oh, let's also see what Cisco has to say about all of this, which should be interesting.

    Additional related links:
    Data Center I/O Bottlenecks Performance Issues and Impacts
    I/O, I/O, Its off to Virtual Work and VMworld I Go (or went)
    I/O Virtualization (IOV) Revisited
    Industry Trends and Perspectives: Converged Networking and IO Virtualization (IOV)
    The function of XaaS(X) Pick a letter
    What is the best kind of IO? The one you do not have to do
    Why FC and FCoE vendors get beat up over bandwidth?


    If you are interested in learning more about IOV, Xsigo, or are having trouble sleeping, click here, here, here, here, here, here, here, here, here, here, here, here, here, or here (I think that's enough links for now ;).

    Ok, nuff said for now, as I have probably requalified for being on the Oracle you-know-what list for not sticking to the story script, oops, excuse me, I mean press release message.

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved

    How can direct attached storage (DAS) make a comeback if it never left?


    Have you seen or heard the theme that Direct Attached Storage (DAS), either dedicated or shared, internal or external is making a comeback?

    Wait, if something did not go away, how can it make a comeback?

    IMHO it is as simple as this: for the past decade or so, DAS has been overshadowed by shared networked storage, including switched SAS, iSCSI, Fibre Channel (FC) and FC over Ethernet (FCoE) based block storage area networks (SANs), and file-based (NFS and Windows SMB/CIFS) network attached storage (NAS) using IP and Ethernet networks. This has been particularly true of most of the independent storage vendors, who have become focused on networked storage (SAN or NAS) solutions.

    However, some of the server vendors have also jumped into the deep end of the storage pool with their enthusiasm for networked storage, even though they still sell a lot of DAS, including internal dedicated, along with external dedicated and shared storage.


    The trend for DAS storage has evolved along with the interfaces and storage mediums, including from parallel SCSI and IDE to SATA, and more recently 3Gbps and 6Gbps SAS (with 12Gbps in early lab trials). Similarly, the storage mediums include a mix of fast 10K and 15K hard disk drives (HDDs) along with high-capacity HDDs and ultra-high-performance solid state devices (SSDs), moving from 3.5-inch to 2.5-inch form factors.
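    As a rough sketch of how those interface generations compare, usable bandwidth per SAS lane can be approximated from the line rate, assuming 8b/10b encoding (ten bits on the wire per data byte) and ignoring protocol and command overhead:

```python
def usable_mb_per_sec(line_rate_gbps: float) -> float:
    """Approximate usable MB/s per SAS lane, assuming 8b/10b encoding.

    Illustrative only: ignores protocol framing, command overhead,
    and whatever the attached drives can actually sustain.
    """
    return line_rate_gbps * 1e9 / 10 / 1e6  # line rate / 10 bits per byte, in MB/s

for name, gbps in [("3Gbps SAS", 3), ("6Gbps SAS", 6), ("12Gbps SAS", 12)]:
    print(f"{name}: ~{usable_mb_per_sec(gbps):.0f} MB/s per lane")
```

    Real-world throughput will of course be lower once command overhead and drive limits are factored in; the point is simply that each generation roughly doubles the per-lane ceiling.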

    While there has been a lot of industry and vendor marketing effort around networked storage (e.g. SAN and NAS), DAS-based storage was overshadowed, so it should not be a surprise that those focused on SAN and NAS are surprised to hear DAS is alive and well. Not only is DAS alive and well, it's also becoming an important scaling and convergence topic for adding extra storage to appliances as well as servers, including those for scale-out, big data, cloud and high-density deployments, not to mention high-performance and high-productivity computing.


    Consequently, it's becoming OK to talk about DAS again. Granted, you might get some peer pressure from your trend-setting or trend-following friends to get back on the networked storage bandwagon. Keep this in mind: take a look at some of the cool trend-setting big data and little data (database) appliances, backup, dedupe and archive appliances, cloud and scale-out NAS, and object storage systems, among others, and you will likely find DAS on the back-end. On a smaller scale, or in high-density rack deployments in large cloud or similar environments, you may also find DAS, including switched shared SAS.

    Does that mean SANs are dead?
    No, not IMHO, despite what some vendors' marketers and their followers will claim, which is ironic given how some of them were leading the DAS-is-dead campaign in favor of iSCSI or FC or NAS a few years ago. However, simply comparing DAS to SAN or NAS in a competing way is like comparing apples to oranges; instead, look at how and where they can complement and enable each other. In other words, different tools for various tasks, and various storage and interfaces for different needs.

    Thus IMHO DAS never left or went anywhere per se, it just was not fashionable or cool to talk about until now, as it is cool and trendy to discuss it again.

    Ok, nuff said for now.

    Cheers Gs


    Why FC and FCoE vendors get beat up over bandwidth?


    Have you noticed how Fibre Channel (FC) and FC over Ethernet (FCoE) switch and adapter vendors and their followers focus on bandwidth vs. response time, latency or other performance activity? For example, 8Gb FC (e.g. 8GFC) or 10Gb FCoE, as opposed to latency and response time, or IOPS and other activity indicators.

    When you look at your own environment, or that of a customer or prospect, or hear a conversation involving storage networks, is the focus on bandwidth, or the lack of it, or is throughput treated as a non-issue? For example, a customer asks why go to 16GFC when they are barely using 8Gb with their current FC environment.

    This is not a new phenomenon and is something I saw when working for a storage networking vendor who had SAN, MAN and WAN solutions (e.g. INRANGE). Those with networking backgrounds tended to focus on bandwidth when discussing storage networks, while those with storage, server or applications backgrounds also look at latency or IO completion time (response time), queuing, message size, IOPS, or frames and packets per second. Thus there are different storage and networking metrics that matter, which are also discussed further in my first book, Resilient Storage Networks: Designing Flexible Scalable Data Infrastructures.

    When I hear a storage networking vendor talk about their latest 16GFC-based product, I like to ask them what the biggest benefit is vs. 8GFC, and not surprisingly, the usual response is something like "twice the bandwidth." When I ask what that means in terms of more IOPS in a given amount of time, reduced IO completion time or lower latency, I often get a response along the lines of "Yeah, that too, however it has twice the bandwidth."

    Ok, I get it: yes, bandwidth is important for some applications, however so too is activity, measured in IOPS, transactions, packets, frames, pages, sequences and exchanges, among other units of measure, along with response time and latency (e.g. different storage and networking metrics that matter).
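    To put the bandwidth vs. activity distinction in concrete terms, here is a back-of-envelope sketch of the theoretical IOPS ceiling a link can carry at a given IO size. It is illustrative only: it assumes an idealized line rate and ignores FC encoding, queuing, and command overhead, all of which reduce what a real link delivers.

```python
def max_iops(link_gbps: float, io_size_kb: float) -> float:
    """Theoretical upper bound on IOPS for a link at a given IO size.

    Idealized: no encoding, queuing or command overhead is modeled.
    """
    bytes_per_sec = link_gbps * 1e9 / 8        # bits per second -> bytes per second
    return bytes_per_sec / (io_size_kb * 1024)

# Doubling bandwidth (8GFC -> 16GFC) doubles the IOPS ceiling, yet with
# small 4KB IOs neither link is easily saturated, which is why response
# time, latency and queuing are often the more useful things to discuss.
for gbps in (8, 16):
    print(f"{gbps}GFC: up to ~{max_iops(gbps, 4):,.0f} IOPS at 4KB")
```

    The point is not the absolute numbers, which are idealized, but that bandwidth alone says little about small-IO activity or IO completion time.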

    What many storage networking vendors actually get, however do not talk about for various reasons (perhaps because they are not asked about it, or not engaged in the conversation), is that there is also an improvement in response time in going from, say, 8GFC to 16GFC, in addition to the more commonly discussed bandwidth.

    If you are a storage networking switch, adapter or other component vendor, VAR or channel partner, expand your conversation to include activity and response time as part of your value proposition. Likewise, if you are a customer, ask your technology providers to expand on how new technologies help in areas other than bandwidth.

    Ok, nuff said for now

    Cheers Gs


    Spring (May) 2012 StorageIO news letter


    Welcome to the Spring (May) 2012 edition of the Server and StorageIO Group (StorageIO) newsletter. This follows the Fall (December) 2011 edition.

    You can get access to this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions.

    Click on the following links to view the Spring (May) 2012 edition as HTML or PDF, or go to the newsletter page to view previous editions.

    You can subscribe to the newsletter by clicking here.

    Enjoy this edition of the StorageIO newsletter, and let me know your comments and feedback.

    Nuff said for now

    Cheers
    Gs


    Is 14.4TBytes of data storage for $52,503 a good deal? It depends!

    A news story about the school board in Marshall, Missouri approving data storage plans, in addition to getting good news on health insurance rates, just came into my inbox.

    I do not live in or anywhere near Marshall, Missouri, as I live about 420 miles north in the Stillwater, Minnesota area.

    What caught my eye about the story is the dollar amount ($52,503) and capacity amount (14.4TByte) for the new Marshall school district data storage solution to replace their old, almost full 4.8TByte system.

    That prompted me to wonder whether the school district is getting a really good deal (if so, congratulations), paying too much, or paying about the right amount.
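    As a quick back-of-envelope check on the headline numbers (illustrative only, since we do not know whether the 14.4TBytes is raw or usable capacity, or what else is bundled into the price):

```python
def cost_per_tb(total_cost: float, capacity_tb: float) -> float:
    """Simple raw cost per TByte; ignores features, service and warranty."""
    return total_cost / capacity_tb

# $52,503 for 14.4TBytes works out to roughly $3,646 per TByte.
print(f"${cost_per_tb(52503, 14.4):,.2f} per TByte")
```

    Of course, cost per capacity alone says nothing about the value of the performance, availability and features included, which is exactly the point of the questions that follow.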


    Not knowing what type of storage system they are getting, it is difficult to know what type of value the Marshall school district is getting with their new solution. For example, what type of performance and availability in addition to capacity? What type of system and features, such as snapshots, replication, data footprint reduction (DFR) capabilities (archive, compression, dedupe, thin provisioning), backup, cloud access, redundancy for availability, application agents or integration, virtualization support, and tiering? Or whether the 14.4TBytes is total (raw) or usable storage capacity, or if it includes two storage systems for replication. Or what type of drives (SSDs, fast SAS HDDs, or high-capacity SAS or SATA HDDs), block (iSCSI, SAS or FC), NAS (CIFS and NFS) or unified access, and management software and reporting tools, among other capabilities, not to mention service and warranty.

    Sure there are less expensive solutions that might work, however since I do not know what their needs and wants are, saying they paid too much would not be responsible. Likewise, not knowing their needs vs. wants, requirements, growth and application concerns, given that there are solutions that cost a lot more with extensive capabilities, saying that they got the deal of the century would also not be fair. Maybe somewhere down the road we will hear some vendor and VAR make a press release announcement about their win in taking out a competitor from the Marshall school district, or perhaps that they upgraded a system they previously sold so we can all learn more.

    With school districts across the country trying to stretch their budgets to go further while supporting growth, it would be interesting to hear more about what type of value the Marshall school district is getting from their new storage solution. Likewise, it would also be interesting to hear what alternatives they looked at that were more expensive, as well as cheaper however with less functionality. I’m guessing some of the cloud crowd cheerleaders will also want to know why the school district is going the route they are vs. going to the cloud.

    IMHO value is not the same thing as less or lower cost or cheaper; instead it's the benefit derived vs. what you pay. This means that something might cost more than a cheaper alternative, however if I get more benefit from what might be more expensive, then it has more value.


    If you are a school district of similar size, what criteria or requirements would you want as opposed to need, and then what would you do or have you done?

    What if you are a commercial or SMB environment? Again, not knowing the feature, functionality and benefit being obtained, what requirements would you have, including nice-to-haves vs. must-haves (e.g. what you are willing to pay more for), and what would you do or have done?

    How about if you were a cloud or managed service provider (MSP) or a VAR representing one of the many services, what would your pitch and approach be beyond simply competing on a cost per TByte basis?

    Or if you are a vendor or VAR facing a similar opportunity, again not knowing the requirements, what would you recommend a school district or SMB environment do, why, and how would you cost-justify it?

    What this all means to me is the importance of looking beyond lowest cost, or cost per capacity (e.g. cost per GByte or TByte), and also factoring in value, that is, the feature functionality benefit.

    Ok, nuff said for now, I need to get my homework assignments done.

    Cheers gs
