Microsoft Azure Elastic SAN from Cloud to On-Prem

What is Azure Elastic SAN

Azure Elastic SAN (AES) is a new (now GA) Azure cloud-native storage service that provides scalable, resilient, easy-to-manage, rapidly provisioned, high-performance, and cost-effective storage. AES (figure 1) supports many workloads and computing resources. Workloads that benefit from AES include tier 1 and tier 2 applications such as mission critical, database, and VDI, among others that traditionally rely upon consolidated Storage Area Network (SAN) shared storage.

Compute resources that can use AES include bare metal (BM) physical machines (PM), virtual machines (VMs), and containers, among others, all using iSCSI for access. AES is accessible by computing resources and services within the Azure cloud in various regions (check the Azure website for specific region availability) and from on-prem core and edge locations using iSCSI. The AES management experience and value proposition are similar to traditional hardware or software-defined shared SAN storage combined with Azure cloud-based management capabilities.

Figure 1 General Concept and Use of Azure Elastic SAN (AES)

While Microsoft Azure describes AES as a cloud-native storage solution, that does not mean AES is only for containers and other cloud-native apps or DevOps. Rather, AES has been built for and is native to the cloud (e.g., software-defined) and can be accessed by various compute and other resources (e.g., VMs, containers, AKS, etc.) using iSCSI.

How Azure Elastic SAN differs from other Azure Storage

AES differs from traditional Azure block storage (e.g., Azure Disks) in that the storage is independent of the host compute server (e.g., BM, PM, VM, containers). With AES, similar to a conventional software-defined or hardware-based shared SAN solution, storage is disaggregated from host servers for sharing and management, using iSCSI for connectivity. By comparison, traditional Azure VM-based storage is typically associated with a given virtual machine in a DAS (Direct Attached Storage) type configuration. Likewise, as in conventional on-prem environments, there can be a mix of DAS and SAN, including host servers that leverage both.

AES supports Azure VM, Azure Kubernetes Service (AKS), cloud-native, edge, and on-prem computing (BM, VM, etc.) via iSCSI. Support for Azure VMware Solution (AVS) is in preview; check the Microsoft Azure website for updates and new feature functionality enhancements.

Does this mean everything is moving to AES? Similar to traditional SANs, there are roles and needs for various storage options, including DAS, shared block, file, and object, among other storage offerings. Likewise, Microsoft and Azure have expanded their storage offerings to include AES, DAS (Azure Disks, including Ultra, Premium, and Standard, among other options), append, block, and page blobs (objects), and files (including Azure File Sync), as well as tables and Data Box, among other storage services.

Azure Elastic SAN Feature Highlights

AES feature highlights include, among others:

    • Management via Azure Portal and associated tools
    • Azure cloud-based shared scalable block storage
    • Scalable capacity, low latency, and high performance (IOPs and throughput)
    • Space capacity-optimized without the need for data reduction
    • Accessible from within Azure cloud and from on-prem using iSCSI
    • Supports Azure compute (VMs, containers/AKS, Azure VMware Solution)
    • On-prem access via iSCSI from PM/BM, VM, and containers
    • Variable number of volumes and volume size per volume group
    • Flexible, easy-to-use Azure cloud-based management
    • Encryption and network private endpoint security
    • Locally redundant (LRS) and zone-redundant (ZRS) replication for resiliency
    • Volume snapshots and cluster support

Who is Azure Elastic SAN for

AES is for those who need cost-effective, shared, resilient, high-capacity, high-performance (IOPS, bandwidth), low-latency block storage within Azure or accessed from on-prem. Others who can benefit from AES include those who need shared block storage for clustered app workloads, server and storage consolidation, and hybrid cloud and migration scenarios. AES will also be familiar territory for those used to traditional hardware and software-defined SANs, which helps facilitate hybrid and migration strategies.

How Azure Elastic SAN works

Azure Elastic SAN is a software-defined (cloud-native if you prefer) block storage offering that presents a virtual SAN accessible within the Azure cloud and from on-prem core and edge locations, currently via iSCSI. Using iSCSI, Azure VMs, clusters, containers, and Azure VMware Solution, among other compute resources and services, as well as on-prem BM/PM, VMs, and containers, can access AES storage volumes.

From the Azure Portal or associated tools (Azure CLI or PowerShell), create an AES SAN, give it a 3-to-24-character name, and specify storage capacity (base units with performance, plus any additional space capacity). Next, create a Volume Group, assigning it to a specific subscription and resource group (new or existing), then specify which Azure region to use, the type of redundancy (LRS or ZRS), and the zone. LRS provides local redundancy, while ZRS provides enhanced resiliency with high-speed synchronous replication across zones, without you having to set up multiple SAN systems and their associated replication and networking configurations (e.g., Azure takes care of that for you within the service).
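
As a rough sketch of those first two steps in PowerShell, assuming the Az and Az.ElasticSan modules are installed and you are signed in via Connect-AzAccount: the Elastic SAN cmdlet and parameter names below are my assumptions to verify against the current Az.ElasticSan documentation, and the resource names are placeholders.

```powershell
# Sketch only; verify cmdlet and parameter names against the current Az.ElasticSan module docs
# Resource group to hold the Elastic SAN (names are placeholders)
New-AzResourceGroup -Name "rg-aes-demo" -Location "eastus"

# Create the Elastic SAN with 1 TiB of base capacity (includes performance) plus 1 TiB extra capacity
New-AzElasticSan -ResourceGroupName "rg-aes-demo" -Name "aes-demo" -Location "eastus" `
    -BaseSizeTiB 1 -ExtendedCapacitySizeTiB 1 -SkuName "Premium_LRS"

# Create a volume group within the Elastic SAN
New-AzElasticSanVolumeGroup -ResourceGroupName "rg-aes-demo" -ElasticSanName "aes-demo" `
    -Name "volgroup01"
```

Swapping the SKU to a ZRS variant (where available) is how you would opt into zone redundancy instead of LRS; confirm the exact SKU string in the Azure documentation.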

The next step is to create volumes by specifying the volume name, the volume group to use, volume size in GB, maximum IOPS, and bandwidth. Once you have made your AES volume group and volumes, you can create private endpoints, change security and access controls, and access the volumes from Azure or on-prem resources using iSCSI. Note that AES currently needs to be LRS (not ZRS) for clustered shared storage, and that key management includes using your own keys with Azure Key Vault.
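
Continuing the sketch, volume creation and the Windows iSCSI attach (from an Azure VM or an on-prem host) could look like the following. New-AzElasticSanVolume and its parameters are again assumptions to check against the Az.ElasticSan documentation; the iSCSI initiator cmdlets are standard on Windows, and the target portal address and IQN are placeholders for the values shown on the volume's Connect details in the Azure Portal.

```powershell
# Create a 100 GiB volume in the volume group (assumed cmdlet/parameters; verify against Azure docs)
New-AzElasticSanVolume -ResourceGroupName "rg-aes-demo" -ElasticSanName "aes-demo" `
    -VolumeGroupName "volgroup01" -Name "vol01" -SizeGiB 100

# On the Windows host, make sure the iSCSI initiator service is running and starts automatically
Set-Service -Name msiscsi -StartupType Automatic
Start-Service -Name msiscsi

# Register the Elastic SAN target portal and connect (placeholders; copy the real portal
# address and IQN from the volume's Connect pane in the Azure Portal)
New-IscsiTargetPortal -TargetPortalAddress "<target-portal-address>"
Connect-IscsiTarget -NodeAddress "<target-iqn>" -IsPersistent $true

# The attached volume then shows up as a regular disk to initialize, partition and format
Get-Disk | Where-Object BusType -eq "iSCSI"
```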

Using Azure Elastic SAN

Using AES is straightforward, and Microsoft Azure provides good, easy-to-follow guides, including the documentation and deployment guides linked in the resources section below.

The following images show what AES looks like from the Azure Portal, as well as from an Azure Windows Server VM and an on-prem physical machine (e.g., a Windows 10 laptop).

Figure 2 AES Azure Portal Big Picture

Figure 3 AES Volume Groups Portal View

Figure 4  AES Volumes Portal View

Figure 5 AES Volume Snapshot Views

Figure 6 AES Connected Volume Portal View

Figure 7 AES Volume iSCSI view from on-prem Windows Laptop

Figure 8 AES iSCSI Volume attached to Azure VM

Azure Elastic SAN Cost and Pricing

The cost of AES is elastic, depending on whether you scale capacity with performance (e.g., base unit) or add more space capacity. If you need more performance, add base unit capacity, increasing IOPS, bandwidth, and space. In other words, base capacity includes storage space and performance, which you can grow in various increments. Remember that AES storage resources get shared across volumes within a volume group.

Azure Elastic SAN is billed hourly based on a monthly per-capacity base unit rate, with a minimum of 1TB provisioned capacity and minimum performance (e.g., 5,000 IOPS, 200MBps bandwidth). The base unit rate varies by region and type of redundancy, aka resiliency. For example, at the time of this writing, looking at US East, the Locally Redundant Storage (LRS) base unit rate is 1TB with 5,000 IOPS and 200MBps bandwidth, costing $81.92 per unit per month.

The above example breaks down to a rate of $0.08 per GB per month, or $0.000110 per GB per hour (assuming 730 hours per month). An example of simply adding storage capacity without increasing the base unit (e.g., performance) for US East is $61.44 per TB per month. That works out to $0.06 per GB per month (no additional provisioned IOPS or bandwidth) or $0.000083 per GB per hour.
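
That math is easy to sanity check; a quick PowerShell calculation like the following reproduces the approximate per-GB rates above (the $81.92 and $61.44 figures are the US East examples quoted here and vary by region and redundancy; small rounding differences are expected).

```powershell
# Reproduce the per-GB per-month and per-GB per-hour rates from the US East example
$hoursPerMonth = 730
$gbPerTB       = 1024

$baseUnitPerMonth     = 81.92   # 1TB base unit: capacity plus 5,000 IOPS and 200MBps (LRS, US East)
$capacityOnlyPerMonth = 61.44   # 1TB of additional capacity only (no added IOPS or bandwidth)

'{0:N4} USD per GB per month (base unit)'     -f ($baseUnitPerMonth / $gbPerTB)
'{0:N6} USD per GB per hour (base unit)'      -f ($baseUnitPerMonth / $gbPerTB / $hoursPerMonth)
'{0:N4} USD per GB per month (capacity only)' -f ($capacityOnlyPerMonth / $gbPerTB)
'{0:N6} USD per GB per hour (capacity only)'  -f ($capacityOnlyPerMonth / $gbPerTB / $hoursPerMonth)
```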

Note that there are extra fees for Zone Redundant Storage (ZRS). Learn more about Azure Elastic SAN pricing here, as well as via a cost calculator here.

Azure Elastic SAN Performance

Performance for Azure Elastic SAN includes IOPS, bandwidth, and latency. AES IOPS increase in increments of 5,000 per base TB. Thus, an AES with a base of 10TB would have 50,000 IOPS distributed (shared) across all of its volumes (e.g., individual volumes are not restricted). For example, if the base capacity is increased from 10TB to 20TB, then the IOPS would increase from 50,000 to 100,000.

On the other hand, if the base capacity (10TB) is not increased and only the storage capacity grows from 10TB to 20TB, the AES would have more capacity but still only the 50,000 IOPS. AES bandwidth throughput increases by 200MBps per base TB. For example, a 5TB AES would have 5 x 200MBps (1,000 MBps) of throughput bandwidth shared across the volume group's volumes.

Note that while the performance gets shared across volumes, individual volume performance is determined by its capacity, with a maximum of 80,000 IOPS and up to 1,024 MBps per volume. Thus, to reach 80,000 IOPS and 1,024 MBps, an AES volume would have to be at least 107GB in space capacity. Also, note that the aggregate performance of all volumes cannot exceed the total of the AES. If you need more performance, then create another AES.
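
As a quick worked sketch of that scaling arithmetic (using the 5,000 IOPS and 200 MBps per base TB figures from above; the function name is just illustrative):

```powershell
# Shared performance pool scales with base (provisioned-with-performance) TB,
# not with additional capacity-only TB
function Get-AesPoolPerformance {
    param([int]$BaseTB)
    [pscustomobject]@{
        BaseTB   = $BaseTB
        PoolIOPS = $BaseTB * 5000   # 5,000 IOPS per base TB
        PoolMBps = $BaseTB * 200    # 200 MBps per base TB
    }
}

# Examples from the text: 5TB, 10TB and 20TB of base capacity
5, 10, 20 | ForEach-Object { Get-AesPoolPerformance -BaseTB $_ }
```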

Will all VMs or compute resources see performance improvements with AES? Traditional Azure Disks associated with VMs have per-disk performance resource limits, including IOPs and Bandwidth. Likewise, VMs have storage limits based on their instance type and size, including the number of disks (HDD or SSD), performance (IOPS and bandwidth), and the number of CPUs and memory.

What this means is that an AES volume could have more performance than a given VM is limited to. Refer to your VM instance sizing and configuration to determine its IOPS and bandwidth limits; if needed, explore changing the size of your VM instance to leverage the performance of Azure Elastic SAN storage.
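
Since those per-VM limits gate what you will actually see, it can help to script a quick check of your VM size; a rough sketch using the Az PowerShell module is below. Get-AzVMSize is a standard cmdlet, while the uncached disk IOPS and throughput capability names surfaced by Get-AzComputeResourceSku are my assumption of where those limits appear, so verify against the Azure VM sizes documentation.

```powershell
# List VM sizes (cores, memory, max data disks) available in a region
Get-AzVMSize -Location "eastus" |
    Sort-Object NumberOfCores |
    Select-Object Name, NumberOfCores, MemoryInMB, MaxDataDiskCount |
    Format-Table -AutoSize

# Inspect a specific size's capabilities; capability names such as UncachedDiskIOPS and
# UncachedDiskBytesPerSecond are assumptions to confirm against current Azure docs
Get-AzComputeResourceSku -Location "eastus" |
    Where-Object { $_.ResourceType -eq "virtualMachines" -and $_.Name -eq "Standard_D8s_v5" } |
    Select-Object -ExpandProperty Capabilities
```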

Additional Resources and Where to Learn More

The following links are additional resources to learn about Microsoft Azure Elastic SAN and related data infrastructures and tradecraft topics.

Azure AKS Storage Concepts 
Azure Elastic SAN (AES) Documentation and Deployment Guides
Azure Elastic SAN Microsoft Blog
Azure Elastic SAN Overview
Azure Elastic SAN Performance topics
Azure Elastic SAN Pricing calculator
Azure Products by Region (see where AES is currently available)
Azure Storage Offerings 
Azure Virtual Machine (VM) sizes
Azure Virtual Machine (VM) types
Azure Elastic SAN General Pricing
Azure Storage redundancy 
Azure Service Level Agreements (SLA) 
StorageIOBlog.com Data Box Family 
StorageIOBlog.com Data Box Review
StorageIOBlog.com Data Box Test Drive 
StorageIOblog.com Microsoft Hyper-V Alive Enhanced with Win Server 2025
StorageIOblog.com If NVMe is the answer, what are the questions?
StorageIOblog.com NVMe Primer (or refresh)
RTO Context Matters (Blog Post)

Additional learning experiences along with common questions (and answers), are found in my Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials (CRC Press) by Greg Schulz

What this all means

Azure Elastic SAN (AES) is a new and now generally available shared block storage offering that is accessible using iSCSI from within the Azure cloud and from on-prem environments. Even with iSCSI, AES is relatively easy to set up and use for shared storage, especially if you are used to, or currently working with, hardware or software-defined SAN storage solutions.

With NVMe over TCP fabrics gaining industry and customer traction, I’m hoping Microsoft adds that in the future. Currently, AES supports LRS and ZRS for redundancy, and an excellent future enhancement would be to add Geo Redundant Storage (GRS) capabilities for those who need it.

I like the option of elastic shared storage regarding performance, availability, capacity, and economic costs (PACE). If you understand the value proposition of evolving from dedicated DAS to shared SAN (independent of the underlying fabric network), or are currently using some form of on-prem shared block storage, you will find AES familiar and easy to use. Granted, AES is not a solution for everything, as there are roles for other block storage, including DAS such as Azure Disks for VMs within Azure along with on-prem DAS, as well as file, object, and blob storage, tables, and others.

Wrap up

The notion that all cloud storage must be objects or blobs is tied to those who only need, provide, or prefer those solutions. The reality is that everything is not the same. Thus, there is a need for various storage mediums, devices, tiers, access methods, and types of services, and Microsoft and Azure have done an excellent job of providing them. I like what Microsoft Azure is doing with Azure Elastic SAN.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Nine time Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of UnlimitedIO LLC.

Cloud File Data Storage Consolidation and Economic Comparison Model #blogtobertech

Cloud File Data Storage Consolidation and Economic Comparison Model

The following is a new Industry Trends Perspective White Paper Report titled Cloud File Data Storage Consolidation and Economic Comparison Model.

This new report looks at a distributed file server and consolidated cloud storage economic comparison, with a fundamental economic comparison model for remote (on-prem) distributed file servers and cloud storage consolidation decision-making. IT data infrastructure resource (servers, storage, I/O network, hardware, software, services) decision-making involves evaluating and comparing technical attributes (speeds, feeds, features) of a solution or service. Another aspect of data infrastructure resource decision-making involves assessing how a solution or service will support and enable a given application workload from a Performance, Availability, Capacity, and Economic (PACE) perspective.

Keep in mind that all application workloads have some amount of PACE resource requirements that may be high, low or various permutations. Performance, Availability (including data protection along with security) as well as Capacity are addressed via technical speeds, feeds, functionality along with workload suitability analysis. The E in PACE resource decision-making is about the Economic analysis of various costs associated with different solution approaches.

Read more in this Server StorageIO Industry Trends and Perspective (ITP) Report.

Where to learn more

Learn more about Clouds and Data Infrastructure related trends, tools, technologies and topics via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

What this all means

When comparing and making data infrastructure resource decisions, consider the application workload PACE characteristics. Also keep in mind that PACE means Performance (productivity), Availability (data protection), Capacity and Economics. This includes making decisions from a technical feature and functionality (speeds and feeds) perspective as well as how the solution supports your application workload. Leverage resources, including tools such as the Cloud File Data Storage Consolidation and Economic Comparison Model, to perform your analysis.

Ok, nuff said, for now.

Cheers Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2018. Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Have you heard about the new CLOUD Act data regulation?

The new CLOUD Act data regulation became law as part of the recent $1.3 trillion (USD) omnibus U.S. government budget spending bill passed by Congress on March 23, 2018, and signed by President of the U.S. (POTUS) Donald Trump that same month.

CLOUD Act is the acronym for Clarifying Lawful Overseas Use of Data, not to be confused with initiatives such as the U.S. federal government's Cloud First, among others, which are focused on using cloud as well as securing and complying (e.g. FedRAMP). In other words, the new CLOUD Act data regulation pertains to how data stored by cloud or other service providers can be accessed by law enforcement officials (LEO).

Supreme Court of the U.S. (SCOTUS) Image via https://www.supremecourt.gov/

CLOUD Act background and Stored Communications Act

After the signing into law of the CLOUD Act, the US Department of Justice (DOJ) asked the Supreme Court of the U.S. (SCOTUS) to dismiss the pending case against Microsoft (e.g., Azure Cloud). The case or question in front of SCOTUS pertained to whether LEO can search as well as seize information or data that is stored overseas or in foreign countries.

As a refresher, or if you had not heard, SCOTUS was asked to resolve whether a service provider responding to a warrant based on probable cause under the 1986-era Stored Communications Act is required to provide data in its custody, control or possession, regardless of whether it is stored inside or outside the US.

Microsoft Azure Regions via Microsoft.com

This particular case in front of SCOTUS centered on whether Microsoft (a U.S. Technology firm) had to comply with a court order to produce emails (as part of an LEO drug investigation) even if those were stored outside of the US. In this particular situation, the emails were alleged to have been stored in a Microsoft Azure Cloud Dublin Ireland data center.

For its part, Microsoft senior attorney Hasan Ali said via FCW, “This bill is a significant step forward in the larger global debate on what our privacy laws should look like, even if it does not go to the highest threshold.” Here are some additional perspectives via Microsoft’s Brad Smith on his blog, along with a video.

What is CLOUD Act

Clarifying Lawful Overseas Use of Data is the new CLOUD Act data regulation approved by Congress (House and Senate); details can be read here and here respectively, with additional perspectives here.

The new CLOUD Act law allows POTUS to enter into executive agreements with foreign governments about data on criminal suspects. Granted, what is or is not a crime in a given country will likely open a Pandora’s box of issues. For example, in the case of Microsoft, if an agreement between the U.S. and Ireland were in place, and Ireland agreed to release the data, it could then be accessed.

Now, for some who might be hyperventilating after reading the last sentence, keep in mind that if you are overseas, it is up to your government to protect your privacy. The foreign government must have an agreement in place with the U.S., and a crime must have been committed, a crime that both parties concur with.

Also, keep in mind that there are appeal processes for providers, including when the customer is not a U.S. person and does not reside in the U.S., and the disclosure would put the provider at risk of violating foreign law. Likewise, various provisions must be met before a cloud or service provider has to hand over your data, regardless of what country you reside in or where the data resides.

Where to learn more

Learn more about CLOUD Act, cloud, data protection, world backup day, recovery, restoration, GDPR along with related data infrastructure topics for cloud, legacy and other software defined environments via the following links:

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

What this all means and wrap-up

Is the new CLOUD Act data regulation unique to Microsoft Azure Cloud?

No, it also applies to Amazon Web Services (AWS), Google, IBM Softlayer Cloud, Facebook, LinkedIn, Twitter and the long list of other service providers.

What about GDPR?

Keep in mind that the new General Data Protection Regulation (GDPR) goes into effect May 25, 2018, and while based out of the European Union (EU), it has global applicability across organizations of all sizes, scopes, and types. Learn more about GDPR, data protection and its global impact here.

Thus, if you have not heard about the new CLOUD Act data regulation, now is the time to become aware of it.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

AWS S3 Storage Gateway Revisited (Part I)

This Amazon Web Services (AWS) Storage Gateway Revisited post is a follow-up to the AWS Storage Gateway test drive and review I did a few years ago (thus why it’s called revisited). As part of a two-part series, this first post looks at what AWS Storage Gateway is and how it has improved since my last review of AWS Storage Gateway, along with deployment options. The second post in the series looks at a sample test drive deployment and use.

If you need an AWS primer and overview of various services such as Elastic Cloud Compute (EC2), Elastic Block Storage (EBS), Elastic File Service (EFS), Simple Storage Service (S3), Availability Zones (AZ), Regions and other items check this multi-part series (Cloud conversations: AWS EBS, Glacier and S3 overview (Part I) ).

As a quick refresher, S3 is the AWS bulk, high-capacity unstructured and object storage service, along with its companion deep cold (e.g. inactive) Glacier. There are various S3 storage service classes, including Standard, Reduced Redundancy Storage (RRS) and Infrequent Access (IA), that have different availability, durability, performance, service level and cost attributes.

Note that S3 IA is not Glacier, as your data always remains online accessible, while Glacier data can be offline. AWS S3 can be accessed via its API, as well as via HTTP REST calls and AWS tools, along with those from third parties. Third-party tools include NAS file access such as S3FS for Linux, which I use on my Ubuntu systems to mount S3 buckets and use them like other mount points. Other tools include Cloudberry, S3 Motion and S3 Browser, as well as plug-ins available in most data protection (backup, snapshot, archive) software tools and storage systems today.
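
As a simple illustration of programmatic S3 access (sticking with PowerShell for consistency with my other examples), here is a sketch using the AWS Tools for PowerShell; the bucket, key and file names are hypothetical placeholders, and credentials would first be configured with Set-AWSCredential or an IAM role.

```powershell
# Requires the AWS Tools for PowerShell (e.g. the AWS.Tools.S3 module) and configured credentials
# List the buckets in the account
Get-S3Bucket

# Upload a local file to a bucket (bucket, key and file names are placeholders)
Write-S3Object -BucketName "my-example-bucket" -File "C:\data\report.pdf" -Key "reports/report.pdf"

# List objects under a prefix and download one of them
Get-S3Object -BucketName "my-example-bucket" -KeyPrefix "reports/"
Read-S3Object -BucketName "my-example-bucket" -Key "reports/report.pdf" -File "C:\restore\report.pdf"
```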

AWS S3 Storage Gateway and What’s New

The Storage Gateway is the AWS tool that you can use for accessing S3 buckets and objects via your block volume, NAS file or tape-based applications. The Storage Gateway is intended to give S3 bucket and object access to on-premises applications and data infrastructure functions, including data protection (backup/restore, business continuance (BC), business resiliency (BR), disaster recovery (DR) and archiving), along with storage tiering to cloud.

Some of the things that have evolved with the S3 Storage Gateway include:

  • Easier, streamlined download, installation, deployment
  • Enhanced Virtual Tape Library (VTL) and Virtual Tape support
  • File serving and sharing (not to be confused with Elastic File Services (EFS))
  • Ability to define your own bucket and associated parameters
  • Bucket options including Infrequent Access (IA) or standard
  • Options for AWS EC2 hosted, or on-premises VMware as well as Hyper-V gateways (file only supports VMware and EC2)

AWS Storage Gateway Three Functions

AWS Storage Gateway can be deployed for three basic functions:

    AWS Storage Gateway File Architecture via AWS.com

  • File Gateway (NFS NAS) – Files, folders, objects and other items are stored in AWS S3 with a local cache for low latency access to most recently used data. With this option, you can create folders and subdirectories similar to a regular file system or NAS device, as well as configure various security, permissions and access control policies. Data is stored in S3 buckets for which you specify policies such as Standard or Infrequent Access (IA), among other options. The file gateway is AWS hosted via EC2, or runs as a VMware virtual machine (VM) for on-premises deployment.

    Also, note that AWS cautions on multiple concurrent writers to S3 buckets with Storage Gateway so check the AWS FAQs which may have changed by the time you read this. Current file share limits (subject to change) include 1 file gateway share per S3 bucket (e.g. a one to one mapping between file share and a bucket). There can be 10 file shares per gateway (e.g. multiple shares each with its own bucket per gateway) and a maximum file size of 5TB (same as maximum S3 object size). Note that you might hear about object storage systems supporting unlimited size objects which some may do, however generally there are some constraints either on their API front-end, or what is currently tested. View current AWS Storage Gateway resource and specification limits here.

    AWS Storage Gateway Non-Cached Volume Architecture via AWS.com

    AWS Storage Gateway Cached Volume Architecture via AWS.com

  • Volume Gateway (Block iSCSI) – Leverages S3 with point-in-time backups as AWS EBS snapshots. Two options exist: Cached Volumes with low-latency access to most recently used data (e.g. data is stored in AWS, with a local cache copy on disk or SSD), and Stored Volumes (e.g. non-cached) where the primary copy is local and periodic snapshot backups are sent to AWS. AWS provides EC2-hosted gateways, as well as VMs for VMware and Hyper-V on Windows Server.

    Current Storage Gateway volume limits (subject to change) include a maximum size of 32TB for a cached volume and a maximum size of 16TB for a stored volume. Note that snapshots of cached volumes larger than 16TB can only be restored to a storage gateway volume; they cannot be restored as an EBS volume (via EC2). There is a maximum of 32 volumes per gateway, with a total size of all volumes for a gateway (cached) of 1,024TB (e.g. 1PB). The total size of all volumes for a gateway (stored volume) is 512TB. View current AWS Storage Gateway resource and specification limits here.

    AWS Storage Gateway VTL Architecture via AWS.com

  • Virtual Tape Library Gateway (VTL) – Supports saving your data for backup/BC/DR/archiving into S3 and Glacier storage tiers. Being a Virtual Tape Library (e.g. VTL) you can specify emulation of tapes for compatibility with your existing backup, archiving and data protection software, management tools and processes.

    Storage Gateway limits for tape include minimum size of a virtual tape 100GB, maximum size of a virtual tape 2.5TB, maximum number of virtual tapes for a VTL is 1,500 and total size of all tapes in a VTL is 1PB. Note that the maximum number of virtual tapes in an archive is unlimited and total size of all tapes in an archive is also unlimited. View current AWS Storage Gateway resource and specification limits here.

Where To Learn More

What This All Means

Which gateway function and mode (cached or non-cached for volumes) to use depends on what it is that you are trying to do. Likewise, choosing between EC2 (cloud hosted) or on-premises Hyper-V and VMware VMs depends on what your data infrastructure support requirements are. Overall I like the progress that AWS has put into evolving the Storage Gateway, granted it might not be applicable for all usage cases. Continue reading more and view images from the AWS Storage Gateway Revisited test drive in part two located here.

Ok, nuff said (for now…).

Cheers
Gs

Greg Schulz – Multi-year Microsoft MVP Cloud and Data Center Management, VMware vExpert (and vSAN). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio.

Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO. All Rights Reserved.

EMCworld 2016 Getting Started on Dell EMC announcements

It’s the first morning of EMCworld 2016 here in Las Vegas, with some items already announced today and more in the wings. One of the underlying themes and discussions, besides what’s new or who’s doing what, is that this is for all practical purposes the last EMCworld given the upcoming Dell acquisition. What’s not clear is whether there will be a renamed and repackaged Dell/EMCworld.

With current EMC President Jeremy Burton, who used to be the Chief Marketing Officer (CMO) at EMC, slated to become the CMO across all of Dell, my bet is that there will be some type of new event picking up from and moving beyond where EMCworld and Dellworld have been. More on the future of EMC and Dell in future posts; however for now, let’s see what has unfolded so far today.

Today’s EMCworld theme is modernize the data center, which means a mix of hardware, software and services announcements spanning physical, virtual and cloud, among others (e.g. how do you want your servers, storage and data infrastructure wrapped). While the themes are still EMC, as the Dell acquisition has yet to be completed, there is a Dell presence, including Michael Dell here in person (more on Dell later).

The first wave of announcements include:

  • Unity All Flash Array (AFA) for small, entry-level environments
  • EMC Enterprise Copy Data Management software tools portfolio
  • ViPR Version 3.0 Controller
  • Virtustream global hyper-scale Storage Cloud for data protection and cloud native object
  • MyService360
  • Data Domain virtual edition and long-term archive

What About The Dell Deal

Michael Dell, who is here at EMCworld, announced on the main stage that Dell Technologies will be the name of the family of businesses.

This family of businesses includes the joint Dell, EMC, VMware, Pivotal, Secureworks, RSA and Virtustream. The Dell client focused business will be called Dell, leveraging that brand, while the new joint Dell and EMC enterprise business will be called Dell EMC, leveraging both of those brands. As a reminder, the Dell servers business unit will be moving into the existing EMC business as part of the enterprise business unit.

Lets move onto the technology announcements from today.

Unity AFA (and Hybrid)

The new Unity all flash array (AFA) is a dual controller storage system optimized for Nonvolatile Memory (NVM) flash SSD, with unified (block and file) access. EMC is positioning Unity as an entry-level AFA starting around $18K USD for a 2U solution (how much capacity that includes is not yet known, more on that in a future post). As well as having a low entry cost, EMC is positioning Unity for a broad, mass market, volume distribution that can be leveraged by their partners, including Dell. More on Unity in future posts. While Unity is new and modern, it comes from the same group who created the VNXe, leveraging that knowledge and skills base.

Note that Unity is positioned for small, mid-sized, remote office branch office (ROBO), departmental and specialized AFA situations, where the EMC NVMe based DSSD D5 is positioned for higher-end shared direct attached server flash, while XtremIO and VMAX are also positioned for higher-end, higher performance and workload consolidation scenarios.

  • Simple, flexible, easy to use in a 2U package that scales up to 80TB of NVM flash SSD storage
  • Scalable up to 3PB of storage for larger expanded configurations
  • Affordable ($18K USD starting price, $10K entry-level hybrid)
  • Modern AFA storage for entry, small, mid-sized, workgroup, departments and specialized environments
  • Unified file, block, and VMware VVOL support for storage access
  • Also available in hybrid, as well as software defined virtual and converged configurations
  • Higher performance (EMC indicates 300,000 IOPs) for given entry-level systems
  • Available in all-flash array, hybrid array, software-defined and converged configurations
  • Native controller based encryption with synchronous and asynchronous replication
  • VMware VASA 2.0, VAAI, VVols and VMware integration
  • Tight integration with EMC Data Protection portfolio tools

Read more about Unity here.

Copy Data Management

Enterprise Copy Data Management (eCDM) spans data copies from data protection including backup, BC, DR as well as for operational, analytics, test, dev, devops among other uses. Another term is Enterprise Copy Data Analytics (eCDA) which includes monitoring and management along with insight, awareness and of course analytics. These new offerings and initiatives tie together various capabilities across storage platforms and software defined storage management. Watch for more activity in and around eCDM and general copy data management. Read more here.

ViPR Controller 3.0

ViPR controller enhancements build on previous announcements and include automation as well as failover with native replication to a standby ViPR controller. Note that there can actually be two standby controllers that are synchronized asynchronously with software built into ViPR. This means that there is no need for RecoverPoint or other products to do the replication of the ViPR controllers. To be clear, this is for high availability of the ViPR controllers themselves and not a replacement for HA or replication of upper layer applications, storage servers or underlying storage services. Also note that ViPR is available via open source (CoprHD via Github here). Read more here.

MyService360

MyService360 is a cloud based dashboard and data infrastructure monitoring management platform. Read more here.

Virtustream Storage Cloud

Virtustream cloud services and software tools complement EMC (and other) storage systems as a back-end for cool, cold or other bulk data storage needs. The focus is to sell primary storage to customers, then leverage back-end public cloud services for backup, archive, copy data management and other applications. This also means that the Virtustream storage cloud is not just for data protection such as archiving, backup, BC and DR; it’s also for other big fast data including cloud and object native applications. Does this mean Virtustream is an alternative to other cloud and object storage services such as AWS S3, Google GCS among others? Yup. Read more here.

Where To Learn More

  • Session Streaming For video of keynotes, general sessions, backstage sessions, and EMC TV coverage, click here
  • Social: Follow @EMCWorld,  @EMCCorp, @EMC_News and @EMCStorage, and join conversations with  #EMCWORLD, and like EMC on Facebook
  • Photos: Access event photos via  Flickr and EMC Pulse Blog or visit the special EMC World News microsite here
  • Reflections: Read Core Technologies President, Guy Churchward’s Reflections post on today’s announcements here
  • Visit the EMC Store, the EMC Community Network Site and The Core Blog

What This All Means

With the announcement of Unity and the impending Dell deal, some of you might (or should) have a déjà vu moment of over a decade or so ago when Dell and EMC entered into an OEM agreement around the then CLARiiON midrange storage arrays (e.g. predecessors of VNX and VNXe). Unity is being designed as a high performance, easy to use, flexible, scalable, cost-effective storage solution for a broad, high-volume sales and distribution channel market.

What does Unity mean for EMC VNX and VNXe as well as XtremIO? Unity will be positioned near where the VNXe has been positioned, along with some of the competing solutions from Dell among others. There might be some overlap with other EMC solutions, however if executed properly, Unity should open up some new markets, perhaps at the expense of some of the newer popular startups that only offer AFAs vs. hybrids. Likewise I would expect Unity to appear in future converged solutions such as those via the EMC Converged business unit (e.g. VCE).

Even with the upcoming Dell acquisition and integration, EMC continues to evolve and innovate in many areas.

Watch for more announcements later today and throughout the week.

Ok, nuff said

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2023 Server StorageIO(R) and UnlimitedIO All Rights Reserved

Part 3 – Which HDD for content applications – Test Configuration

Which HDD for content applications – HDD Test Configuration

Updated 1/23/2018

Insight for effective server storage I/O decision making
Server StorageIO Lab Review

Which enterprise HDD to use for content servers

This is the third in a multi-part series (read part two here) based on a white paper hands-on lab report I did compliments of Servers Direct and Seagate that you can read in PDF form here. The focus is looking at the Servers Direct (www.serversdirect.com) converged Content Solution platforms with Seagate Enterprise Hard Disk Drives (HDDs). In this post the focus expands to hardware and software defining as well as configuring the test environments along with application workloads.

Defining Hardware Software Environment

Servers Direct content platforms are software defined and hardware defined to your specific solution needs. For my test-drive, I used a pair of 2U Content Solution platforms, one as the client System Test Initiator (STI) (3), the other as the server System Under Test (SUT) shown in figure-1 below. With the STI configured and the SUT set up, Seagate Enterprise class 2.5” 12Gbps SAS HDDs were added to the configuration.

(Note 3) System Test Initiator (STI) was hardware defined with dual Intel Xeon E5-2695 v3 (2.30 GHz) processors and 32GB RAM running Windows Server 2012 R2, with two network connections to the SUT. Network connections from the STI to SUT included an Intel GbE X540-AT2 as well as an Intel XL710 Q2 40 GbE Converged Network Adapter (CNA). In addition to software defining the STI with Windows Server 2012 R2, Dell Benchmark Factory (V7.1 64-bit build 496), part of the Database Administrators (DBA) Toad Tools (including free versions), was also used. For those familiar with HammerDB, Sysbench among others, Benchmark Factory is an alternative that supports various workloads and database connections with robust reporting, scripting and automation. Other installed tools included Spotlight on Windows, Iperf 2.0.5 for generating network traffic and reporting results, as well as Vdbench with various scripts.
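
For those unfamiliar with Iperf, a representative invocation for exercising the 40 GbE link between the STI and SUT looks like the following (run from a command or PowerShell prompt on each system; the address, parallel stream count and window size shown are illustrative, not the exact parameters used in these tests).

```powershell
# On the SUT (server side): listen for incoming traffic with a larger TCP window
.\iperf.exe -s -w 256k

# On the STI (client side): drive traffic at the SUT's 40 GbE address (placeholder) for 60 seconds
# using 8 parallel streams, reporting results every 5 seconds
.\iperf.exe -c 192.168.100.10 -P 8 -t 60 -w 256k -i 5
```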

The SUT setup (4) included four Enterprise 10K and two 15K Performance drives with the enhanced performance caching feature enabled, along with two Enterprise Capacity 2TB HDDs, all attached to an internal 12Gbps SAS RAID controller.

(Note 4) System Under Test (SUT) with dual Intel Xeon E5-2697 v3 (2.60 GHz) providing 54 logical processors, 64GB of RAM (expandable to 768GB with 32GB DIMMs, or 3TB with 128GB DIMMs) and two network connections. Network connections from the STI to SUT consisted of an Intel 1 GbE X540-AT2 as well as an Intel XL710 Q2 40 GbE CNA. The GbE LAN connection was used for management purposes while the 40 GbE was used for data traffic. The system disk was a 6Gbps SATA flash SSD. Seagate Enterprise class HDDs were installed into the 16 available 2.5” small form factor (SFF) drive slots. The eight (left most) drive slots were connected to an Intel RMS3CC080 12 Gbps SAS RAID internal controller. The “Blue” drives in the middle were connected to both an NVMe PCIe card and the motherboard 6 Gbps SATA controller using an SFF-8637 connector. The four right most drives were also connected to the motherboard 6 Gbps SATA controller.

Figure-1 STI and SUT hardware as well as software defined test configuration

In addition to the SAS drives noted above, five 6 Gbps SATA Enterprise Capacity 2TB HDDs were set up using Microsoft Windows as a spanned volume. The system disk was a 6Gbps flash SSD, and an NVMe flash SSD drive was used for database temp space.
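
As an aside, for anyone wanting to script similar multi-drive pooling today, a hedged sketch using Windows Storage Spaces (a simple, non-resilient pool, which is an alternative to, not the same as, the dynamic-disk spanned volume used in these tests) could look like the following; friendly names are placeholders.

```powershell
# Illustrative alternative: pool the SATA HDDs with Storage Spaces (simple, non-resilient),
# rather than the dynamic-disk spanned volume used in the actual test configuration
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "SataPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

New-VirtualDisk -StoragePoolFriendlyName "SataPool" -FriendlyName "SataSpan" `
    -ResiliencySettingName Simple -UseMaximumSize

Get-VirtualDisk -FriendlyName "SataSpan" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "SataSpan"
```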

What About NVM Flash SSD?

NAND flash and other Non-Volatile Memory (NVM) and SSD complement content solutions. A little bit of flash SSD in the right place can have a big impact. The focus for these tests is HDDs; however, some flash SSDs were used as system boot and database temp (e.g. tempdb) space. Refer to StorageIO Lab reviews and visit www.thessdplace.com

Seagate Enterprise HDD’s Used During Testing

Various Seagate Enterprise HDD specifications used in the testing are shown below in table-1.

 

Qty | Seagate HDD's | Capacity | RPM | Interface | Size | Model | Servers Direct Price Each | Configuration
4 | Enterprise 10K Performance | 1.8TB | 10K with cache | 12 Gbps SAS | 2.5” | ST1800MM0128 with enhanced cache | $875.00 USD | HW(5) RAID 10 and RAID 1
2 | Enterprise Capacity 7.2K | 2TB | 7.2K | 12 Gbps SAS | 2.5” | ST2000NX0273 | $399.00 USD | HW RAID 1
2 | Enterprise 15K Performance | 600GB | 15K with cache | 12 Gbps SAS | 2.5” | ST600MX0082 with enhanced cache | $595.00 USD | HW RAID 1
5 | Enterprise Capacity 7.2K | 2TB | 7.2K | 6 Gbps SATA | 2.5” | ST2000NX0273 | $399.00 USD | SW(6) RAID Span Volume

Table-1 Seagate Enterprise HDD specification and Servers Direct pricing

URLs for additional Servers Direct content platform information:
https://serversdirect.com/solutions/content-solutions
https://serversdirect.com/solutions/content-solutions/video-streaming
https://www.serversdirect.com/File%20Library/Data%20Sheets/Intel-SDR-2P16D-001-ds2.pdf

URLs for additional Seagate Enterprise HDD information:
https://serversdirect.com/Components/Drives/id-HD1558/Seagate_ST2000NX0273_2TB_Hard_Drive

https://serversdirect.com/Components/Drives/id-HD1559/Seagate_ST600MX0082_SSHD

Seagate Performance Enhanced Cache Feature

The Enterprise 10K and 15K Performance HDD’s tested had the enhanced cache feature enabled. This feature provides a “turbo” boost like acceleration for both reads and write I/O operations. HDD’s with enhanced cache feature leverage the fact that some NVM such as flash in the right place can have a big impact on performance (7).

In addition to their performance benefit, these devices combine a best-of or hybrid storage model (combining flash with HDDs along with software defined cache algorithms) and are “plug-and-play”. By being “plug-and-play” no extra special adapters, controllers, device drivers, tiering or cache management software tools are required.

(Note 5) Hardware (HW) RAID using Intel server on-board LSI based 12 Gbps SAS RAID card, RAID 1 with two (2) drives, RAID 10 with four (4) drives. RAID configured in write-through mode with default stripe / chunk size.

(Note 6) Software (SW) RAID using Microsoft Windows Server 2012 R2 (span). Hardware RAID used write-through cache (e.g. no buffering) with read-ahead enabled and a default 256KB stripe/chunk size.

(Note 7) Refer to Enterprise SSHD and Flash SSD Part of an Enterprise Tiered Storage Strategy

The Seagate Enterprise Performance 10K and 15K with enhanced cache feature are a good example of how there is more to performance in today’s HDD’s than simply comparing RPM’s, drive form factor or interface.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

What This All Means

Careful and practical planning are key steps for testing various resources, as well as aligning the applicable tools and configurations to meet your needs.

Continue reading part four of this multi-part series here where the focus expands to database application workloads.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

AWS adds Zocalo Enterprise File Sync Share and Collaboration

In case you missed it today, Amazon Web Services (AWS) announced Zocalo, an enterprise class storage and file sharing service. As you might have guessed, being cloud-based file sync and share, Zocalo can be seen as a competitor or alternative to other services including Box, Dropbox and Google among many others in the enterprise file sync and share (EFSS) space.

AWS Enterprise File Sync Share (EFSS) Zocalo overview and summary:

  • Document collaboration (comments and sharing), including availability with AWS WorkSpaces
  • Central common hub for sharing documents along with those owned by a user
  • Select AWS regions where data is stored, along with setting up user policies and audit trails
  • Sharing of various types of documents, worksheets, web pages, presentations, text and PDF among other files
  • Support for Windows and other PCs, Macs, tablets and other mobile devices
  • Cost effective (priced at $5 per user per month for 200GB of storage)
  • Free 30 day trial for up to 50 users each with 200GB (e.g. 10TB)
  • Secure leveraging existing AWS regions and tools (encryption in transit and while at rest)
  • Active directory credentials integration

Learn more in the Zocalo FAQ found here

Register for the limited free Zocalo trial here

Additional Zocalo product details can be found here

AWS also announced as part of its Mobile Services Cognito a mobile service for simple user identity and data synchronization, along with SNS, Mobile Analytics and other enhancements. Learn more about AWS Cognito here and Mobile Services here.

Check out other AWS updates, news and enhancements here

Ok, nuff said

Cheers
gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud conversations: Has Nirvanix shutdown caused cloud confidence concerns?

Recently, the seven-plus-year-old cloud storage startup Nirvanix announced that they were finally shutting down and that customers should move their data.

Nirvanix has also posted an announcement that they have established an agreement with IBM Softlayer (read about that acquisition here) to help customers migrate to those services, as well as to those of Amazon Web Services (AWS) (read more about AWS in this primer here), Google and Microsoft Azure.

Cloud customer concerns?

With Nirvanix shutting down there have been plenty of articles, blog posts, tweets and other conversations asking if clouds are safe.

Btw, here is a link to my ongoing poll where you can cast your vote on what you think about clouds.

IMHO clouds can be safe if used in safe ways which includes knowing and addressing your concerns, not to mention following best practices, some of which pre-date the cloud era, sometimes by a few decades.

Nirvanix Storm Clouds

More on this in a moment; however, let’s touch base on Nirvanix and why I said they were finally shutting down.

The reason I say finally shutting down is that there were plenty of early warning signs and storm clouds circling Nirvanix for a few years now.

What I mean by this is that in their seven plus years of being in business, there have been more than a few CEO changes, something that is not unheard of.

Likewise there have been some changes to their business model, ranging from selling their software as a service, to a solution, to hosting, among others. Again, smart startups and established organizations will adapt over time.

Nirvanix also invested heavily in marketing, public relations (PR) and analyst relations (AR) to generate buzz along with gaining endorsements as do most startups to get recognition, followings and investors if not real customers on board.

In the case of Nirvanix, the indicator signs mentioned above also included what seemed like a semi-annual if not annual changing of CEOs, marketing and others, tied to business model adjustments.

It was only a year or so ago that if you gauged a company’s health by the PR and AR news, activity and endorsements, you would have believed Nirvanix was about to crush Amazon, Rackspace or many others. Perhaps some actually did believe that, followed shortly thereafter by the abrupt departure of their then CEO and marketing team. Thus just as fast as Nirvanix seemed to be the phoenix rising in stardom, their aura started to dim again, which could or should have been a warning sign.

This is not to single out Nirvanix, however given their penchant for marketing and now what appears to some as a sudden collapse or shutdown, they have also become a lightning rod of sorts for clouds in general. Given all the hype and FUD around clouds, when something does happen the detractors will be quick to jump or pile on to say things like "See, I told you, clouds are bad".

Meanwhile the cloud cheerleaders may go into denial saying there are no problems or issues with clouds, or they may go back into a committee meeting to create a new stack, standard, API set marketing consortium alliance. ;) On the other hand, there are valid concerns with any technology, including clouds: in general there are good implementations that can be used the wrong way, or questionable implementations and selections used in what seem like good ways that can go bad.

This is not to say that clouds in general, whether as a service, solution or product on a public, private or hybrid basis, are any riskier than traditional hardware, software and services. Instead this should be a wake-up call for people and organizations to review clouds, citing their concerns along with revisiting what to do or can be done about them.

Clouds: Being prepared

Ben Woo of Neuralytix posted this question and comment to one of the LinkedIn groups, Collateral Considerations If You Were/Are A Nirvanix Customer, to which I posted some tips and recommendations including:

1) If you have another copy of your data somewhere else (which you should btw), how will your data at Nirvanix be securely erased, and the storage it resides on be safely (and securely) decommissioned?

2) If you do have another copy of your data elsewhere, how current is it, and can you bring it up to date from various sources (including updates from Nirvanix while they stay online)?

3) Where will you move your data to short or near term, as well as long-term?

4) What changes will you make to your procurement process for cloud services in the future to protect against situations like this happening to you?

5) As part of your plan for putting data into the cloud, refine your strategy for getting it out, moving it to another service or place as well as having an alternate copy somewhere.

FWIW, for any data I put into a cloud service there is also another copy somewhere else, which even though there is a cost, there is a benefit. The benefit is the ability to decide which copy to use if needed, as well as having a backup/spare copy.

Cloud Concerns and Confidence

As part of cloud procurement, whether of services or products, the same proper due diligence should occur as if you were buying traditional hardware, software, networking or services. That includes checking out not only the technology, but also the company’s financials, business records, and customer references (both good and not so good or bad ones) to gain confidence. Part of gaining that confidence also involves addressing ahead of time how you will get your data out of or back from that service if needed.

Keep in mind that if your data is very important, are you going to keep it in just one place? For example I have data backed-up as well as archived to cloud providers, however I also have local copies either on-site or off.

Likewise there is data I keep locally as well as at alternate locations including cloud. Sure that is costly; however, by not treating all of my data and applications the same, I’m able to balance those costs out, plus use the cost advantages of different services as well as on-site to be effective. I may be spending no less on data protection, in fact I’m actually spending a bit more, however I also have more copies and versions of important data and in multiple locations. Data that is not changing often does not get protected as often, however there are multiple copies to meet different needs or threat risks.

Storage I/O trends

Don’t be scared of clouds, be prepared

While some of the other smaller cloud storage vendors will see some new customers, I suspect that near to mid-term it will be the larger, more established and well-funded providers that gain the most from this current situation. Granted, some customers are looking for alternatives to the mega cloud providers such as Amazon, Google, HP, IBM, Microsoft and Rackspace, among others. However, there is a long list of others, some not as well known as they should be, such as CenturyLink/Savvis, Verizon/Terremark, SunGard, Dimension Data, Peak, Bluehost, Carbonite, Mozy (owned by EMC), Xerox ACS, and EVault (owned by Seagate), not to mention many more.

Something to be aware of as part of doing your due diligence is determining who or what actually powers a particular cloud service. The larger providers such as Rackspace, Amazon, Microsoft and HP, among others, have their own infrastructure, while some of the smaller service providers may in fact use one of the larger (or even smaller) providers as their real back-end. Hence understanding who is behind a particular cloud service is important in assessing the viability and stability of whoever you are subscribed to or working with.

Something that I have said for the past couple of years, and a theme of my book Cloud and Virtual Data Storage Networking (CRC Taylor & Francis), is: do not be scared of clouds; however, be ready and do your homework.

This also means having cloud concerns is a good thing; again, don't be scared, however identify what those concerns are and whether they are major or minor. From that list you can start to decide how or if they can be worked around, as well as be prepared ahead of time should you either need all of your cloud data back quickly, or should that service become unavailable.

Also, when it comes to clouds, look beyond lowest cost or free; likewise, if something sounds too good to be true, perhaps it is. Instead, look for value, or how much you get for what you spend, including confidence in the service, service level agreements (SLAs), security, and other items.

Keep in mind, only you can prevent data loss either on-site or in the cloud, granted it is a shared responsibility (With a poll).

Additional related cloud conversation items:
Cloud conversations: AWS EBS Optimized Instances
Poll: What Do You Think of IT Clouds?
Cloud conversations: Gaining cloud confidence from insights into AWS outages
Cloud conversations: confidence, certainty and confidentiality
Cloud conversation, Thanks Gartner for saying what has been said
Cloud conversations: AWS EBS, Glacier and S3 overview (Part III)
Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)
Don’t Let Clouds Scare You – Be Prepared
Everything Is Not Equal in the Datacenter, Part 3
Amazon cloud storage options enhanced with Glacier
What do VARs and Clouds as well as MSPs have in common?
How many degrees separate you and your information?

Ok, nuff said.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud conversations: AWS EBS, Glacier and S3 overview (Part II S3)

Storage I/O industry trends image

Amazon Web Services (AWS) recently added EBS Optimized support for enhanced bandwidth EC2 instances (read more here). This industry trends and perspective cloud conversation is the second (looking at S3 object storage) in a three-part series companion to the AWS EBS optimized post found here. Part I is here (closer look at EBS) and part III is here (tying it all together).

AWS image via Amazon.com

For those not familiar, Simple Storage Services (S3), Glacier and Elastic Block Storage (EBS) are part of the AWS cloud storage portfolio of services. With S3, you specify a region where a bucket is created that will contain objects that can be written, read, listed and deleted. You can create multiple buckets in a region, each with an unlimited number of objects ranging from 1 byte to 5 TB in size. Each object has a unique, user- or developer-assigned key. In addition to indicating which AWS region to use, S3 buckets and objects are provisioned using different levels of availability, durability, SLAs and costs (view S3 SLAs here).

AWS S3 example image
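To make the bucket and object model concrete, here is a minimal sketch using the AWS SDK for Python (boto3, which post-dates the original version of this post). The bucket name, object key and region are hypothetical placeholders, and credentials are assumed to come from your environment or AWS configuration.

import boto3

# S3 client; credentials are picked up from the environment or ~/.aws/credentials
s3 = boto3.client("s3", region_name="us-west-2")

# Create a bucket in the chosen region (bucket names are globally unique)
s3.create_bucket(
    Bucket="example-storageio-bucket",
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# Write (PUT) an object into the bucket under a user-assigned key
s3.put_object(
    Bucket="example-storageio-bucket",
    Key="backups/2013/notes.txt",
    Body=b"hello cloud storage",
)

# List the bucket, then read (GET) the object back
for obj in s3.list_objects_v2(Bucket="example-storageio-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])

body = s3.get_object(Bucket="example-storageio-bucket", Key="backups/2013/notes.txt")["Body"].read()

These calls map directly onto the underlying REST operations (PUT, GET, LIST, DELETE) described above.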

Cost will vary depending on the AWS region being used, along with whether Standard or Reduced Redundancy Storage (RRS) is selected. Standard S3 storage is designed for 99.999999999% durability (how many copies exist) and 99.99% availability (how often it can be accessed) on an annual basis, able to sustain two data centers becoming unavailable.

As its name implies, for a lower fee and level of durability, S3 RRS has an annual durability of 99.999% and availability of 99.99%, able to sustain the loss of a single data center. In the following figure, durability is how many copies of data exist spread across different servers and storage systems in various data centers and availability zones.

cloud storage and object storage across availability zone image

What would you put in RRS vs. Standard S3 storage?

Items that need some level of persistence but can be refreshed, recreated or restored from some other place or pool of storage, such as thumbnails, static content or read caches. Other candidates are items for which you could tolerate some downtime while waiting for data to be restored, recovered or rebuilt from elsewhere in exchange for a lower cost.

Different AWS regions can be chosen for regulatory compliance requirements, performance, SLAs, cost and redundancy, with authentication mechanisms including encryption (SSL and HTTPS) to make sure data is kept secure. Various rights and access can be assigned to objects, including making them public or private. In addition to logical data protection (security, identity and access management (IAM), encryption, access control), policies also apply to determine the level of durability and availability or accessibility of buckets and objects. Other attributes of buckets and objects include life-cycle management policies and logging of activity against those items. Objects also carry metadata containing information about the data being stored, shown in a generic example below.

Cloud storage and object storage spread across availability zones figure
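As a rough illustration of how storage class, object metadata and a life-cycle policy come together, here is a hedged boto3 sketch. The bucket name, prefix, rule ID and metadata values are hypothetical, and Reduced Redundancy Storage is shown simply because it is the class discussed above.

import boto3

s3 = boto3.client("s3")
bucket = "example-storageio-bucket"  # hypothetical bucket name

# Store an easily recreated item (e.g. a thumbnail) at reduced redundancy,
# attaching user-defined metadata that describes where it came from
s3.put_object(
    Bucket=bucket,
    Key="thumbnails/fig1-small.jpg",
    Body=b"...regenerated thumbnail bytes...",  # placeholder content
    StorageClass="REDUCED_REDUNDANCY",
    Metadata={"source": "fig1.jpg", "generated-by": "thumbnailer"},
)

# Life-cycle policy: expire anything under thumbnails/ after 30 days,
# since those objects can be rebuilt from the originals kept elsewhere
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-thumbnails",
                "Filter": {"Prefix": "thumbnails/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }
        ]
    },
)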

Access to objects is via standard REST and SOAP interfaces with an Application Programming Interface (API). For example, default access is via HTTP, along with a BitTorrent interface, with optional support via various gateways, appliances and software tools.
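For a sense of what that REST/HTTP access looks like in practice, here is a small sketch that generates a time-limited (presigned) URL with boto3 and then fetches the object with an ordinary HTTP GET via the requests library; the bucket and key are hypothetical placeholders.

import boto3
import requests

s3 = boto3.client("s3")

# Time-limited HTTPS URL for a private object; anyone holding the URL
# can fetch the object for the next hour without AWS credentials
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-storageio-bucket", "Key": "backups/2013/notes.txt"},
    ExpiresIn=3600,
)

# The REST call itself is just an HTTP GET against that URL
response = requests.get(url)
response.raise_for_status()
print(len(response.content), "bytes retrieved")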

Cloud storage and object storage IO figure
Example cloud and object storage access

The above figure, via Cloud and Virtual Data Storage Networking (CRC Press), shows a generic example applicable to AWS services, including S3 being accessed in different ways. For example, I access my S3 buckets and objects via Jungle Disk (one of the tools I use for data protection), which can also access my Rackspace Cloud Files data. The following figure shows examples of some of my S3 buckets and objects used by different applications and tools that I have in various AWS regions.

Image of AWS S3 usage
AWS S3 buckets and objects in different regions

Note that I sometimes use other AWS regions outside the US for testing purposes; for compliance purposes, my production, business and personal data is kept only in the US regions.

The following figure is a generic example of how cloud and object storage are accessed using different tools, hardware, software and APIs, along with gateways. AWS is an example of what is shown in the following figure as a Cloud Service, with S3, EBS or Glacier as cloud storage. Common example API commands are also shown, which will vary across different vendors, products or solution definitions and implementations. While the Amazon S3 API, which is REST/HTTP based, has become an industry de facto standard, there are other APIs, including CDMI (Cloud Data Management Interface), developed by SNIA, which has gained ISO accreditation.

Cloud storage and object storage I/O figure
Cloud and object storage access example via Cloud and Virtual Data Storage Networking

In addition to using Jungle Disk, which manages the AWS keys and objects that it creates, I can also access my S3 objects via the AWS management console and web tools, as well as via third-party tools including Cyberduck.

Cyberduck tool.

Additional reading and related items:

Cloud conversation, Thanks Gartner for saying what has been said

StorageIO industry trends cloud, virtualization and big data

Thank you, Gartner, for your statements concurring with and endorsing the notion that clouds can be viable, however do your homework; welcome to the club.

Why am I thanking Gartner?

Simple: I appreciate Gartner now saying what has been said for a couple of years, hoping it will help amplify the theme to the Gartner followers and faithful.

Gartner: Cloud storage viable option, but proceed carefully


Images licensed for use by StorageIO via Atomazul / Shutterstock.com

It sounds like Gartner has come to the same conclusion as what has been said for several years now in posts, articles, keynotes, presentations, webinars and other venues: when it comes to IT clouds, don't be scared. However, do your homework, be prepared, do your due diligence, and run proof of concepts.

Image of clouds, cloud and virtual data storage networking book

Here are some related materials to prepare and plan for IT clouds (public and private):

What is your take on IT clouds? Click here to cast your vote and see what others are thinking about clouds.

Now, for those who feel that free information or content is not worth its price, feel free to go to Amazon and buy some book copies here, subscribe to the Kindle version of the StorageIOblog, or contact us for an advisory consultation or other project. For everybody else, enjoy, and remember: don't be scared of clouds, do your homework, be prepared, and keep in mind that clouds are a shared responsibility.

Disclosure: I was a Gartner client when I was working in an IT organization and then later as a vendor, however not anymore ;).

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Congratulations Imation and Nexsan, are there any independent storage vendors left?

StorageIO industry trends cloud, virtualization and big data

Last week Imation, the company known for making CDs, DVDs, magnetic tape and, in the past, floppy disks (diskettes), bought Nexsan, a company known for its SATA and SAS storage products.

Imation also owns (or should be known for owning) the TDK and Memorex names (remember "Is it real or is it Memorex?" If not, Google it). They have also had removable hard disk drive (RHDD) products for several years, including the Odyssey (I am in the process of retiring mine), as well as a partnership with the former ProStor for RDX, having acquired some of the assets of ProStor, namely their RDX-based InfiniVault storage appliance. Imation has also been involved in some other things, including USB and other forms of flash-based solid state devices (SSD), and a couple of years back (2007) they launched cloud backup with DataGuard before cloud backup had become a popular buzzword topic.

Imation has also divested parts of its business over the past several years, including some medical-related (X-ray) pieces to Kodak, which occupies part of the headquarters building in Oakdale, MN, or at least did the last time I looked when driving by on the way from the airport. They also divested their SAN lab, with some of the staff going to Glasshouse and other pieces going to Lionbridge (an independent test lab company). Beyond traditional data protection, backup/restore and archiving media or mediums from consumer to large-scale enterprise, Imation has also been involved in other areas of recording. Imation has also made some other recent acquisitions around dedupe (Nine Technologies).

For its part, Nexsan has extended its portfolio beyond SATA and SAS products with AutoMAID Intelligent Power Management (IPM), which gives the benefits of variable power and performance without the penalties of first-generation MAID type products. Read more about IPM and related themes here, here and here. Nexsan also supports NAS and iSCSI solutions, in addition to the archive and content or object storage focused Assureon product it bought a few years ago.

This is a good acquisition for both companies, as it gives Imation a new set of products to sell into its existing accounts and channels. It can also leverage Nexsan's channel and solution-selling skills, giving Nexsan a bigger brand and a larger parent for credibility (not that it did not have that in the past).

Here is a link to a piece done by Dave Raffo that includes some comments and perspectives from me. To say that the synergy here is about archiving or selling SSDs or storage would be too easy and would miss a bigger potential. That potential is that Imation has been in the business of selling consumable accessories for protecting and preserving data. Notice I said consumable accessories, which in the past has meant manufacturing consumable media (e.g. floppy disks, CDs, DVDs, magnetic tapes) as well as partnering around flash and HDDs.

In many environments, from small to large to super-sized cloud and service providers, some types of storage systems, including some of those that Nexsan sells, can be considered consumable media, taking over the role that tape, CDs or DVDs have played in the past. Instead of using tape, CDs or DVDs to protect HDD- and SSD-based data, HDD-based solutions are being used for disk-to-disk (D2D) protection (part of modernizing data protection). D2D is being done with appliances, or in conjunction with cloud and object storage software stacks such as OpenStack Swift, Basho Riak CS, CloudStack, Cleversafe, Ceph and Caringo, among a list of others, in addition to appliances such as EMC Atmos, among others that can support 3rd party storage devices as consumable mediums. Keep in mind that there is no such thing as a data or information recession, and people and data are living longer and getting larger, both for big data and little data.

The big "if" in this acquisition, which IMHO is a fair price for both parties based on realistic valuations, is whether they can collectively execute on it. This means that Imation and Nexsan need to leverage each other's strengths, address any weaknesses, close gaps and expand into each other's markets and channels, and sell the entire portfolio, as opposed to becoming singularly focused on a particular area, tool or technology. If Imation can execute on this and Nexsan leverages its new parent, the result should be moving from roughly $85M USD in sales to $100M+, then $125M, then $150M and so forth over the next couple of years.

Even if Imation merely maintains revenues or sees a slight increase, that would also be a good deal for them, granted the industry pundits may not agree, so let us see where this is in a few years. However, if Imation can grow the Nexsan business, then it would become a very good deal. Thus, IMHO the price valuation for the deal has the risk built into it, something like when NetApp bought the Engenio business unit from LSI back in 2011 for about $480M USD. At that time, Engenio was doing about $705M USD in revenue and was seen by many industry pundits as being in decline, thus the lower valuation. For its part, NetApp has been executing, maintaining the revenue of that business unit with some expansion, and thus its execution so far is being rewarded for taking the risk.

Let us see if Imation can do the same thing.

Now, does that mean that Nexsan was the last of the independent storage vendors left?

Hardly; after all, there is still Xiotech, excuse me, X-IO, as they changed their name as part of a repackaging, relaunch and downsizing. There is Dot Hill, which supplies partners such as HP, or Dot Hill's former partner supplier Infortrend. If you are an Apple fan then you might know about Promise; if not, you should. Let's not forget about Data Direct Networks (DDN), which is still independent and, at around $200M (give or take several million) in revenue, is very much still around.

How about Xyratex? Sure, they make the enclosures and appliances that many others use in their solutions; however, they also have a storage solutions business focused on scale-out, clustered and grid NAS based on Lustre. There are some others that I am drawing a blank on right now (if you read this and are one of them, chime in), in addition to all of the new or current generation of startups (you can chime in as well to let people know who you are, to be bought).

There is still consolidation taking place: smaller vendors by mid-sized vendors, mid-sized vendors by big vendors, big vendors by mega vendors, and startups by established players.

Again congratulations to both Imation and Nexsan, let us see who or what is next on the 2013 mergers and acquisition list, as well as who will join the where are they now club.

Disclosure: Nexsan has been a StorageIO client in the past; however, Imation has not been a client, although they have bought me lunch before here in the Stillwater, MN area.

With Imation having its own brand name and identity, not to mention TDK and Memorex, now I have to wonder: will Nexsan be real, or Memorex, or something else? ;)

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)

StorageIO industry trends cloud, virtualization and big data

This is the second in a two-part industry trends and perspective looking at learning from cloud incidents, view part I here.

There is good information, along with insights and lessons to be learned, from cloud outages and other incidents.

Sorry, cynics: no, that does not mean an end to clouds, as they are here to stay. However, when and where to use them, along with which best practices to apply and how to be ready and configure them for use, are part of the discussion. This means that clouds may not be for everybody or all applications, at least not today. For those who are into clouds for the long haul (either all in or partially), including current skeptics, there are many lessons to be learned and leveraged.

To gain confidence in clouds, one of the questions I am routinely asked is: are clouds more or less reliable than what you are doing today? It depends on what you are doing and how you will be using the cloud services. If you are applying HA and other BC or resiliency best practices, you may be able to configure around and isolate yourself from the more common situations. On the other hand, if you are simply using the cloud services as a low-cost alternative, selecting the lowest price and service class (SLAs and SLOs), you might get what you paid for. Thus, clouds are a shared responsibility: the service provider has things they need to do, and the user or person designing how the service will be used has decision-making responsibilities as well.

Keep in mind that high availability (HA), resiliency and business continuance (BC), along with disaster recovery (DR), are the sum of several pieces. This includes people, best practices, processes including change management, good design that eliminates points of failure and isolates or contains faults, along with how the components or technologies are used (e.g. hardware, software, networks, services, tools). Good technology used in good ways can be part of a highly resilient, flexible and scalable data infrastructure. Good technology used in the wrong ways may not leverage the solutions to their full potential.

While it is easy to focus on the physical technologies (servers, storage, networks, software, facilities), many cloud services incidents or outages have involved people, process and best practices, so those need to be considered.

These incidents or outages bring awareness, a level set, that it is still early in the cloud evolution lifecycle, and that it is time to move beyond seeing clouds as just a way to cut cost and to see the importance and value of HA, resiliency, BC and DR. This means learning from mistakes, taking action to correct or fix errors, and finding and eliminating points of failure as part of a technology, and the use of it, maturing. These all tie into having services with service level agreements (SLAs) and service level objectives (SLOs) for availability, reliability, durability, accessibility, performance and security, among others, to protect against mayhem or other things that can and do happen.

Images licensed for use by StorageIO via Atomazul / Shutterstock.com

The reason I mentioned earlier that AWS had another incident is that, like their peers or competitors who have had incidents in the past, AWS appears to be going through some growing, maturing, evolution-related activities. During summer 2012 there was an AWS incident that affected Netflix (read more here: AWS and the Netflix Fix?). It should also be noted that there were earlier AWS outages where Netflix (read about the Netflix architecture here) leveraged resiliency designs to try to prevent mayhem when others were impacted.

Is AWS a lightning rod for things to happen, a point of attraction for Mayhem and others?

Granted, given their size, scope of services and how they are being used on a global basis, AWS is blazing new territory and experiences, similar to what other information services delivery platforms did in the past. What I mean is that, while taken for granted today, open systems Unix, Linux and Windows-based platforms, along with client-server, midrange or distributed systems, not to mention mainframe hardware, software, networks, processes, procedures and best practices, all went through growing pains.

There are a couple of interesting threads going on over in various LinkedIn groups based on some reporters' stories, including speculation about what happened, followed by some good discussions of what actually happened and how to prevent a recurrence in the future.

Over in the Cloud Computing, SaaS & Virtualization group forum, this thread is based on a Forbes article (Amazon AWS Takes Down Netflix on Christmas Eve) and involves conversations about SLAs, best practices, HA and related themes. Have a look at the story the thread is based on, some of the assertions being made, and the ensuing discussions.

Also over at LinkedIn, in the Cloud Hosting & Service Providers group forum, this thread is based on a story titled Why Netflix’ Christmas Eve Crash Was Its Own Fault with a good discussion on clouds, HA, BC, DR, resiliency and related themes.

Over at the Virtualization Practice, there is a piece titled Is Amazon Ruining Public Cloud Computing? with comments from me and Adrian Cockcroft (@Adrianco) a Netflix Architect (you can read his blog here). You can also view some presentations about the Netflix architecture here.

What this all means

Saying you get what you pay for would be too easy and perhaps not applicable.

There are good services that are free or low-cost, just like good free content and other things; conversely, just because something costs more does not make it better.

On the other hand, there are services that charge a premium yet may have no better, if not worse, reliability; the same goes for paid content or perceived value that is no better than what you get for free.

Additional related material

Some closing thoughts:

  • Clouds are real and can be used safely; however, they are a shared responsibility.
  • Only you can prevent cloud data loss, which means do your homework, be ready.
  • If something can go wrong, it probably will, particularly if humans are involved.
  • Prepare for the unexpected and clarify assumptions vs. realities of service capabilities.
  • Leverage fault isolation and containment to prevent rolling or spreading disasters.
  • Look at cloud services beyond lowest cost or for cost avoidance.
  • What is your organization's culture for learning from mistakes vs. fixing blame?
  • Ask yourself if you, your applications and organization are ready for clouds.
  • Ask your cloud providers if they are ready for you and your applications.
  • Identify what your cloud concerns are to decide what can be done about them.
  • Do a proof of concept to decide what types of clouds and services are best for you.

Do not be scared of clouds, however be ready, do your homework, learn from the mistakes, misfortune and errors of others. Establish and leverage known best practices while creating new ones. Look at the past for guidance to the future, however avoid clinging to, and bringing the baggage of the past to the future. Use new technologies, tools and techniques in new ways vs. using them in old ways.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud conversations: Gaining cloud confidence from insights into AWS outages

StorageIO industry trends cloud, virtualization and big data

This is the first of a two-part industry trends and perspectives series looking at how to learn from cloud outages (read part II here).

In case you missed it, there were some public cloud outages during the recent Christmas 2012 holiday season. One incident involved Microsoft Xbox (view the Microsoft Azure status dashboard here), where users were impacted, and the other was another Amazon Web Services (AWS) incident. Microsoft and AWS are not alone; most if not all cloud services have had some type of incident and have gone on to improve from those outages. Google has had issues with different applications and services, including some in December 2012, along with a Gmail incident that received coverage back in 2011.

For those interested, here is a link to the AWS status dashboard and a link to the AWS December 24 2012 incident postmortem. In the case of the recent AWS incident, which affected users such as Netflix, the incident (read the AWS postmortem and Netflix postmortem) was tied to a human error. This is not to say AWS has more outages or incidents vs. others, including Microsoft; it just seems that we hear more about AWS when things happen compared to others. That could be due to AWS's size and arguably market-leading status, the diversity of its services, and the scale at which some of their clients are using them.

Btw, if you were not aware, Microsoft Azure is about more than just supporting SQL Server, Exchange, SharePoint or Office; it is also an IaaS layer for running virtual machines, such as Hyper-V based VMs, as well as a storage target for storing data. You can use Microsoft Azure storage services as a target for backing up or archiving or as general storage, similar to using AWS S3, Rackspace Cloud Files or other services. Some backup and archiving AaaS and SaaS providers, including EVault, partner with Microsoft Azure as a storage repository target.

When reading some of the coverage of these recent cloud incidents, I am not sure if I am more amazed by some of the marketing cloud washing, or by the cloud bashing and uninformed reporting or lack of research and insight. Then again, if someone repeats a myth often enough for others to hear and repeat, as it gets amplified, the myth may assume the status of reality. After all, you may know the expression that if it is on the internet then it must be true?

Images licensed for use by StorageIO via Atomazul / Shutterstock.com

Have AWS and public cloud services become a lightning rod for when things go wrong?

Here is some coverage of various cloud incidents:

The above are a small sampling of different stories, articles, columns, blogs and perspectives about cloud services outages or other incidents. Assuming the services are available, you can Google or Bing many others, along with reading postmortems, to gain insight into what happened, the cause and effect, and how to prevent it in the future.

Do these recent incidents show a trend of increased cloud outages? Alternatively, do they say that the cloud services are being used more and on a larger basis, thus the impacts become more known?

Perhaps it is a mix of the above, and like when a magnetic storage tape gets lost or stolen, it makes for good news or copy, something to write about. Granted, there are fewer tapes actually lost than in the past, and far fewer vs. lost or stolen laptops and other devices with data on them. There are probably other reasons, such as the lightning rod effect: given how much industry hype there is around clouds, when something does happen, the cynics or foes come out in force, sometimes with FUD.

Similar to traditional hardware or software based product vendors, some service providers have even tried to convince me that they have never had an incident, or lost, corrupted or compromised any data; yeah, right. Candidly, I put more credibility and confidence in a vendor or solution provider who tells me that they have had incidents and taken steps to prevent them from recurring. Granted, some of those steps might be made public while others might be under NDA; at least they are learning and implementing improvements.

As part of gaining insights, here are some links to AWS, Google, Microsoft Azure and other service status dashboards where you can view current and past situations.

What is your take on IT clouds? Click here to cast your vote and see what others are thinking about clouds.

Ok, nuff said for now (check out part II here )

Disclosure: I am a customer of AWS for EC2, EBS, S3 and Glacier as well as a customer of Bluehost for hosting and Rackspace for backups. Other than Amazon being a seller of my books (and my blog via Kindle) along with running ads on my sites and being an Amazon Associates member (Google also has ads), none of those mentioned are or have been StorageIO clients.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Ceph Day in Amsterdam and Sage Weil on Object Storage

Now also available via

This is a new episode in the continuing StorageIO industry trends and perspectives podcast series (you can view more episodes or shows along with other audio and video content here), which you can also listen to via iTunes or via your preferred means using this RSS feed (https://storageio.com/StorageIO_Podcast.xml).

StorageIO industry trends cloud, virtualization and big data

In this episode, I am at the Ceph Day event in Amsterdam, Holland, at the Tobacco Theatre. My guest for this episode is Ceph creator Sage Weil, who is also the founder of inktank.com, which provides services and support for the open source Ceph project.

For those not familiar with Ceph, it is an open source, distributed, scale-out object storage software platform that can be used for deploying cloud and managed services; general-purpose storage for research, commercial, scientific, high performance computing (HPC) or high productivity computing (commercial) workloads; and backup or data protection and archiving destinations.
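Since Ceph's RADOS Gateway (radosgw) exposes an S3-compatible API, client code written for AWS S3 can, in principle, be pointed at a Ceph cluster instead. Here is a hedged boto3 sketch; the endpoint URL, port, bucket and credentials are hypothetical placeholders that will differ per deployment.

import boto3

# Point an S3 client at a Ceph RADOS Gateway endpoint instead of AWS
ceph = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.local:7480",  # hypothetical gateway endpoint
    aws_access_key_id="RGW_ACCESS_KEY",            # placeholder credentials
    aws_secret_access_key="RGW_SECRET_KEY",
)

ceph.create_bucket(Bucket="archive")
ceph.put_object(Bucket="archive", Key="projects/report.pdf", Body=b"archive payload")
print([o["Key"] for o in ceph.list_objects_v2(Bucket="archive").get("Contents", [])])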

During our conversation, Sage presents an overview of what Ceph is (e.g. Ceph for non-dummies), where and how it can be used, some history of the project, and how it fits in with or provides an alternative to other solutions. Sage also talks about the business or commercial considerations for open source based projects, the importance of community and having good business mentors and partners, as well as staying busy with his young family.

If you are a Ceph fan, gain more insight into Sage along with Ceph Day sponsors Inktank and 42on. On the other hand, if you are new to object storage, open source storage software or cloud storage, listen in to gain perspectives on where technology such as Ceph fits for public, private, hybrid or traditional environments.

Click here (right-click to download MP3 file) or on the microphone image to listen to the conversation with Sage and myself.

StorageIO podcast

Also available via

Watch (and listen) for more StorageIO industry trends and perspectives audio blog posts, podcasts and other upcoming events. Also, be sure to check out other related podcasts, videos, posts, tips and industry commentary at StorageIO.com and StorageIOblog.com.

Enjoy this episode Ceph Day in Amsterdam with Sage Weil.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2012 StorageIO and UnlimitedIO All Rights Reserved