March 2013 Server and StorageIO Update Newsletter

StorageIO Newsletter Image
March 2013 Newsletter

Welcome to the March 2013 edition of the StorageIO Update newsletter, which includes a new format and added content.

You can access this newsletter via various social media venues (some are shown below) in addition to the StorageIO web sites and subscriptions.

Click on the following links to view the March 2013 edition as an HTML (sent via email) version, or as a PDF version.

Visit the newsletter page to view previous editions of the StorageIO Update.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

Nuff said for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud conversations: AWS EBS, Glacier and S3 overview (Part III)

Storage I/O industry trends image

Amazon Web Services (AWS) recently added EBS Optimized support for enhanced bandwidth EC2 instances (read more here). This industry trends and perspective cloud conversation is the third (tying the posts together) in a three-part series companion to the AWS EBS optimized post found here. Part I is here (closer look at EBS) and part II is here (closer look at S3).

AWS image via Amazon.com

Cloud storage and object storage I/O figure
Cloud and object storage access example via Cloud and Virtual Data Storage Networking

AWS cloud storage gateway

In 2012 AWS released their Storage Gateway, which you can use and try for free here, either as an EC2 Amazon Machine Image (AMI) or deployed locally on a hypervisor such as VMware vSphere/ESXi. About a year ago I did a storage gateway post (First, second and third impressions) when it was first released. I will do a new post soon following up with my later impressions and experiences of having used it recently. For now, my quick fourth impressions can be found here in this AWS Marketplace review. In general, the gateway is an AWS alternative to using third-party gateway products, appliances or software tools for accessing AWS storage.

AWS Storage Gateway
Image courtesy of www.amazon.com

When deployed locally on a VM, the storage gateway communicates back to the S3 and EBS storage services (depending on how it is configured) using the AWS APIs. Locally, the storage gateway presents an iSCSI block access method for Windows or other servers to use.

There are two modes, one being Gateway-Stored and the other Gateway-Cached. Gateway-Stored keeps your primary storage local, mapped to the storage gateway, with asynchronous (time delayed, user defined) snapshots sent to S3 via EBS volumes. This is a handy way to have local storage for low latency access, yet use AWS for HA, BC and DR, along with a means for doing migration into or out of AWS. Gateway-Cached mode places primary storage in AWS S3 with a local cached copy to reduce network overhead.

Storage I/O industry trends image

When I tried the gateway a month or so ago, using both modes, I was not able to view any of my data using standard S3 tools. For example, if I looked in my S3 buckets the objects did not appear, something that AWS said had to do with where and how those buckets and objects are managed. On the other hand, I was able to see EBS snapshots for the gateway-stored mode, including using them as a means of moving data between local and AWS EC2 instances. Note that regardless of the AWS storage gateway mode, some local cache storage is needed, and likewise some EBS volumes will be needed depending on which mode is used.
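
As an aside, if you want to double-check from outside the gateway what is (or is not) visible in your S3 buckets, the following is a minimal sketch using the boto Python library (assumes AWS credentials are already configured via environment variables or a boto config file):

```python
# Minimal sketch: list S3 buckets and their objects using the boto library.
# Assumes AWS credentials are configured (environment variables or ~/.boto).
import boto

conn = boto.connect_s3()

for bucket in conn.get_all_buckets():
    print("Bucket: %s" % bucket.name)
    for key in bucket.list():
        print("  %s  %s bytes  %s" % (key.name, key.size, key.last_modified))
```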

When I used the gateway, a Windows Server mounted the iSCSI volume presented by the storage gateway and in turn served it to other systems as a shared folder. Thus while having block access such as iSCSI is nice, a NAS (NFS or CIFS) presentation and access mode would also be useful. However, more on the storage gateway in a future post. Also note that beyond the free trial period (you may have to pay for storage being used) for using the gateway, there are also fees for the S3 and EBS storage volumes used.

AWS image via Amazon.com

What about Glacier?

Shortly after its release last year, I did this piece about Glacier and have since been doing some proof-of-concept testing with it.

I like Glacier and its prospects for various uses, particularly for inactive data including deep archives that will seldom if ever be accessed, yet need to be retained. The business value proposition of Glacier is very high durability at a low cost, assuming that you do not need to access your data frequently, and that when you do, you can wait 3 to 5 hours before retrieving it from your S3 buckets.

Access to Glacier is via API or the AWS console, so getting things into and out of it can be a challenge. For example, I wanted to see if I could use the AWS storage gateway to more easily bulk move things into Glacier via S3, however no luck, at least for today. Speaking of S3, by setting your policies you determine when objects get moved into Glacier as well as how long they will stay there; you can read more about Glacier here and via AWS here.
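
For illustration, here is a sketch of setting such a lifecycle policy with the boto Python library (assuming a boto version with lifecycle Transition support; the bucket name and prefix are hypothetical placeholders), which transitions objects under a prefix to Glacier after 30 days:

```python
# Sketch: configure an S3 lifecycle rule that transitions objects to Glacier.
# Assumes a boto version with lifecycle Transition support and valid credentials;
# the bucket name and prefix are hypothetical placeholders.
import boto
from boto.s3.lifecycle import Lifecycle, Rule, Transition

conn = boto.connect_s3()
bucket = conn.get_bucket('my-example-bucket')

to_glacier = Transition(days=30, storage_class='GLACIER')
rule = Rule('archive-old-logs', 'logs/', 'Enabled', transition=to_glacier)

lifecycle = Lifecycle()
lifecycle.append(rule)            # a Lifecycle behaves like a list of rules
bucket.configure_lifecycle(lifecycle)
```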

Storage I/O industry trends image

How much do these AWS services cost?

Fees vary depending on which region is selected, amount of space capacity, level of durability and availability, and performance, along with the type of service. S3 pricing can be found here, including a free trial tier along with optional fees. Other AWS fees for EC2 can be found here, EBS pricing here, Glacier here, and storage gateway costs are located here.

Note that there is a myth that cloud vendors have hidden fees, which may be the case for some; however, so far I have not seen that to be the case with AWS. As a consumer, designer or architect, by doing your homework and looking at the above links among others, you can be ready and understand the various fees and options. Hence, like procuring traditional hardware, software or services, do your due diligence and be an informed shopper.

Amazon Web Services (AWS) image

Some more service cost notes include:

Note that with S3 Standard and RRS objects there is not a charge for deletion of objects, however there is a pro-rated charge per GByte of Glacier objects removed prior to 90 days. Glacier also allows up to 5% of your average monthly storage usage (pro-rated daily) to be restored with no charge, other fees apply for restoring larger amounts in a given period. Thus if you are planning on accessing and using data, analyze what your activity and usage will be as part of calculating your costs with Glacier. Read more about Glacier here.
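
As a rough illustration of that homework, the sketch below estimates Glacier early-deletion and restore-overage amounts; all rates and the exact pro-rating rules here are simplified assumptions for illustration only, so verify against current AWS pricing and documentation:

```python
# Rough, illustrative Glacier cost helpers (rates and pro-rating are simplified
# assumptions for illustration only; check current AWS pricing and terms).
def glacier_early_delete_fee(gb_deleted, days_stored, monthly_rate_per_gb=0.03):
    """Approximate pro-rated fee for data removed before 90 days."""
    if days_stored >= 90:
        return 0.0
    remaining_months = (90 - days_stored) / 30.0
    return gb_deleted * monthly_rate_per_gb * remaining_months

def glacier_restore_overage_gb(avg_monthly_storage_gb, gb_restored):
    """Amount restored beyond the roughly 5% monthly free allowance."""
    free_allowance = avg_monthly_storage_gb * 0.05
    return max(0.0, gb_restored - free_allowance)

print(glacier_early_delete_fee(gb_deleted=500, days_stored=45))       # deleted half way in
print(glacier_restore_overage_gb(avg_monthly_storage_gb=2000, gb_restored=150))
```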

Standard EBS volumes are charged by the amount of storage space capacity you provision in GB until released. For EBS snapshot copies there are fees for transferring data across regions; once moved, the rates of the new region apply for the snapshot.

Amazon Web Services (AWS) image

As with Standard volumes, volume storage for Provisioned IOPS volumes is charged by the amount you provision in GB per month. With Provisioned IOPS volumes, you are also charged by the amount you provision in IOPS, pro-rated as a percentage of the days you have the volume in use for the month.

Thus it is important for cloud storage planning to know not only your space requirements, but also IOPS, bandwidth, and level of availability as well as durability. For Standard volumes, you will likely see a lower number of I/O requests on your bill than is seen by your application unless you sync all of your I/Os to disk. Thus pay attention to what your needs are in terms of availability (accessibility), durability (resiliency or survivability), space capacity, and performance.

Leverage AWS CloudWatch tools and APIs to monitor the metrics that matter for timely insight and situational awareness into how EBS, EC2, S3, Glacier, Storage Gateway and other services are being used (or costing you). Also visit the AWS service health status dashboard to gain insight into how things are running, to help gain confidence with cloud services and solutions.
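
For example, here is a minimal sketch of pulling recent EBS volume metrics from CloudWatch using the boto Python library (the region, time window and volume ID are hypothetical placeholders; credentials are assumed to be configured):

```python
# Sketch: pull recent EBS volume metrics from CloudWatch using boto.
# The region, time window and volume ID are hypothetical placeholders.
import datetime
import boto.ec2.cloudwatch

cw = boto.ec2.cloudwatch.connect_to_region('us-east-1')

end = datetime.datetime.utcnow()
start = end - datetime.timedelta(hours=1)

datapoints = cw.get_metric_statistics(
    300,                     # period in seconds (5 minute granularity)
    start, end,
    'VolumeReadOps',         # also try VolumeWriteOps, VolumeReadBytes, VolumeQueueLength
    'AWS/EBS',
    ['Sum'],
    dimensions={'VolumeId': 'vol-12345678'})   # hypothetical volume ID

for point in datapoints:
    print(point['Timestamp'], point['Sum'])
```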

Storage I/O industry trends image

When it comes to Cloud, Virtualization, Data and Storage Networking along with AWS among other services, tools and technologies including object storage, we are just scratching the surface here.

Hopefully this helps to fill in some gaps giving more information addressing questions, along with generating new ones to prepare for your journey with clouds. After all, don’t be scared of clouds. Be prepared, do your homework, identify your concerns and then address those to gain cloud confidence.

Additional reading and related items:

  • Cloud conversations: AWS EBS optimized instances
  • Cloud conversations: AWS EBS, Glacier and S3 overview (Part I)
  • Cloud conversations: AWS EBS, Glacier and S3 overview (Part II)
  • Cloud conversations: AWS Government Cloud (GovCloud)
  • Cloud conversations: Gaining cloud confidence from insights into AWS outages
  • AWS (Amazon) storage gateway, first, second and third impressions
  • Cloud conversations: Public, Private, Hybrid what about Community Clouds?
  • Amazon cloud storage options enhanced with Glacier
  • Amazon Web Services (AWS) and the NetFlix Fix?
  • Cloud conversation, Thanks Gartner for saying what has been said
  • Cloud and Virtual Data Storage Networking via Amazon.com
  • Seven Databases in Seven Weeks
  • www.objectstoragecenter.com
Ok, nuff said (for now).

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Cloud conversations: AWS EBS, Glacier and S3 overview (Part II S3)

    Storage I/O industry trends image

    Amazon Web Services (AWS) recently added EBS Optimized support for enhanced bandwidth EC2 instances (read more here). This industry trends and perspective cloud conversation is the second (looking at S3 object storage) in a three-part series companion to the AWS EBS optimized post found here. Part I is here (closer look at EBS) and part III is here (tying it all together).

    AWS image via Amazon.com

For those not familiar, Simple Storage Service (S3), Glacier and Elastic Block Storage (EBS) are part of the AWS cloud storage portfolio of services. With S3, you specify a region where a bucket is created that will contain objects that can be written, read, listed and deleted. You can create multiple buckets in a region, each with an unlimited number of objects ranging from 1 byte to 5 TBytes in size. Each object has a unique, user or developer assigned key. In addition to indicating which AWS region, S3 buckets and objects are provisioned using different levels of availability, durability, SLAs and costs (view S3 SLAs here).
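
To make that concrete, here is a minimal sketch using the boto Python library that creates a bucket, writes an object and reads it back (bucket and key names are hypothetical; AWS credentials are assumed to be configured):

```python
# Minimal sketch: create an S3 bucket, write an object, then read it back (boto).
# Bucket and key names are hypothetical; AWS credentials are assumed configured.
import boto
from boto.s3.key import Key

conn = boto.connect_s3()
bucket = conn.create_bucket('storageio-example-bucket')   # bucket names are globally unique

k = Key(bucket)
k.key = 'hello.txt'                            # the object (key) name within the bucket
k.set_contents_from_string('hello object storage')

print(k.get_contents_as_string())              # read the object back
print([key.name for key in bucket.list()])     # list the bucket's objects
```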

    AWS S3 example image

Cost will vary depending on the AWS region being used, along with whether Standard or Reduced Redundancy Storage (RRS) is selected. Standard S3 storage is designed for 99.999999999% durability (how many copies exist) and 99.99% availability (how often it can be accessed) on an annual basis, capable of sustaining the concurrent loss of data in two facilities.

As its name implies, for a lower fee and level of durability, S3 RRS has an annual durability of 99.99% and availability of 99.99%, capable of sustaining the loss of a single facility. In the following figure, durability is how many copies of data exist spread across different servers and storage systems in various data centers and availability zones.

    cloud storage and object storage across availability zone image
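
For a sense of what those availability percentages translate into, here is a quick back-of-the-napkin calculation (note that the durability figures describe the probability of losing an object, which is a different measure than downtime):

```python
# Back-of-the-napkin: convert an availability percentage into expected downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (99.9, 99.99, 99.999):
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability / 100.0)
    print("%.3f%% availability ~= %.1f minutes of downtime per year"
          % (availability, downtime_minutes))
```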

    What would you put in RRS vs. Standard S3 storage?

Items that need some level of persistence but can be refreshed, recreated or restored from some other place or pool of storage, such as thumbnails, static content or read caches. Other items would be those where you could tolerate some downtime while waiting for data to be restored, recovered or rebuilt from elsewhere, in exchange for a lower cost.

Different AWS regions can be chosen for regulatory compliance requirements, performance, SLAs, cost and redundancy, with authentication mechanisms including encryption (SSL and HTTPS) to make sure data is kept secure. Various rights and access can be assigned to objects, including making them public or private. In addition to logical data protection (security, identity and access management (IAM), encryption, access control), policies also apply to determine the level of durability and availability or accessibility of buckets and objects. Other attributes of buckets and objects include life-cycle management policies and logging of activity to the items. Objects also carry metadata containing information about the data being stored, shown in a generic example below.

    Cloud storage and object storage spread across availability zones figure

Access to objects is via standard REST and SOAP interfaces with an Application Programming Interface (API). For example, default access is via HTTP along with a BitTorrent interface, with optional support via various gateways, appliances and software tools.

    Cloud storage and object storage IO figure
    Example cloud and object storage access

    The above figure via Cloud and Virtual Data Storage Networking (CRC Press) shows a generic example applicable to AWS services including S3 being accessed in different ways. For example I access my S3 buckets and objects via Jungle Disk (one of the tools I use for data protection) that can also access my Rackspace Cloudfiles data. In the following figure there are examples of some of my S3 buckets and objects used by different applications and tools that I have in various AWS regions.

    Image of AWS S3 usage
    AWS S3 buckets and objects in different regions

Note that I sometimes use other AWS regions outside the US for testing purposes; for compliance purposes my production, business or personal data is only in the US regions.

The following figure is a generic example of how cloud and object storage are accessed using different tools, hardware, software and APIs, along with gateways. AWS is an example of what is shown in the following figure as a Cloud Service, with S3, EBS or Glacier as cloud storage. Common example API commands are also shown, which will vary by vendor, product or solution definition or implementation. While the Amazon S3 API, which is REST/HTTP based, has become a de facto industry standard, there are other APIs including CDMI (Cloud Data Management Interface), developed by SNIA, which has gained ISO accreditation.

    Cloud storage and object storage I/O figure
    Cloud and object storage access example via Cloud and Virtual Data Storage Networking

    In addition to using Jungle Disk which manages my AWS keys and objects that it creates, I can also access my S3 objects via the AWS management console and web tools, also via third-party tools including Cyberduck.

    Cyberduck tool.

    Additional reading and related items:

    Cloud conversations: AWS EBS, Glacier and S3 overview (Part I)

    Storage I/O industry trends image

    Amazon Web Services (AWS) recently added EBS Optimized support for enhanced bandwidth EC2 instances (read more here). This industry trends and perspective cloud conversation is the first (looking at EBS) in a three-part series companion to the AWS EBS optimized post found here. Part II is here (closer look at S3) and part III is here (tying it all together).

    AWS image via Amazon.com

For those not familiar, Simple Storage Service (S3), Glacier and Elastic Block Storage (EBS) are part of the AWS cloud storage portfolio of services. There are several other storage and data related services for little data databases (SQL and NoSQL based); other offerings include compute, data management, application and networking services for different needs, shown in the following image.

    AWS services console image
    AWS Services Console via www.amazon.com

Simple Storage Service (S3) is commonly used in the context of cloud storage and object storage, accessed via its S3 API. S3 can be used externally from outside AWS as well as within or via other AWS services, for example with Elastic Compute Cloud (EC2), including via the Amazon Storage Gateway (read more here and about EC2 here). Glacier is the AWS cold or deep storage service for inactive data and is a companion to S3 that you can read more about here.

S3 is well suited for both big and little data repositories of objects ranging from backup to archive to active video, images and much more. In fact, if you are using some of the different AaaS or SaaS services, including backup or file and video sharing, those may be using S3 as their back-end storage repository. For example, NetFlix leverages various AWS capabilities as part of its data and applications infrastructure (read more here).

    AWS basics

AWS consists of multiple regions, each containing multiple availability zones from which data and applications are supported.

Image of AWS regions and availability zones

Note that objects stored in a region never leave that region; for example, data stored in EU West never leaves Ireland, and data in US East never leaves Virginia.

    AWS does support the ability for user controlled movement of data between regions for business continuance (BC), high availability (HA) and disaster recovery (DR). Read more here at the AWS Security and Compliance site and in this AWS white paper.

    What about EBS?

That brings us to Elastic Block Storage (EBS), which is used by EC2 (read more about EC2 and instances here) as storage for cloud and virtual machines or compute instances. In addition to using S3 as a persistent backing store or target for holding snapshots, EBS can be thought of as primary storage. You can provision and allocate EBS volumes in the different data centers of the various AWS availability zones. As part of allocating your EBS volume you indicate the type: standard or Provisioned IOPS. In addition, the new EBS Optimized option enables instances that support the feature to have better I/O performance to storage.

The following image shows an EC2 instance with EBS volumes (standard and Provisioned IOPS) along with S3 volumes and snapshots. In the following example the instance and volumes are being served via the AWS US East region (Northern Virginia) using availability zone US East 1a. In addition, EBS optimized volumes are shown being used in the example to increase bandwidth or throughput performance between storage and the compute instance.

Image of AWS EC2 instance with EBS volumes and S3 snapshots

    Using the above as a basis, you can build on that to leverage multiple availability zones or regions for HA, BC and DR combined with application, network load balancing and other capabilities. Note that EBS volumes are protected for durability by being spread across different servers and storage in an availability zone. Additional protection is provided by using snapshots combined with S3. Additional BC and DR or HA protection can be accomplished by replicating data across availability zones.

    SQL applications using cloud and object storage services

    The above is an example of tying various components and services together. For example using different AWS availability zones, instances, EBS, S3 and other tools including those from third parties. Here is a link to a free chapter download from Cloud and Virtual Data Storage Networking (CRC Press) pertaining to data protection, BC and DR (available at Amazon here and Kindle here). In addition here is an AWS white paper on using their services for BC, HA and DR.

EBS volumes are created ranging in size from 1 GByte to 1 TByte of space capacity, with multiple volumes being mapped or attached to an EC2 instance. EBS volumes appear as virtual disk drives for block storage. From the EC2 instance and guest operating system you can mount, format and use the EBS volumes as any other block disk drive with your favorite tools and file systems. In addition to space capacity, EBS volumes are also provisioned with standard I/O (e.g. disk based) performance or high performance Provisioned IOPS (e.g. SSD) for thousands of IOPS per instance. AWS states that a standard EBS volume should support about 100 IOPS on average, with about 2,000 IOPS for a Provisioned IOPS volume. If you need more than 2,000 IOPS, the AWS recommendation is to use multiple Provisioned IOPS volumes with data spread across them. Following is an example of AWS EBS volumes seen via the EC2 management interface.

Image of mapping AWS EBS to EC2 instance
    AWS EC2 and EBS configuration status

Note that there is a maximum ratio of 10 IOPS per GByte of space capacity being provisioned. If you try to play a game of provisioning 1,000 IOPS on a 10 GByte EBS volume to keep your costs down, you are out of luck. Thus to get 1,000 IOPS you would need to allocate at least a 100 GByte EBS volume, for which you will be billed on a monthly pro-rated basis (see the sizing sketch after the following figure). The following is an example of provisioning an AWS EBS volume using Provisioned IOPS in the US East region in the 1a availability zone.

    Image of AWS EBS provisioned IOPs
    Provisioning IOPS with EBS volume
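
Here is a small sketch of that sizing math, along with a hedged example of requesting a Provisioned IOPS volume via the boto Python library (the region, zone and target IOPS are illustrative; verify the current ratio limit and API details against AWS documentation):

```python
# Sketch: size a Provisioned IOPS EBS volume given a 10 IOPS-per-GB limit, then
# request it via boto (zone and IOPS target are illustrative; verify the current
# ratio limit and API details against AWS documentation before relying on this).
import math
import boto.ec2

def min_volume_gb(target_iops, iops_per_gb=10):
    """Smallest volume size (GB) that can carry the requested provisioned IOPS."""
    return int(math.ceil(target_iops / float(iops_per_gb)))

target_iops = 1000
size_gb = min_volume_gb(target_iops)           # 1,000 IOPS -> at least 100 GB
print("Need at least %d GB for %d provisioned IOPS" % (size_gb, target_iops))

ec2 = boto.ec2.connect_to_region('us-east-1')
volume = ec2.create_volume(size_gb, 'us-east-1a',
                           volume_type='io1', iops=target_iops)
print(volume.id, volume.size, volume.iops)
```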

    Standard and Provisioned IOPS EBS volumes

Standard EBS volumes are good for boot images or other application usage that is not I/O performance intensive. For databases or other active applications where more performance is needed, EBS Provisioned IOPS volumes are your option. Note that the provisioned IOPS rate is persistent for the specific volume during its life; thus if you set it and forget it, including not using it without releasing the volume, you will continue to be billed for what you provisioned.

    Additional reading and related items:

  • Cloud conversations: AWS EBS optimized instances
  • Cloud conversations: AWS EBS, Glacier and S3 overview (Part II S3)
  • Cloud conversations: AWS EBS, Glacier and S3 overview (Part III)
  • Cloud conversations: AWS Government Cloud (GovCloud)
  • Cloud conversations: Gaining cloud confidence from insights into AWS outages
  • AWS (Amazon) storage gateway, first, second and third impressions
  • Cloud conversations: Public, Private, Hybrid what about Community Clouds?
  • Amazon cloud storage options enhanced with Glacier
  • Amazon Web Services (AWS) and the NetFlix Fix?
  • Cloud conversation, Thanks Gartner for saying what has been said
  • Cloud and Virtual Data Storage Networking via Amazon.com
  • Seven Databases in Seven Weeks
  • www.objectstoragecenter.com
  • Continue reading part II (closer look at S3) here and part III (tying it all together) here.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Cloud conversations: AWS EBS Optimized Instances

    Storage I/O industry trends image

Amazon Web Services (AWS) recently announced global availability of Elastic Block Storage (EBS) optimized support for four additional Elastic Compute Cloud (EC2) instance types. The support enables optimized performance between standard and Provisioned IOPS EBS volumes and EC2 instances to meet different bandwidth or throughput needs (learn more about AWS EBS, EC2, S3 and Glacier here).

    AWS image via Amazon.com

The four EBS optimized instance types are m3.xlarge, m3.2xlarge, m2.2xlarge and c1.xlarge, providing dedicated bandwidth or throughput between the EC2 instances and EBS volumes. The performance or bandwidth ranges from 500 Mbits (500 / 8 = 62.5 MBytes) per second to 1,000 Mbits (1,000 / 8 = 125 MBytes) per second depending on the type of instance. As a refresher, EC2 instances (which by the time you read this could change) vary in size and functionality, with different amounts of EC2 Compute Units (ECU), number of virtual cores, amount of storage space included, 32 or 64 bit, storage and networking I/O performance, and EBS Optimized or not. In addition to instances, different operating system images can be installed, either those licensed from AWS such as various Windows and Unix versions, or you can supply your own.

    Image of EC2 instance

There are also different generations of instances such as M1 (first generation, where one ECU = 1.0 to 1.2 GHz of a 2007 era Opteron or Xeon processor) and M3 (second generation with faster processors), along with Micro low-cost options. There are also other optimized instances including those for high or large amounts of memory, high CPU or compute processing, clustered compute, high memory clustered, clustered GPU (e.g. using Nvidia Tesla GPUs), high I/O and high storage space capacity needs.

    Here is the announcement from AWS:

    Dear Amazon Web Services Customer,

    We are delighted to announce the global availability of EBS-optimized support for four additional instance types: m3.xlarge, m3.2xlarge, m2.2xlarge, and c1.xlarge. EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options between 500 Megabits per second and 1,000 Megabits per second depending on the instance type used. The dedicated throughput minimizes contention between EBS I/O and other traffic from your Amazon EC2 instance, providing the best performance for your EBS volumes.

    EBS-optimized instances are designed for use with both Standard and Provisioned IOPS EBS volumes. Standard volumes deliver 100 IOPS on average with a best effort ability to burst to hundreds of IOPS, making them well-suited for workloads with moderate and bursty I/O needs. When attached to an EBS-optimized instance, Provisioned IOPS volumes are designed to consistently deliver up to 2000 IOPS from a single volume, making them ideal for I/O intensive workloads such as databases. You can attach multiple Amazon EBS volumes to a single instance and stripe your data across them for increased I/O and throughput performance.

    Amazon EBS-optimized support is now available for m3.xlarge, m3.2xlarge, m2.2xlarge, m2.4xlarge, m1.large, m1.xlarge, and c1.xlarge instance types, and is currently supported in the US-East (N. Virginia), US-West (N. California), US-West (Oregon), EU-West (Ireland), Asia Pacific (Singapore), Asia Pacific (Japan), Asia Pacific (Sydney), and South America (São Paulo) Regions.

    You can learn more by visiting the Amazon EC2 detail page.

    Sincerely,

    The Amazon EC2 Team

What this means is that AWS is enabling customers to size their compute instances and storage volumes with more flexibility to meet different needs: for example, EC2 instances with various compute processing capabilities, amounts of memory, and network and storage I/O performance to volumes, along with storage volumes based on different space capacities, standard or Provisioned IOPS, bandwidth or throughput performance between the instance and volume, and data protection such as snapshots.

This means that the cost per space capacity of an EBS volume varies based on which AWS availability zone it is in, standard (lower IOPS performance) or Provisioned IOPS (faster), along with instance type. In other words, cloud storage is not just about the cost per GByte; it is also about the cost for IOPS, the bandwidth to use it, where it is located (e.g. with AWS, which availability zone), type of service, and level of availability and durability, among other attributes.
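
As a rough illustration of how those dimensions combine, the sketch below compares the monthly cost of a standard volume against a Provisioned IOPS volume; every rate used is an illustrative assumption rather than current AWS pricing:

```python
# Illustrative only: compare monthly EBS volume costs under assumed example rates.
# Every price below is a placeholder assumption; check current AWS pricing.
def standard_volume_cost(gb, million_io_requests,
                         price_per_gb=0.10, price_per_million_io=0.10):
    return gb * price_per_gb + million_io_requests * price_per_million_io

def piops_volume_cost(gb, provisioned_iops,
                      price_per_gb=0.125, price_per_provisioned_iops=0.10):
    return gb * price_per_gb + provisioned_iops * price_per_provisioned_iops

print("Standard 100 GB, 50M I/Os per month : $%.2f" % standard_volume_cost(100, 50))
print("Provisioned 100 GB at 1,000 IOPS    : $%.2f" % piops_volume_cost(100, 1000))
```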

    Additional reading and related items:

    Continue reading part I (closer look at EBS) here, part II (closer look at S3) here and part III (tying it all together) here.

    Ok, nuff said (for now)

    Cheers
    Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Welcome to the Cloud Bulk Object Storage Resources Center

    Updated 8/31/19

    Cloud Bulk Big Data Software Defined Object Storage Resources

    server storage I/O trends Object Storage resources

    Welcome to the Cloud, Big Data, Software Defined, Bulk and Object Storage Resources Center Page objectstoragecenter.com.

    This object storage resources, along with software defined, cloud, bulk, and scale-out storage page is part of the server StorageIOblog microsite collection of resources. Software-defined, Bulk, Cloud and Object Storage exist to support expanding and diverse application data demands.

    Other related resources include:

  • Software Defined, Cloud, Bulk and Object Storage Fundamentals
  • Software Defined Data Infrastructure Essentials book (CRC Press)
  • Cloud, Software Defined, Scale-Out, Object Storage News Trends
  •  Object storage SDDC SDDI
    Via Software Defined Data Infrastructure Essentials (CRC Press 2017)

    Bulk, Cloud, Object Storage Solutions and Services

There are various types of cloud, bulk, and object storage including public services such as Amazon Web Services (AWS) Simple Storage Service (S3), Backblaze, Google, Microsoft Azure, IBM Softlayer and Rackspace, among many others. There are also solutions for hybrid and private deployment from Cisco, Cloudian, CTERA, Cray, DDN, Dell EMC, Elastifile, Fujitsu, Hitachi Vantara/HDS, HPE, Hedvig, Huawei, IBM, NetApp, Noobaa, OpenIO, OpenStack, Quantum, Rackspace, Rozo, Scality, Spectra, StorPool, StorageCraft, Suse, Swift, Virtuozzo, WekaIO, WD, among many others.

    Bulk Cloud Object storage SDDC SDDI
    Via Software Defined Data Infrastructure Essentials (CRC Press 2017)

    Cloud products and services among others, along with associated data infrastructures including object storage, file systems, repositories and access methods are at the center of bulk, big data, big bandwidth and little data initiatives on a public, private, hybrid and community basis. After all, not everything is the same in cloud, virtual and traditional data centers or information factories from active data to in-active deep digital archiving.

    Object Context Matters

Before discussing object storage, let's take a step back and look at some context that can clarify some confusion around the term object. The word object has many different meanings and contexts, both inside of the IT world as well as outside. Context matters with the term object: as a noun, it can be a thing that can be seen or touched, as well as a person or thing toward which action or feeling is directed.

    Besides a person, place or physical thing, an object can be a software-defined data structure that describes something. For example, a database record describing somebody’s contact or banking information, or a file descriptor with name, index ID, date and time stamps, permissions and access control lists along with other attributes or metadata. Another example is an object or blob stored in a cloud or object storage system repository, as well as an item in a hypervisor, operating system, container image or other application.

Besides being a noun, object can also be a verb, such as expressing disapproval or disagreement with something or someone. From an IT context perspective, an object can also refer to a programming method (e.g. object-oriented programming [OOP], or Java [among other environments] objects and classes) and systems development, in addition to describing entities with data structures.

    In other words, a data structure describes an object that can be a simple variable, constant, complex descriptor of something being processed by a program, as well as a function or unit of work. There are also objects unique or with context to specific environments besides Java or databases, operating systems, hypervisors, file systems, cloud and other things.

    The Need For Bulk, Cloud and Object Storage

    There is no such thing as an information recession with more data being generated, moved, processed, stored, preserved and served, granted there are economic realities. Likewise as a society our dependence on information being available for work or entertainment, from medical healthcare to social media and all points in between continues to increase (check out the Human Face of Big Data).

    In addition, people and data are living longer, as well as getting larger (hence little data, big data and very big data). Cloud products and services along with associated object storage, file systems, repositories and access methods are at the center of big data, big bandwidth and little data initiatives on a public, private, hybrid and community basis. After all, not everything is the same in cloud, virtual and traditional data centers or information factories from active data to in-active deep digital archiving.

    Click here to view (and hear) more content including cloud and object storage fundamentals

    Click here to view software defined, bulk, cloud and object storage trend news

    cloud object storage

    Where to learn more

    The following resources provide additional information about big data, bulk, software defined, cloud and object storage.



    Via FujiFilm IT Summit: Software Defined Data Infrastructures (SDDI) and Hybrid Clouds
    Via MultiChannel: After ditching cloud business, Verizon inks Virtual Network Services deal with Amazon
    Via MultiChannel: Verizon Digital Media Services now offers integrated Microsoft Azure Storage
    Via StorageIOblog: AWS EFS Elastic File System (Cloud NAS) First Preview Look
    Via InfoStor: Cloud Storage Concerns, Considerations and Trends
    Via InfoStor: Object Storage Is In Your Future
    Via Server StorageIO: April 2015 Newsletter Focus on Cloud and Object storage
    Via StorageIOblog: AWS S3 Cross Region Replication storage enhancements
    Cloud conversations: AWS EBS, Glacier and S3 overview
    AWS (Amazon) storage gateway, first, second and third impressions
    Cloud and Virtual Data Storage Networking (CRC Book)

    View more news, trends and related cloud object storage activity here.

Videos and podcasts at storageio.tv are also available via Apple iTunes.

    Human Face of Big Data
    Human Face of Big Data (Book review)

    Seven Databases in Seven weeks Seven Databases in Seven Weeks (Book review)

    Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

    Software Defined Data Infrastructure Essentials Book SDDC

    What This All Means

Object and cloud storage are in your future; the questions are when, where, with what and how, among others.

Watch for more content and links to be added to this object storage center page soon, including posts, presentations, podcasts, polls and perspectives, along with services and product solution profiles.

    Ok, nuff said, for now.

    Gs

    Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

    Open Data Center Alliance (ODCA) BMW Private Cloud Strategy

    Storage I/O cloud virtual and big data perspectives

If your organization, like StorageIO, is a member of the Open Data Center Alliance (ODCA), you may be aware of the resources they make available about cloud, virtualization, security and more. Unlike so many other industry associations or trade groups dominated by vendors, the ODCA has an IT or customer focus, including member developed best practices, strategies and templates.

    A good example is the recently released ODCA member BMW group private cloud strategy document.

This 24 page document covers the BMW Group's private cloud strategy, which sets the stage for a phased move toward a future hybrid cloud. By taking a phased approach, it seems that BMW is leveraging and transitioning for the future while maintaining support for their current environment (including Windows-based) as part of a paradigm shift. This is refreshing, and good to see how organizations are looking to use cloud as part of a paradigm or IT service delivery model and not just as a new technology or platform focus.

Topics covered include IaaS along with PaaS for DB, Web, SAP and CSaaS (Corporate Software as a Service) based on the NIST cloud model. Also included are the roles and integration of CMDB, ITSM, ITIL and orchestration in a business vs. technology driven model. Being business driven means there is a mission statement for the BMW cloud strategy, with objectives aligned to support organizational enablement vs. using different tools, technologies or trends, along with design criteria.

What I like about the BMW strategy is that it is aligned to support the business, as opposed to finding ways to use a technology and then justifying why a cloud is needed. In other words, something different from those needing a technology, tool, product, standard or service to be adopted.

Thus, while I have been a vendor, the ODCA customer focused angle appeals to me from when I was on that side of the table working in IT organizations. On the other hand, for some of you, reading through the BMW document might result in deja vu from experiences with web-based, client-server, information utilities and other IT service delivery models or paradigms.

    Learn more at the ODCA newsroom

If you have not done so already, check out and join the ODCA.

    Ok nuff said

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Where has the FCoE hype and FUD gone? (with poll)

    Storage I/O cloud virtual and big data perspectives

A couple of years ago I did this post asking if FCoE is struggling to gain traction, or on a normal adoption course.

    Fast forward to today, has anybody else noticed that there seems to be less hype and fud on Fibre Channel (FC) over Ethernet (FCoE) than a year or two or three ago?

Does this mean that FCoE, as the FUD or detractors were predicting, is in fact stillborn with no adoption, no deployment and dead on arrival?

    Does this mean that FCoE as its proponents have said is still maturing, quietly finding adoption and deployment where it fits?

    Does this mean that FCoE like its predecessors Fibre Channel and Ethernet are still evolving, expanding from early adopter to a mature technology?

    Does this mean that FCoE is simply forgotten with software defined networking (SDN) having over-shadowed it?

Does this mean that FCoE has finally lost out and that iSCSI has finally stepped up and is living up to what it was hyped to do ten years ago?

    Does this mean that FC itself at either 8GFC or 16GFC is holding its own for now?

    Does this mean that InfiniBand is on the rebound?

    Does this mean that FCoE is simply not fun or interesting, or a shiny new technology with vendors not spending marketing money so thus people not talking, tweeting or blogging?

Does this mean that those who were either proponents pitching it or detractors despising it have found other things to talk about, from SDN to OpenFlow to IOV to Software Defined Storage (whatever, or whoever's, definition you subscribe to) to cloud, big or little data, and the list goes on?

I continue to hear of or talk with customer organizations deploying FCoE in addition to iSCSI, FC, NAS and other means of accessing storage for cloud, virtual and physical environments.

Likewise I see some vendor discussions occurring, not to mention what gets picked up via Google Alerts.

However, in general, the rhetoric both for and against, the hype and the FUD, seems to have subsided, at least for now.

    So what gives, what’s your take on FCoE hype and FUD?

    Cast your vote and see results here.


    Ok, nuff said

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    A Pivotal or cloudy moment for EMC and VMware?

    Storage I/O cloud virtual and big data perspectives

    EMC and VMware (who is majority owned by EMC) have announced a new joint initiative called Pivotal (read more here and here) as part of their software defined data center strategies and architecture.

    Image of EMC and VMware Pivotal PaaS cloud

    Is this a pivotal moment for both EMC and VMware signaling that they will be going head to head (via their new initiative based company) with Amazon Web Services (AWS), Microsoft Azure, HP Cloud services, Rackspace and a long list of others?

Part of the answer to that question would be based on what is meant by going head to head, and which aspects of those services. For Cloud Platform as a Service (PaaS) along with big data analytics, I would say yes. In terms of other cloud AaaS, SaaS or IaaS, probably not as much at this time.

On the surface, Pivotal appears at least initially to be more of a Platform as a Service (PaaS) play vs. a Software as a Service (SaaS), Application as a Service (AaaS) or Infrastructure as a Service (IaaS) play. Thus it will be interesting to see how Pivotal pivots and evolves in other directions beyond its initial cloud and big data application development assistance.

This will not be the first initiative or company jointly formed with VMware, following on the heels of VCE, which also includes Cisco and Intel as partners.

Pivotal will be headed up by Paul Maritz, who has been EMC Chief Strategist and was formerly CEO of VMware, as well as having spent time at Microsoft. EMC will have 69% ownership with VMware having the balance; it is estimated that about $400 million US dollars will need to be invested.

The new company or initiative is slated to launch on or about April 1, 2013 (April Fools Day) with target 2013 revenues of about $300 million. Projections are for annual revenue of around $1 billion in five years. That revenue will come from the existing assets and business being brought together, along with probably some net new business. Doing some quick back-of-the-napkin math shows an average straight-line growth of about 36% over five years.

VMware intellectual property and assets contributed:
• Cloud Foundry
• SpringSource
• Cetas

EMC intellectual property and assets contributed:
• Pivotal Labs
• Greenplum big data solutions

Thus, is this a pivotal move signaling entry into new areas that could further disrupt and cloud the status of VMware and EMC as technology suppliers?

Or does this clear the clouds a bit, bringing clarity to what EMC and VMware are doing along with leveraging their various acquisitions?

By clarity, this in theory should help position both EMC and VMware with their customers, partners and prospects as technology (along with associated services) suppliers (what some refer to as arms merchants) vs. competing with those entities.

    Storage I/O cloud virtual and big data perspectives

IMHO this is pivotal in that it helps to bring clarity to some of the different technologies and businesses that EMC and VMware have acquired. That clarity will help their own sales teams along with partners avoid the creation of revenue prevention teams impacting sales of other solutions.

Likewise there should be good synergy around the various tools, technologies and offerings for big data, little data and application development with Pivotal. That synergy is a combination of tools, technologies and development techniques, which should enable customers to leverage new technologies in new ways, vs. trying to use and deploy them in old ways.

Btw, anybody notice Mozy, or the lack of mention of it? Keep in mind that technology was brought back into the EMC backup group fold, while still being operated as a service. Also keep in mind that Mozy was bought by EMC and then transferred to VMware a couple of years ago.

    Ok, nuff said (for now).

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Part II: XtremIO, XtremSW and XtremSF EMC flash ssd portfolio redefined

    Part one of this two-part post provided a summary of today’s EMC (@EMCflash) announcement around XtremIO and renaming VFCache to XtremSF and associated software as XtremSW.

    Storage I/O industry trends and perspectives

    Synopsis of announcement

    • Product rollout and selective availability of the new all flash SSD array XtremIO
    • Rename server-side PCIe ssd flash cards from VFCache to XtremSF
    • New XtremSF models including enhanced multi-level cell (eMLC) with larger capacities
    • Rename VFCache caching software to XtremSW (enables cache mode vs. target mode)

Now let's take a closer look at what was announced, along with what it means in terms of industry trends and perspectives.

XtremIO has been in customer beta for some time, and now those beta customers along with some other early customers are able to acquire the product. In addition, EMC is opening up XtremIO to more prospective customers (Directed Availability) who have requirements or needs that line up with the product's target market capabilities.

    Storage I/O industry trends and perspectives

What this means is that XtremIO is not simply being put out into the general product population for broad distribution. Instead, it is being put into a controlled release (Directed Availability) to help customers, partners and EMC sales decide where best to use it, and thus reduce the risk of revenue prevention in other areas. The criteria or target opportunities (at least initially) are little-data applications including OLTP, server virtualization (where aggregation can cause aggravation) along with virtual desktop or VDI; in other words, many of the traditional or legacy IOP-focused SSD opportunities.

In addition to XtremIO, EMC has renamed their VFCache PCIe flash SSD cards (launched February 2012) to XtremSF, along with new models using both SLC and MLC nand flash. Also as part of today's announcement, EMC is renaming the cache software for XtremSF (e.g. VFCache) to XtremSW. If that prompted the question of whether you can now buy XtremSF as a target-mode-only card without the cache software, the answer is yes.

    What is XtremIO?

It is a new all flash SSD storage array. XtremIO is a cluster, grid or collection of nodes called bricks, with linear performance scaling, providing block based all flash SSD storage. Data services consist of data footprint reduction (DFR) including inline global (across all nodes or bricks) dedupe on 4 KByte chunks, along with thin provisioning. Global dedupe is done on ingest using a combination of flash-buffered metadata (tables, index or dictionary) of what has been seen before, along with multi-threaded software to leverage multi-core processors. Using global dedupe at ingest, only new unique data is saved, based on 4 KByte chunks.
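
To illustrate the general idea of inline deduplication on fixed 4 KByte chunks (a generic sketch, not EMC's implementation), consider the following:

```python
# Generic sketch of inline dedupe on fixed 4 KB chunks (not EMC's implementation):
# fingerprint each chunk on ingest and store only chunks not seen before.
import hashlib

CHUNK_SIZE = 4096
chunk_store = {}            # fingerprint -> chunk data (stands in for the metadata table)

def ingest(data):
    """Split data into 4 KB chunks, keep only unique ones, return chunk references."""
    refs = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        fingerprint = hashlib.sha1(chunk).hexdigest()
        if fingerprint not in chunk_store:     # new unique data gets written
            chunk_store[fingerprint] = chunk
        refs.append(fingerprint)               # duplicates only add a reference
    return refs

refs = ingest(b'A' * 16384 + b'B' * 4096)
print(len(refs), "chunks referenced,", len(chunk_store), "unique chunks stored")
```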

Per EMC, performance scales linearly from a single node to a second or a fourth node. Note: architecturally more nodes can be added, with EMC indicating additional models will be available in the future.

In addition to DFR, other data services include writable snapshots and auto load balancing when new bricks are added. Note that in a normally running XtremIO, data is automatically spread across the nodes for both performance and resiliency. Data only needs to be moved or load-balanced in the background when new bricks are added. Instant copy snapshots are supported along with writable snapshots. Currently replication is done via external EMC products such as VPLEX or RecoverPoint, with statements of direction (SOD) for future enhancements.

    Additional attributes of XtremIO include:

• Each node or brick (X-Brick) has up to 25 SSD drives (16 in the Gen 1 hardware platform)
    • All bricks are involved in IO and storage processing
    • Positioned by EMC as Software Defined (no proprietary hardware)
    • Four x 8Gb Fibre Channel (8GFC) and four x 10Gb Ethernet (iSCSI) per brick
    • Bricks communicate with each other via a separate interconnect network or fabric
    • Bricks have redundant processors (think of as controllers) with multiple sockets and cores
    • 4KB random read IOP’s scale from 250K (one brick), 500K (two bricks) and 1 Million (four bricks). For 4K random write IOPS, the numbers are 100K, 200K and 400K across one, two and four brick configurations with low latency and all data services running (EMC supplied numbers)

In addition to 4K being a commonly used or referred to I/O size, it is also the same size as the new industry standard Advanced Format (AF). Today the standard storage block, page or sector size is 512 bytes; however, AF moves that to a larger 4,096 bytes (e.g. 4 KB) to align more closely with larger I/O sizes. Note that many HDDs and some SSDs today support AF and provide 512 byte emulation modes for compatibility.

    What is XtremSF?

VFCache is renamed XtremSF, with new models using eMLC as companions to the existing SLC PCIe cards and blade server mezzanine cards. EMC is emphasizing performance metrics that matter, including IOPS that are relative to customer workloads such as 4K, 8K or larger with a mix of reads and writes at low latency. In addition to IOPS with latency, I/O size and reads or writes for little data, EMC is also showing bandwidth or throughput numbers for big data and big bandwidth.

Model        Capacity   Read Transfer   Write Transfer   Random 4K Read   Random 4K Write   Random 4K Mixed   Read latency   Write latency
2200 (eMLC)  2.2 TB     2.47 GB/s       1.1 GB/s         343K IOPS        105K IOPS         206K IOPS         87 us          30 us
700 (SLC)    700 GB     2.9 GB/s        1.8 GB/s         712K IOPS        197K IOPS         411K IOPS         50 us          13 us
550 (eMLC)   550 GB     1.36 GB/s       512 MB/s         174K IOPS        49K IOPS          96K IOPS          87 us          37 us
350 (SLC)    350 GB     2.9 GB/s        756 MB/s         715K IOPS        95K IOPS          267K IOPS         50 us          13 us
Sampling of SLC and eMLC XtremSF PCIe SSD card performance characteristics (via EMC), including latency measured in microseconds (us). Note the performance differences due to some cards being based on SLC and others on eMLC.

    Additional attributes, some new and some previously announced include:

• 8x PCIe bandwidth lanes for performance
    • No IO impact to applications during garbage collection
    • Supports multi-core processor workloads with parallel design
    • Low CPU overhead by off-loading functions to PCIe card
    • Half-height, half-length PCIe form factor
• Wear-leveling for nand flash program/erase (P/E) cycle duration

Other storage, server and systems vendors including Cisco, Dell, HP, IBM, NetApp and Oracle offer various PCIe nand flash SSD cards as either target, cache or mixed mode. Manufacturers or suppliers of PCIe nand flash SSD cache and target cards include, among others, FusionIO, Intel, LSI, Micron, OCZ and Virident (which is partnered with Seagate).

    What is XtremSW?

Server-side flash software (not to be confused with FAST) for using XtremSF as a tier 0 (server-side) SSD cache or target. In target mode, XtremSF functions as high performance, persistent, local dedicated direct attached storage (DAS). Cache mode enables frequently accessed data to be kept close to the applications, off-loading underlying storage systems so they can be used more effectively. XtremSW complements back-end storage systems for data protection and persistence, along with investment protection of those assets.

    Storage I/O industry trends and perspectives

    What this all means

SSD is in your future; the questions are where, when and with what.

    Why not just use SSD (DRAM and or nand flash) everywhere?

Keep in mind that in the data center (traditional, virtual or cloud) everything is not the same. Thus the simple answer is that there is not enough of it available at a low enough price point (think closer to Hard Disk Drive (HDD) costs) to fit into customers' budgets. Sure, SSDs provide better performance and productivity benefits; however, while there is no such thing as a data or information recession, there are budget constraints.

Another reason why SSD can't simply be used everywhere is physical (and logical) constraints, such as the amount of memory a server can directly access, current DDR3 DIMMs (this could change with DDR4 according to Micron) only being able to address and work with DRAM, PCIe bus physical slot space, and operating system and hypervisor addressing limits, among others.

If SSD (DRAM and or nand flash) were priced low enough (e.g. much closer to HDDs) and broadly available, it could be used everywhere. SSD, including both DRAM and nand flash (SLC, MLC, eMLC, TLC, etc.), along with emerging Phase Change Memory (PCM), is at the convergence of traditional memory and data storage. While some storage (or server) professionals may not agree, storage is an extension of memory and thus part of the traditional server and storage memory hierarchy shown below.

    Storage I/O and cache locality of reference

This brings up the locality of reference topic, also shown in the following figure, where the best I/O is the one that does not have to be done. The second best is the one that can be done closest to the application for a given level of service. Locality of reference, which is important for servers and storage systems including caching, refers to how close frequently accessed data is to where it is needed. For some applications this means as much DRAM main memory in a server as possible, either clustered, with battery backup or other data persistency protection including onboard HDD or SSD (e.g. towards the top of the hierarchy).

    nand flash SSD and storage I/O location options

There are other applications where localized SSD (DRAM or nand flash) is a benefit to complement main memory, or as a persistent cache and target such as PCIe cards or SAS and SATA drives. Further down the stack, and for housing larger amounts of storage with performance (reads or writes, random or sequential) along with data services, is where all-SSD and hybrid (mix of SSD and HDD) systems fit. Even further down the stack, and for a broader segment, is where cloud storage services based on SSD such as those from Rackspace (Cloud Block Storage with SSD) and Amazon (Provisioned IOPS for EBS) have a play. Let's not forget about SSD in laptops, tablets and workstations; for example, I have a Samsung model 830 in my Lenovo X1.
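
As a generic illustration of the caching principle (a minimal sketch, not any particular product's algorithm), a simple least recently used (LRU) read cache keeps frequently accessed blocks close to the application while evicting cold ones:

```python
# Minimal LRU read-cache sketch illustrating locality of reference
# (a generic example, not any specific product's caching algorithm).
from collections import OrderedDict

class LRUReadCache(object):
    def __init__(self, capacity, backend_read):
        self.capacity = capacity
        self.backend_read = backend_read     # called on a miss to fetch from slower storage
        self.cache = OrderedDict()           # block_id -> data, ordered by recency

    def read(self, block_id):
        if block_id in self.cache:           # hit: the best I/O avoids the back end entirely
            data = self.cache.pop(block_id)
        else:                                # miss: go to the slower back-end storage
            data = self.backend_read(block_id)
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)   # evict the least recently used block
        self.cache[block_id] = data          # mark as most recently used
        return data

cache = LRUReadCache(capacity=1024, backend_read=lambda b: "contents of block %d" % b)
print(cache.read(42))
```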

    Storage I/O industry trends and perspectives

    Some general industry trends include:

    • SSD is like real estate, location can matter, a little can go a long way
    • SSD media options include DRAM and nand flash (SLC, MLC, eMLC, TLC)
    • Portfolios broadening with different products for various needs
    • SSD functionality in servers, appliances, storage systems and cloud services
• All-flash SSD arrays have not killed off all traditional or hybrid storage arrays
• Focus expanding from Just a Bunch Of SSD (JBOS) to enterprise-like functionality
    • Software needs hardware, hardware needs software, the two work better together
    • Comparing meaningful metrics that matter vs. industry marketing metrics

Related items about nand flash, SSD and metrics-related themes:

    Storage I/O industry trends and perspectives

    Some additional thoughts and perspectives

    Does this mean traditional storage arrays are now dead?

IMHO, no, there will be some cannibalization of existing storage systems by XtremIO within EMC customers or prospects if not managed, as well as via those from others. Keep in mind that EMC recently announced enhancements to their VMAX including entry-level options for service providers. Some of the new opportunities opened up will be where traditional all-SSD (flash or DRAM) systems have historically had success.

Traditional SSD and new dedicated SSD systems include Texas Memory Systems (TMS), bought by IBM in 2012, and the recently announced NetApp EF540 (and future FlashRay), along with startups SolidFire, Violin and Whiptail, among others. There will be environments where XtremIO may take care of all storage needs for a customer, or for a specific application or piece of it. Then there will be other situations where XtremIO will co-exist with EMC or other vendors' storage solutions as part of a data infrastructure.

    Storage I/O industry trends and perspectives

    Who will EMC be competing against with XtremIO?

Certainly the startups or smaller players such as Violin, Whiptail, Pure Storage and SolidFire, along with IBM/TMS and the NetApp EF540 (eventually FlashRay as well), among others.

There will also be some competition with other hybrid storage array vendors that have a mix of HDD and SSD. XtremIO will also compete in some situations on its own vs. other PCIe flash target and cache cards such as FusionIO; however, for the most part those will be up against XtremSF and XtremSW.

    Why the slow or “Directed Availability” rollout?

Why not? By taking a controlled rollout, selecting and qualifying customers for XtremIO, EMC gets to manage how the product goes into production and control how it is used to increase the chances of success. Unlike a startup that would be forced to try to put its new technology anywhere, EMC has the luxury of selecting where it goes, not to mention the need to avoid introducing a revenue prevention play for its other products.

Overall, I give an Atta boy and Atta girl to the EMC crew for a Product Defined Announcement (PDA) extending their flash portfolio to complement the various environment needs of their different customers and prospects. Now watch EMC, NetApp and others step up their flash dance moves to see who will out-flash the others in the eXtreme flash games, not to mention emerging software defined marketing moves (SDMM) ;).

    Ok, nuff said.

    Cheers Gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    VCE revisited, now & zen

    StorageIO Industry trends and perspectives image

    Yesterday VCE and their proud parents announced revenues had reached an annual run rate of a billion dollars. Today VCE announced some new products along with enhancements to others.

Before going forward though, let's go back for a moment to help set the stage and see where things might be going in the future. A little over three years ago, back in November 2009, VCE was born and initially named ACADIA by its proud parents (Cisco, EMC, Intel and VMware). Here is a post that I did back then.

Btw, the reference to Zen might cause some to think that I don't know how to properly refer to the Xen hypervisor. It is really a play on Robert Plant's album Now & Zen and its song Tall Cool One. For those not familiar, click on the link and listen (some will have deja vu, others might think it is new and cool) as it takes a look back as well as at the present, similar to VCE.

Robert Plant Now & Zen vs. Xen hypervisor

    On the other hand, this might prompt the question of when will Xen be available on a Vblock? For that I defer you to VCE CTO Trey Layton (@treylayton).

VCE stands for Virtual Computing Environment and was launched as a joint initiative including products and a company (since renamed from Acadia to VCE) to bring all the pieces together. As a company, VCE is based in Plano (Richardson), Texas, just north of downtown Dallas and down the road from EDS, or what is now left of it after the HP acquisition. The primary product of VCE has been the Vblock. The Vblock is a converged solution comprising components from their parents such as VMware virtualization and management software tools, Cisco servers, EMC storage and software tools and Intel processors.

Not surprisingly, there are many ex-EDS personnel at VCE along with some Cisco, EMC, VMware and many other people from other organizations in Plano as well as other cities. It is also interesting to note that unlike other youngsters that grow up and stay in touch with their parents via technology or social media tools, VCE is also more than a few miles (try hundreds to thousands) from the proud parents' headquarters in the San Jose, California and Boston areas.

As part of a momentum update, VCE and their parents (Cisco, EMC, VMware and Intel) announced an annual revenue run rate of a billion dollars in just three years. In addition, the proud parents and VCE announced that they have over 1,000 revenue-shipped and installed Vblock systems (also here) based on Cisco compute servers and EMC storage solutions.

    The VCE announcement consists of:

    • SAP HANA database application optimized Vblocks (two modes, 4 node and 8 node)
• VCE Vision management tools and middleware, or what I have referred to as Valueware
    • Entry level Vblock (100 and 200) with Cisco C servers and EMC (VNXe and VNX) storage
    • Performance and functionality enhancements to existing Vblock models 300 and 700
    • Statement of direction for more specialized Vblocks besides SAP HANA


Images courtesy of VCE.com, used with permission.

While VCE is known for the Vblock, described variously as converged, a stack, integrated, a data center in a box or a private cloud, among other descriptors, there is more to the story. VCE is addressing convergence of common IT building blocks for cloud, virtual, and traditional physical environments. Common core building blocks include servers (compute or processors), networking (IO and connectivity), storage, hardware, software and management tools, along with people, processes, metrics, policies and protocols.

    Storage I/O image of cloud and virtual IT building blocks

I like the visual image that VCE is using (see below) as it aligns with and has themes common to what I have discussed in the past.


Images courtesy of VCE.com, used with permission.

VCE Vision is software with APIs that collects information about Vblock hardware and software components to give insight to other tools and management frameworks. For example, there is a VMware vCenter plug-in and a vCenter Operations Manager adapter, which should not be a surprise. Customers will also be able to write to the Vision API to meet their custom needs. Let us watch and see what VCE does to add support for other software and management tools, along with gaining support from others.


Images courtesy of VCE.com, used with permission.

Vision is more than just an information source or feed for VMware vCenter or VASA or tools and frameworks from others. Vision is software developed by VCE that will enable insight and awareness into the Vblock and applications, while also confirming and giving status of the physical and logical component configuration. This provides the basis for setting up automated or programmatic remediation, such as determining what software or firmware to update based on different guidelines.


Images courtesy of VCE.com, used with permission.

Initially, VCE Vision provides an (information) inventory and a perspective on how those components are in compliance with firmware or software releases, so stay tuned. VCE is indicating that Vision will continue to evolve; after all, this is the V1.0 release, with future enhancements targeted towards taking action, controlling or active management.
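To illustrate the general idea of such a compliance check (a hypothetical sketch only; the component names, versions and data shapes below are made up and are not the actual VCE Vision API or schema), the exercise boils down to comparing a discovered inventory against an approved release baseline and flagging what needs updating:

# Hypothetical sketch only: comparing a discovered inventory of converged
# infrastructure components against an approved release baseline. The data
# shapes, names and versions are assumptions, not the actual Vision API.

# Example discovered inventory: component -> installed firmware/software version
inventory = {
    "compute-blade-1": "2.0.3",
    "fabric-switch-a": "5.1.0",
    "storage-array-1": "31.5.2",
}

# Example compliance matrix: component -> minimum approved version
compliance_matrix = {
    "compute-blade-1": "2.1.0",
    "fabric-switch-a": "5.1.0",
    "storage-array-1": "32.0.0",
}


def version_tuple(version):
    """Turn '2.1.0' into (2, 1, 0) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))


def out_of_compliance(inventory, matrix):
    """Return components whose installed version is below the approved baseline."""
    return {
        name: (installed, matrix[name])
        for name, installed in inventory.items()
        if name in matrix and version_tuple(installed) < version_tuple(matrix[name])
    }


for name, (installed, required) in out_of_compliance(inventory, compliance_matrix).items():
    print("%s: installed %s, approved baseline %s -> update candidate" %
          (name, installed, required))

The takeaway is simply that once the inventory and the baseline are machine readable, the step from reporting (V1.0) to automated or programmatic remediation becomes a natural follow-on.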

    StorageIO Industry trends and perspectives image

    Some trends, thoughts and perspectives

The industry adoption buzz is around software defined X, where X can be data center (SDDC), storage (SDS), networking (SDN), marketing (SDM) or other things. There is hype and noise around software defined, which in the case of some technologies is good. On the marketing hype side, this has led to some Software Defined BS (SDBS).

Thus, it was refreshing, at least in the briefing session I was involved in, to hear minimal focus on software defined and more on customer and IT business enablement with technology that is shipping today.

VCE Vision is a good example of adding value, hence what I refer to as Valueware, around converged components. For those vendors who have similar solutions, I urge them to streamline, simplify and more clearly articulate their value proposition if they have valueware.

Vendors including VCE continue to evolve their platform-based converged solutions by adding more valueware, management tools, interfaces, APIs, interoperability and support for more applications. The support for applications is also moving beyond simple line item ordering or part number SKUs to ease acquisition and purchasing. Solutions including VCE Vblock, NetApp FlexPod (which also uses Cisco compute servers), IBM PureSystems (PureFlex, etc.) and Dell vStart, among others, are extending their support and optimization for various software solutions. These software solutions range from SAP (including HANA), Microsoft (Exchange, SQL Server, SharePoint), Citrix desktop (VDI), Oracle, OpenStack and Hadoop MapReduce to other little-data, big-data and big-bandwidth applications, to name a few.

    Additional and related reading:
    Acadia VCE: VMware + Cisco + EMC = Virtual Computing Environment
    Cloud conversations: Public, Private, Hybrid what about Community Clouds?
    Cloud, virtualization, Storage I/O trends for 2013 and beyond
    Convergence: People, Processes, Policies and Products
    Hard product vs. soft product
    Hardware, Software, what about Valueware?
    Industry adoption vs. industry deployment, is there a difference?
    Many faces of storage hypervisor, virtual storage or storage virtualization
    The Human Face of Big Data, a Book Review
    Why VASA is important to have in your VMware CASA

Congratulations to VCE, along with their proud parents, family, friends and partners; now how long will it take to reach your next billion dollars in annual run rate revenue? Hopefully it won't be three years until the next VCE revisited, now & zen ;).

Disclosure: EMC and Cisco have been StorageIO clients, I am a VMware vExpert (which gets me a free beer after I pay for VMworld), and Intel has listed two of my books on their Recommended Reading List for Developers.

    Ok, nuff said, time to head off to vBeers over in Minneapolis.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Cloud conversations: Public, Private, Hybrid what about Community Clouds?

    StorageIO Industry trends and perspectives image

Have you heard of community clouds?

Cloud computing, including cloud storage and services delivered as products, solutions or services, offers different functionality and enables benefits for various types of organizations, entities or individuals.

    various types of clouds image

    Public clouds, private clouds and hybrids leveraging public and private continue to evolve in technology, reliability, security and functionality along with the awareness around them.

    IT professionals tell me they are interested in clouds however they have concerns.

Cloud concerns range from security, compliance, industry or government regulations and privacy to budgets, among others, with private, public or hybrid clouds. Peer, cooperative (co-op), consortium or community clouds can be a solution for those whose needs traditional public, private, hybrid, AaaS, SaaS, PaaS or IaaS offerings do not meet.

    various types, layers and services of clouds image

From a technology standpoint, there should not have to be much if any difference between a community cloud and a public, private or hybrid one. Instead, community clouds are more about thinking outside of the box, or outside of common cloud thinking per se. This means thinking beyond what others are talking about or doing, and looking at how cloud products, services and practices can be used in different ways to meet your concerns or requirements.

    cloud image

What's your take on clouds? Click here to cast your vote and see the results.

    Read more about community clouds including common questions in part II here.

    Ok, nuff said (for now)…

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    Tape is still alive, or at least in conversations and discussions

    StorageIO Industry trends and perspectives image

Depending on whom you talk to or ask, you will get different views and opinions, some of them stronger than others, on whether magnetic tape is dead or alive as a data storage medium. However, one aspect of tape that is alive is the discussion among those for it, against it, or those who simply see it as one of many data storage mediums and technologies whose role is changing.

Here is a link to an ongoing discussion over in one of the LinkedIn group forums (Backup & Recovery Professionals) titled About Tape and disk drives. Rest assured, there is plenty of FUD and hype on both sides of the tape is dead (or alive) arguments, not very different from the disk is dead vs. SSD or cloud arguments. After all, not everything is the same in data centers, clouds and information factories.

Fwiw, I removed tape from my environment about 8 years ago, or I should say directly, as some of my cloud providers may in fact be using tape in various ways that I do not see; nor do I care one way or the other as long as my data is safe, secure, protected and SLAs are met. Likewise, I consult and advise for organizations where tape still exists yet its role is changing, the same as with those using disk and cloud.

    Storage I/O data center image

I am not ready to adopt the singular view that tape is dead, as I know too many environments that are still using it; however, I agree that its role is changing, and thus I am not part of the tape cheerleading camp.

On the other hand, I am a fan of using disk-based data protection along with cloud in new and creative ways (including for my own use) as part of modernizing data protection. Although I see disk as having a very bright and important future beyond what it is being used for now, at least today I am not ready to join the chants that tape is dead either.

    StorageIO Industry trends and perspectives image

    Does that mean I can’t decide or don’t want to pick a side? NO

It means that I do not have to, nor should anyone have to, choose a side. Instead, look at your options: what are you trying to do, and how can you leverage different things, techniques and tools to maximize your return on innovation? If that means that tape is being phased out of your organization, good for you. If that means there is a new or different role for tape in your organization, co-existing with disk, then good for you.

If somebody tells you that tape sucks and that you are dumb and stupid for using it, without giving any informed basis for those comments, then call them dumb and stupid, requesting they come back when they can learn more about your environment, needs and requirements, ready to have an informed discussion on how to move forward.

Likewise, if you can make an informed value proposition on why and how to migrate to new ways of modernizing data protection without having to stoop to the tape is dead argument, or cite some research or whatever, good for you; start telling others about it.

    StorageIO Industry trends and perspectives image

Otoh, if you need to use FUD and hype on why tape is dead, why it sucks or why it is bad, at least come up with some new and relevant facts, third-party research, arguments or value propositions.

    You can read more about tape and its changing role at tapeisalive.com or Tapesummit.com.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

    In the data center or information factory, not everything is the same

    StorageIO Industry trends and perspectives image

Sometimes what should be understood, what is common sense, or what you think everybody should know still needs to be stated. After all, there could be somebody who, for various reasons, does not know what some assume is common sense or what others know. At times, there is simply the need to restate or be reminded of what should be known.

    Storage I/O data center image

Consequently, in the data center or information factory, whether traditional, virtual, converged, private, hybrid or public cloud, everything is not the same. When I say not everything is the same, I mean that different applications have various service level objectives (SLOs) and service level agreements (SLAs). These are based on different characteristics, from performance, availability, reliability, responsiveness and cost to security and privacy, among others. Likewise, there are different sizes and types of organizations with various requirements, from enterprise to SMB, ROBO and SOHO, business or government, education or research.

    Various levels of HA, BC and DR

There are also different threat risks for various applications or information services within an organization, or across different industry sectors. Thus there are various needs for meeting availability SLAs, recovery time objectives (RTOs) and recovery point objectives (RPOs) for data protection, ranging from backup/restore to high availability (HA), business continuance (BC), disaster recovery (DR) and archiving. Let us not forget about logical and physical security of information, assets and people, processes and intellectual property.
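As a simple illustration of how RPO and RTO turn into numbers that can be checked (all values below are made-up examples, not recommendations), the worst-case data loss for periodic protection is roughly one protection interval, which is compared against the stated RPO, while the estimated restore time is compared against the RTO:

# Illustrative only: sanity-check a protection schedule against RPO/RTO
# objectives. All tiers and values are made-up examples, not recommendations.

def meets_rpo(backup_interval_hours, rpo_hours):
    """Worst-case data loss for periodic protection is roughly one interval."""
    return backup_interval_hours <= rpo_hours

def meets_rto(restore_hours, rto_hours):
    """Estimated time to restore service must fit within the RTO."""
    return restore_hours <= rto_hours

# Example application tiers with different service level objectives (hours)
tiers = {
    "tier-1 transactional": {"rpo": 0.25, "rto": 1,  "interval": 0.25, "restore": 0.5},
    "tier-2 business":      {"rpo": 4,    "rto": 8,  "interval": 6,    "restore": 4},
    "tier-3 archive":       {"rpo": 24,   "rto": 72, "interval": 24,   "restore": 48},
}

for name, t in tiers.items():
    ok = meets_rpo(t["interval"], t["rpo"]) and meets_rto(t["restore"], t["rto"])
    print("%s: %s" % (name, "meets objectives" if ok else "gap to address"))
# In this made-up example, tier-2 misses its 4 hour RPO because protection only
# runs every 6 hours, reinforcing that not everything is (or should be) the same.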

    Storage IO RTO and RPO image

Some data centers or information factories are compute intensive while others are data centric; some are IO or activity intensive with a mix of compute and storage. On the other hand, some data centers, such as a communications hub, may be network centric with very little data sticking around or being stored.

    SLA and SLO image

Even within a data center or information factory, various applications will have different profiles and protection requirements for big data and little data. There can also be a mix of old legacy applications and new systems developed in-house, purchased, open-source-based or accessed as a service. The servers and storage may be software defined (a new buzzword that has already jumped the shark), virtualized or operated in a private, hybrid or community cloud if not using a public service.

    Here are some related posts tied to everything is not the same:
    Optimize Data Storage for Performance and Capacity
    Is SSD only for performance?
    Cloud conversations: Gaining cloud confidence from insights into AWS outages
    Data Center Infrastructure Management (DCIM) and IRM
    Saving Money with Green IT: Time To Invest In Information Factories
    Everything Is Not Equal in the Datacenter, Part 1
    Everything Is Not Equal in the Datacenter, Part 2
    Everything Is Not Equal in the Datacenter, Part 3

    Storage I/O data center image

Thus, not all things are the same in the data center or information factory, both those under traditional management paradigms and those supporting public, private, hybrid or community clouds.

    Ok, nuff said.

    Cheers gs

    Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

    twitter @storageio

    All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved