Where has the FCoE hype and FUD gone? (with poll)

Storage I/O cloud virtual and big data perspectives

A couple of years ago I did this post asking Is FCoE Struggling to Gain Traction, or on a normal adoption course?

Fast forward to today: has anybody else noticed that there seems to be less hype and FUD around Fibre Channel over Ethernet (FCoE) than a year, two or three ago?

Does this mean that FCoE, as the FUD from its detractors predicted, is in fact stillborn, with no adoption, no deployment, and dead on arrival?

Does this mean that FCoE as its proponents have said is still maturing, quietly finding adoption and deployment where it fits?

Does this mean that FCoE, like its predecessors Fibre Channel and Ethernet, is still evolving, expanding from early adopters to a mature technology?

Does this mean that FCoE is simply forgotten with software defined networking (SDN) having over-shadowed it?

Does this mean that FCoE has finally lost out, and that iSCSI has finally stepped up and is living up to what it was hyped to do ten years ago?

Does this mean that FC itself at either 8GFC or 16GFC is holding its own for now?

Does this mean that InfiniBand is on the rebound?

Does this mean that FCoE is simply not a fun, interesting, or shiny new technology, with vendors not spending marketing money on it, and thus people not talking, tweeting or blogging about it?

Does this mean that those who were either proponents pitching it or detractors despising it have found other things to talk about, from SDN to OpenFlow to IOV to Software Defined Storage (whatever, or whoever's, definition you subscribe to) to cloud, big or little data, and the list goes on?

I continue to hear of, or talk with, customer organizations deploying FCoE in addition to iSCSI, FC, NAS and other means of accessing storage for cloud, virtual and physical environments.

Likewise I see some vendor discussions occurring, not to mention what gets picked up via Google Alerts.

However, in general, the rhetoric both for and against, the hype and the FUD, seems to have subsided, at least for now.

So what gives, what’s your take on FCoE hype and FUD?

Cast your vote and see results here.

 

Ok, nuff said

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

February 2013 Server and StorageIO Update Newsletter

StorageIO Newsletter Image
February 2013 Newsletter

Welcome to the February 2013 edition of the StorageIO Update newsletter, including a new format and added content.

You can access this newsletter via various social media venues (some are shown below) in addition to the StorageIO web sites and subscriptions.

Click on the following links to view the February 2013 edition as an HTML (sent via email) version, or as a PDF version.

Visit the newsletter page to view previous editions of the StorageIO Update.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

Nuff said for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Vote for top 2013 vblogs, thanks for your continued support

Eric Siebert (@Ericsiebert), author of the book Maximum vSphere (get your copy on Amazon.com here), has opened up voting for the annual top vBlog over at his site (vSphere-land).

While there is a focus on VMware and virtualization blogs, there are also other categories such as storage, scripting and podcasting, as well as an independent category for non-vendors and VARs.

VMware vExpert

It is an honor to be included in the polling along with my many 2012 fellow vExperts on the list.

Last year I made Eric's 2012 top 50 list, as well as appearing in the storage and some other categories in those rankings (thanks to all who voted last year).

This year I forgot to nominate myself (it is a self-nomination process), so while I am not in the storage, independent blogger, or podcast sub-categories, I am included in the general voting, having made the top 50 list last year (#46).

A summary of Eric's recommended voting criteria (vs. basic popularity):

  • Longevity: How long has somebody been blogging and posting vs. starting and stopping?
  • Length: Short quick snippet posts vs. more original content; time and effort vs. just posting.
  • Frequency: How often do posts appear; lots of short pieces vs. regular longer ones vs. an occasional post.
  • Quality: What's in the post: original ideas, tips, information, insight, analysis, thoughtful perspectives vs. reposting or reporting what others are doing.

Voting is now open (click on the vote image) and closes on March 1, 2013, so if you read this or any of my other posts, comments and content, or listen to our new podcasts at storageio.tv (also on iTunes), please consider casting a vote.

Thank you in advance for your continued support, and watch for more posts, comments, perspectives and podcasts about data and information infrastructure topics, trends, tools and techniques, including servers, storage, IO networking, cloud, virtualization, backup/recovery, BC, DR and data protection, along with big and little data (among other things).

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


NetApp EF540, something familiar, something new

StorageIO Industry trends and perspectives image

NetApp announced the other day a new all-nand-flash solid-state device (SSD) storage system called the EF540, which is available now. The EF540 has some things new and cool, along with some things familiar, tried, true and proven.

What is new is that the EF540 is an all-nand-flash multi-level cell (MLC) SSD storage system. What is old is that the EF540 is based on the NetApp E-Series (read more here and here) and SANtricity software, with hundreds of thousands of installed systems. As a refresher, the E-Series are the storage system technologies and solutions obtained via the Engenio acquisition from LSI in 2011.

Image of NetApp EF540 via ntapgeek.com
Image via www.ntapgeek.com

The EF540 expands the NetApp SSD flash portfolio, which includes products such as FlashCache (read cache, aka PAM) for controllers in ONTAP-based storage systems. Other items in the NetApp flash portfolio include FlashPool SSD drives for persistent read and write storage in ONTAP-based systems. Complementing FlashCache and FlashPool is FlashAccel, a server-side PCIe caching card and software. NetApp claims to have shipped (for revenue) 36PB of flash, complementing over 3 exabytes (EB) of storage, while continuing to ship a large number of SAS and SATA HDDs.

NetApp also previewed its future FlashRay storage system, which should appear in beta later in 2013 with general availability in 2014.

In addition to SSD and flash related announcements, NetApp also announced enhancements to its ONTAP FAS/V6200 series including the FAS/V6220, FAS/V6250 and FAS/V6290.

Some characteristics of the NetApp EF540 and SANtricity include:

  • Two models with 12 or 24 x 6Gb/s SAS 800GB MLC SSD devices
  • Up to 9.6TB or 19.2TB of physical storage in a 2U (3.5 inch) tall enclosure
  • Dual controllers for redundancy, load balancing and availability
  • IOPS performance of over 300,000 4KB random 100% reads at under 1ms
  • 6GB/sec performance on 512KB sequential reads, 5.5GB/sec on random reads
  • Multiple RAID levels (0, 1, 10, 3, 5, 6) and flexible group sizes
  • 12GB of DRAM cache memory in each controller (mirrored)
  • 4 x 8GFC host server-side ports per controller
  • Optional expansion host ports (6Gb SAS, 8GFC, 10Gb iSCSI, 40Gb IBA/SRP)
  • Snapshots and replication (synchronous and asynchronous), including to HDD-based systems
  • Usable for traditional IOPS-intensive little-data, or bandwidth-oriented big-data
  • Proactive SSD wear monitoring and notification alerts
  • Utilizes SANtricity version 10.84
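As a quick sanity check, the capacity and bandwidth figures in the list above can be reproduced with simple arithmetic. The sketch below is illustrative only: the drive counts, 800GB device size and IOPS figure come from the specs above, while the helper functions are my own, not anything from SANtricity.

```python
# Illustrative arithmetic based on the EF540 specs listed above.
# These are back-of-the-envelope figures, not vendor-published formulas.

SSD_CAPACITY_GB = 800  # 800GB MLC SSD devices

def raw_capacity_tb(num_drives: int) -> float:
    """Raw (pre-RAID) capacity in TB for a given drive count."""
    return num_drives * SSD_CAPACITY_GB / 1000

def iops_to_bandwidth_mb(iops: int, block_kb: int) -> float:
    """Convert an IOPS figure at a given block size into MB/sec."""
    return iops * block_kb / 1024

# Two models: 12 or 24 drives
print(raw_capacity_tb(12))   # 9.6 TB (12-drive model)
print(raw_capacity_tb(24))   # 19.2 TB (24-drive model)

# 300,000 x 4KB random reads is only about 1.2 GB/sec of small-block
# traffic, well under the 6 GB/sec large-block sequential figure, which
# is why IOPS, bandwidth and latency need to be compared separately.
print(iops_to_bandwidth_mb(300_000, 4))  # ~1171.9 MB/sec
```

This also illustrates the point made elsewhere in the post: a system can post big IOPS numbers while moving modest bandwidth, so compare latency, bandwidth and IOPS together rather than any one in isolation.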

Poll: Are large storage arrays' days numbered?

EMC and NetApp (along with other vendors) continue to sell large numbers of HDDs as well as large amounts of SSD. Both EMC and NetApp are taking similar approaches of leveraging PCIe flash cards as cache, adding software functionality to complement underlying storage systems. The benefit is that the cache approach is less disruptive for many environments while allowing improved return on investment (ROI) on existing assets.

|                                   | EMC       | NetApp          |
|-----------------------------------|-----------|-----------------|
| Storage systems with HDD and SSD  | VMAX, VNX | FAS/V, E-Series |
| Storage systems with SSD cache    | FastCache | FlashCache      |
| All-SSD based storage             | VMAX, VNX | EF540           |
| All-new SSD system in development | Project X | FlashRay        |
| Server-side PCIe SSD cache        | VFCache   | FlashAccel      |
| Partner ecosystems                | Yes       | Yes             |

The best IO is the one that you do not have to do; however, the next best are those that have the least cost or impact, which is where SSD comes into play. SSD is like real estate in that location matters in terms of providing benefit, as well as how much space or capacity is needed.

What does this all mean?
The NetApp EF540, based on the E-Series storage system architecture, is like one of its primary competitors (e.g. the EMC VNX, also available as an all-flash model). The similarity is that both have been competitors for over a decade, each with hundreds of thousands of installed systems. Both also continue to evolve their code bases, leveraging new hardware and software functionality. These improvements have resulted in better performance, availability, capacity, energy effectiveness and cost reduction.

What's your take on RAID still being relevant?

From a performance perspective, there are plenty of public workloads and benchmarks, including Microsoft ESRP and SPC among others, to confirm its performance. Watch for NetApp to release EF540 SPC results, given their history of doing so with other E-Series based systems. With those or other results, compare and contrast with other solutions, looking not just at IOPS or MB/sec (bandwidth), but also latency, functionality and cost.

What does the EF540 compete with?
The EF540 competes with all-flash SSD solutions (Violin, SolidFire, Pure Storage, Whiptail, Kaminario, IBM/TMS, and the upcoming EMC Project "X" (aka XtremIO)) among others. Some of those systems use general-purpose servers combined with SSD drives or PCIe cards along with management software, while others leverage customized platforms with software. To a lesser extent, competition will also come from mixed-mode SSD and HDD solutions, along with some PCIe target SSD cards in some situations.

What to watch and look for:
It will be interesting to view and contrast public price-performance results using SPC or Microsoft ESRP among others to see how the EF540 compares with other storage-based and SSD systems, looking beyond the number of IOPS to latency, bandwidth, feature functionality and associated costs.

Given that the NetApp E-Series are OEMed or sold by third parties, let's see if something similar or identical to the EF540 appears at any of those or new partners. This includes traditional general-purpose and little-data environments, along with cloud, managed service provider, high-performance compute and high-productivity compute (HPC), supercomputer (SC), big-data and big-bandwidth environments, among others.

Poll: Have SSDs been successful in traditional storage systems and arrays?

The EF540 could also appear as a storage or IO accelerator for large scale-out, clustered, grid and object storage systems for metadata, indices, and key-value stores, among other uses, either direct attached to servers, or shared via iSCSI, SAS, FC and InfiniBand (IBA) SCSI RDMA Protocol (SRP).

Keep an eye on how the startups that have been primarily Just a Bunch Of SSD (JBOS) in a box start talking about adding new features and functionality such as snapshots, replication or price reductions. Also keep an eye and ear open for what EMC does with Project "X", along with NetApp FlashRay, among other improvements.

For NetApp customers, prospects, partners, E-Series OEMs and their customers with the need for IO consolidation or performance optimization for big-data, little-data and related applications, the EF540 opens up new opportunities and should be good news. For EMC and other competitors, there is now new competition, which also signals an expanding market with new opportunities in adjacent areas for growth. This further signals the need for diverse SSD portfolios and product options to meet different customer application needs, along with increased functionality vs. lowest cost for high-capacity fast nand SSD storage.


Disclosure: NetApp, Engenio (when LSI), EMC and TMS (now IBM) have been clients of StorageIO.

Ok, nuff said

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


Speaking of SSDs (with poll)

StorageIO Industry trends and perspectives image

In the spirit of solid-state devices (SSD), including DRAM and nand flash, not to mention emerging phase change memory (PCM) among others that help boost productivity and cut latency, here are a couple of quick notes and links.

Here are some more pieces to have a quick look at:
SSD & Real Estate: Location, Location, Location matters
SSD Is in Your Future: Where, When & With What Are the Questions
Storage & IO trends for 2013 and beyond

SSD, flash and DRAM, DejaVu or something new?

Storage I/O ssd timeline image

Is SSD only for performance?
Have SSDs been unsuccessful with storage arrays (with poll)?
End the Hardware Numbers Game

Desum poll planned SSD use image
Image via 21cit (desum): The SSD hardware numbers game

What's your take on SSD in storage arrays? Cast your vote and see the results here.

Also check out here what Micron has in mind for merging nand flash with the DDR4 (e.g. DRAM socket) memory bus for servers in a year or two.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


Cloud conversations: Public, Private, Hybrid and Community Clouds? (Part II)

StorageIO Industry trends and perspectives image

This is the second of a two part series, read part I here.

Common community cloud conversation questions include among others:

Who defines the standards for community clouds?
The members or participants, or whoever they hire or get to volunteer to do it.

Who pays for the community cloud?
The members or participants do; think of a co-op or other resource-sharing consortium with multi-tenant (shared) capabilities to isolate members and keep them, along with what they are doing, separate.

cloud image

Who are community clouds for, when to use them?
If you cannot justify a private cloud for yourself, or if you need more resiliency than can be provided by your own site and you know of a peer, partner, member or other party with common needs, a community cloud could be a fit. Another variation is that you are in an industry, agency or district where pooling resources while operating separately has advantages or is already being done. These range from medical and healthcare to education, along with various small and medium businesses (SMBs) that do not want to or cannot use a public facility for various reasons.

What technology is needed for building a community cloud?
Similar to deploying a public or private cloud, you will need various hard products including servers, storage, networking, and management software tools for provisioning, orchestration, showback or chargeback, multi-tenancy, security and authentication, and data protection (backup, BC, DR, HA), along with various middleware and applications.

Storage I/O cloud building block image

What are community clouds used for?
Almost anything, granted there are limits and boundaries based on tools, technologies, security and access controls, among other constraints. Applications can range from big data to little data and most if not all points in between. On the other hand, if they are not safe or secure enough for your needs, then use a private cloud or whatever it is that you are currently using.

What about community cloud security, privacy and compliance regulations?
Those are topics and reasons why like-minded or similarly affected groups might be able to leverage a community cloud. Being like-minded or similarly affected, groups, labs, schools, businesses, entities, agencies, districts, or other organizations that are under common mandates for security, compliance, privacy or other regulations can work together, yet keep their interests separate. The tools or techniques for achieving those goals and objectives will depend on those who offer services to those entities now.

data centers, information factories and clouds

Where can you get a community cloud?
Look around using Google or your favorite search tool; also watch the comments section to see how long it takes someone to jump in and say how he or she can help. Also talk with solution providers, business partners and VARs. Note that they may not know the term or phrase per se, so here is what to tell them: tell them that you would like to deploy a private cloud at some place that will then be used in a multi-tenant way to safely and securely support different members of your consortium.

For those who have been around long enough, you can also just tell them that you want to do something like the co-op or consortium time-sharing systems of past generations, and they may know what you are looking for. If, however, they look at you with a blank deer-in-the-headlights stare, eyes glazed over, just tell them it is a new leading-edge, software-defined, revolutionary thing (add some superlatives if you feel inclined) and then they might get excited. If they still do not know what to do or how to help you, have them get in touch with me and I will explain it to them, or I will put you in touch with those who can help.

data centers, information factories and clouds

Where do you put a community cloud?
You could deploy them in your own facility, other member’s locations or both for resiliency. You could also use a safe secure co-lo facility already being used for other purposes.

Do community clouds have organizers?
Perhaps, however they are probably more along the lines of a coordinator, administrator, manager or controller, as opposed to a community organizer per se. In other words, do not confuse a community cloud with a cloud community organized, aligned and activated for some particular cause. On the other hand, maybe there is a value proposition for some cloud activist to get organized and take up the cause of community clouds in your area of interest ;).

data centers, information factories and clouds

Are community clouds more of a concept vs. a product?
If you have figured out that a community or peer cloud is nothing more than a different way of deploying, using and managing a combination of private, public and hybrid and putting a marketing name on them, congratulations, you are now thinking outside of the box, or outside of the usual cloud conversations.

What about public cloud services for selected audiences, such as Amazon's GovCloud? On one hand, I guess you could call or think of that as a semi-private public cloud, or a semi-public private cloud, or, if you like superlatives, an uber-galactic hybrid community cloud.

How you go about building, deploying and managing your community, co-op, consortium, agency, district or peer cloud comes down to how you leverage various hard and software products. The result will be your return on innovation (the new ROI) addressing various needs and concerns, also known as valueware. Those results should help close gaps and leverage clouds in general as a resource vs. simply as a tool, technology or technique.

Ok, nuff said…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


January 2013 Server and StorageIO Update Newsletter

StorageIO Newsletter Image
January 2013 Newsletter

Welcome to the January 2013 edition of the StorageIO Update newsletter, including a new format and added content.

You can access this newsletter via various social media venues (some are shown below) in addition to the StorageIO web sites and subscriptions.

Click on the following links to view the January 2013 edition as an HTML (sent via email) version, or as a PDF version.

Visit the newsletter page to view previous editions of the StorageIO Update.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

Nuff said for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio


Thanks for viewing StorageIO content and top 2012 viewed posts

StorageIO industry trends cloud, virtualization and big data

2012 was a busy year (it was our 7th year in business), with plenty of activity on StorageIOblog.com as well as on the various syndicate and other sites that pick up our content feed (https://storageioblog.com/RSSfull.xml).

Excluding traditional media venues, columns, articles, webcasts and web site visits (StorageIO.com and StorageIO.TV), StorageIO-generated content including posts and podcasts has reached over 50,000 views per month (and growing) across StorageIOblog.com and our partner or syndicated sites. Including both public and private, there were about four dozen in-person events and activities, not counting attending conferences or vendor briefing sessions, along with plenty of industry commentary. On the Twitter front there was plenty of activity as well, closing in on 7,000 followers.

Thank you to everyone who has visited the sites where you will find StorageIO-generated content, along with industry trends and perspective comments, articles, tips, webinars, live in-person events and other activities.

In terms of what was popular on the StorageIOblog.com site, here are the top 20 viewed posts in alphabetical order.

Amazon cloud storage options enhanced with Glacier
Announcing SAS SANs for Dummies book, LSI edition
Are large storage arrays dead at the hands of SSD?
AWS (Amazon) storage gateway, first, second and third impressions
EMC VFCache respinning SSD and intelligent caching
Hard product vs. soft product
How much SSD do you need vs. want?
Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO its off to work they go
Is SSD dead? No, however some vendors might be
IT and storage economics 101, supply and demand
More storage and IO metrics that matter
NAD recommends Oracle discontinue certain Exadata performance claims
New Seagate Momentus XT Hybrid drive (SSD and HDD)
PureSystems, something old, something new, something from big blue
Researchers and marketers don't agree on future of nand flash SSD
Should Everything Be Virtualized?
SSD, flash and DRAM, DejaVu or something new?
What is the best kind of IO? The one you do not have to do
Why FC and FCoE vendors get beat up over bandwidth?
Why SSD based arrays and storage appliances can be a good idea

Moving beyond the top twenty most-read posts on the StorageIOblog.com site, the list quickly expands to include more popular posts around clouds, virtualization and data protection modernization (backup/restore, HA, BC, DR, archiving), general IT/ICT industry trends and related themes.

I would like to thank the current StorageIOblog.com site sponsors Solarwinds (management tools including response time monitoring for physical and virtual servers) and Veeam (VMware and Hyper-V virtual server backup and data protection management tools) for their support.

Thanks again to everyone for reading and following these and other posts, as well as for your continued support; watch for more content on the above and other related and new topics or themes throughout 2013.

Btw, if you are into Facebook, you can give StorageIO a like at facebook.com/storageio (thanks in advance) along with viewing our newsletter here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


Summary, EMC VMAX 10K, high-end storage systems stayin alive

StorageIO industry trends cloud, virtualization and big data

This is a follow-up companion post to the larger industry trends and perspectives series from earlier today (Part I, Part II and Part III) pertaining to today's VMAX 10K enhancement and other announcements by EMC, and the industry myth of whether large storage arrays or systems are dead.

The enhanced VMAX 10K scales from a couple of dozen up to 1,560 HDDs (or a mix of HDDs and SSDs). There can be a mix of 2.5 inch and 3.5 inch devices in different drive enclosures (DAEs). There can be 25 SAS-based 2.5 inch drives (HDD or SSD) in a 2U enclosure (see figure with cover panels removed), or 15 3.5 inch drives (HDD or SSD) in a 3U enclosure. As mentioned, a system can be all 2.5 inch (including the vault drives) for up to 1,200 devices, all 3.5 inch for up to 960 devices, or a mix of 2.5 inch (2U DAE) and 3.5 inch (3U DAE) for a total of up to 1,560 drives.
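The drive maximums quoted above line up with the per-enclosure figures; as an illustration, with the caveat that the DAE counts used below are inferred for the example rather than taken from EMC documentation:

```python
# Illustrative check of the VMAX 10K drive counts mentioned above, based on
# the per-enclosure figures given (25 x 2.5in per 2U DAE, 15 x 3.5in per
# 3U DAE). The specific DAE counts are assumptions for the example only.

DRIVES_PER_2U_DAE = 25  # 2.5 inch drives per 2U enclosure
DRIVES_PER_3U_DAE = 15  # 3.5 inch drives per 3U enclosure

def total_drives(num_2u: int, num_3u: int) -> int:
    """Total drive count for a given mix of 2U and 3U enclosures."""
    return num_2u * DRIVES_PER_2U_DAE + num_3u * DRIVES_PER_3U_DAE

print(total_drives(48, 0))   # 1200 -> matches the all-2.5in maximum
print(total_drives(0, 64))   # 960  -> matches the all-3.5in maximum
print(total_drives(48, 24))  # 1560 -> one possible mix reaching the stated max
```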

Image of EMC 2U and 3U DAE for VMAX 10K via EMC
Image courtesy EMC

Note carefully in the figure (courtesy of EMC) that the 2U 2.5 inch DAE and 3U 3.5 inch DAE, along with the VMAX 10K, are actually mounted in a third-party cabinet or rack, support for which is part of today's announcement.

Also note that the DAEs are still EMC; however, as part of today's announcement, certain third-party cabinets or enclosures, such as might be found in a collocation (colo) or other data center environment, can be used instead of EMC cabinets. The VMAX 10K can, however, like the VMAX 20K and 40K, support external virtualized storage, similar to what has been available from HDS (VSP/USP) and HP-branded Hitachi-equivalent storage, or using NetApp V-Series or IBM V7000 in a similar way.

As mentioned in one of the other posts, there are various software functionality bundles available. Note that SRDF is a separate license from the bundles to give customers options including RecoverPoint.

Check out the three post industry trends and perspectives posts here, here and here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio


EMC VMAX 10K, looks like high-end storage systems are still alive (part III)

StorageIO industry trends cloud, virtualization and big data

This is the third in a multi-part series of posts (read first post here and second post here) looking at what else EMC announced today in addition to an enhanced VMAX 10K and dispelling the myth that large storage arrays are dead (or at least for now).

In addition to the VMAX 10K-specific updates, EMC also announced the release of a new version of its Enginuity storage software (firmware, storage operating system). Enginuity is supported across all VMAX platforms and features the following:

  • Replication enhancements include TimeFinder clone refresh, restore and four-site SRDF for the VMAX 10K, along with thick or thin support. This capability enables functionality across VMAX 10K, 20K or 40K using synchronous or asynchronous replication, and extends the earlier three-site support to four-site and mixed modes. Note that the larger VMAX systems already had the extended replication feature support, with the VMAX 10K now on par with them. Note also that the VMAX can be enhanced with VPLEX in front of the storage systems (local or wide area, in-region HA and out-of-region DR) and RecoverPoint behind the systems, supporting bi-synchronous (two-way), synchronous and asynchronous data protection (CDP, replication, snapshots).
  • Unisphere for VMAX 1.5 manages DMX, along with VMware VAAI UNMAP and space reclamation, block zero and hardware clone enhancements, IPv6, Microsoft Server 2012 support and VFCache 1.5.
  • Support for mix of 2.5 inch and 3.5 inch DAEs (disk array enclosures) along with new SAS drive support (high-performance and high-capacity, and various flash-based SSD or EFD).
  • The addition of a fourth dynamic tier within FAST for supporting third-party virtualized storage, along with compression of inactive, cold or stale data (manual or automatic) with a 2:1 data footprint reduction (DFR) ratio. Note that EMC was one of the early vendors to put compression into its storage systems on a block LUN basis in the CLARiiON (now VNX), along with NetApp and IBM (via its Storwize acquisition). The new fourth tier also means that third-party storage does not have to be the lowest tier in terms of performance or functionality.
  • Federated Tiered Storage (FTS) is now available on all EMC block storage systems including those with third-party storage attached in virtualization mode (e.g. VMAX). In addition to supporting tiering across its own products, and those of other vendors that have been virtualized when attached to a VMAX, ANSI T10 Data Integrity Field (DIF) is also supported. Read more about T10 DIF here, and here.
  • Front-end performance enhancements with host I/O limits (Quality of Service or QoS) for multi-tenant and cloud environments to balance or prioritize IO across ports and users. This feature can balance based on thresholds for IOPS, bandwidth or both from the VMAX. Note that this feature is independent of any operating system-based tool, utility, pathing driver or feature such as VMware DRS and Storage I/O Control. Storage groups are created and mapped to specific host ports on the VMAX, with the QoS performance thresholds applied to meet specific service level requirements or objectives.
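Host I/O limits on the VMAX are configured against storage groups rather than written as code. Purely to illustrate the underlying idea, the sketch below shows a generic token-bucket admission check that caps a workload at a configured IOPS threshold; the names and logic are mine and are not EMC's Enginuity implementation.

```python
import time

class IopsLimiter:
    """Toy token-bucket rate limiter illustrating the general idea behind
    host I/O limits (QoS): admit I/Os up to a configured IOPS threshold.
    A generic illustration only, not how Enginuity implements the feature."""

    def __init__(self, max_iops: float):
        self.max_iops = max_iops
        self.tokens = max_iops  # allow a burst of up to one second's worth
        self.last = time.monotonic()

    def try_admit(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the limit.
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the threshold; the I/O would be queued or delayed

limiter = IopsLimiter(max_iops=5000)
admitted = sum(1 for _ in range(10_000) if limiter.try_admit())
print(admitted)  # roughly 5000 in a tight loop (mostly the burst allowance)
```

The same shape of check could be keyed on bytes instead of I/O count to enforce a bandwidth threshold, or on both, which mirrors the IOPS-and/or-bandwidth thresholds described above.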

For discussion (or entertainment) purposes, how about the question of whether Enginuity qualifies or can be considered a storage hypervisor (or storage virtualization, or virtual storage)? After all, the VMAX is now capable of having third-party storage from other vendors attached to it, something that HDS has done for many years now. For those who feel a storage hypervisor, virtual storage or storage virtualization requires software running on Intel or other commodity processors, guess what the VMAX uses for its CPUs (granted, you cannot simply download the Enginuity software and run it on a Dell, HP, IBM, Oracle or SuperMicro server).

I am guessing some of EMC's competitors and their surrogates, or others who like to play the storage hypervisor card game, will be quick to tell you it is not, based on various reasons or product comparisons; however, you be the judge.

 

Back to the question from part one in this series: are traditional high-end storage arrays dead or dying?

IMHO, as mentioned, not yet.

Granted, like other technologies that have been declared dead or dying yet remain in use (technology zombies), they continue to be enhanced, find new customers, or are used by existing customers in new ways; their roles are evolving, thus they are still alive.

For some environments as has been the case over the past decade or so, there will be a continued migration from large legacy enterprise class storage systems to midrange or modular storage arrays with a mix of SSD and HDD. Thus, watch out for having a death grip not letting go of the past, while being careful about flying blind into the future. Do not be scared, be ready, do your homework with clouds, virtualization and traditional physical resources.

Likewise, there will be the continued migration for some from traditional mid-range class storage arrays to all flash-based appliances. Yet others will continue to leverage all the above in different roles aligned to where their specific features best serve the applications and needs of an organization.

In the case of high-end storage systems such as the EMC VMAX (formerly known as DMX, and Symmetrix before that) based on its Enginuity software, the hardware platforms will continue to evolve, as will the software functionality. This means that these systems will evolve to handle more workloads, as well as move into new environments, from service providers to mid-range organizations where such systems were previously out of reach.

Smaller environments have grown larger, as have their needs for storage systems, while higher-end solutions have scaled down to meet needs in different markets. What this means is a convergence where smaller environments have bigger data storage needs and can afford the capabilities of scaled-down or right-sized storage systems such as the VMAX 10K.

Thus, while some of the high-end systems may fade away faster than others, those that continue to evolve and are able to move into different adjacent markets or usage scenarios will be around for some time, at least in some environments.

Avoid confusing what is new and cool falling under industry adoption vs. what is productive and practical for customer deployment. Systems like the VMAX 10K are not for all environments or applications; however, for those who are open to exploring alternative solutions and approaches, it could open new opportunities.

If there is a high-end storage system platform (e.g. Enginuity) that continues to evolve and re-invent itself in terms of moving into or finding new uses and markets, the EMC VMAX would be at or near the top of such a list. For the other vendors of high-end storage systems that are also evolving, you can have an Atta boy or Atta girl as well to make you feel better, loved and not left out or off of such a list. ;)


Disclosure: EMC is not a StorageIO client; however, they have been in the past, directly and via acquisitions that they have done. I am however a customer of EMC via my Iomega IX4 NAS (I never did get the IX2 that I supposedly won at EMCworld ;) ) that I bought on Amazon.com, and indirectly via VMware products that I have; oh, and they did send me a copy of the new book Human Face of Big Data (read more here).

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC VMAX 10K, looks like high-end storage systems are still alive

StorageIO industry trends cloud, virtualization and big data

This is the first in a multi-part series of posts looking at whether large enterprise and legacy storage systems are dead, along with what today's EMC VMAX 10K updates mean.

EMC has announced an upgrade, refresh or new version of their previously announced Virtual Matrix (VMAX) 10,000 (10K), part of the VMAX family of enterprise-class storage systems formerly known as DMX (Direct Matrix) and Symmetrix. I will get back to more coverage of the VMAX 10K and other EMC enhancements in parts two and three of this series.

Have you heard the industry myth about the demise or outright death of traditional storage systems? This has been particularly the case for high-end enterprise-class systems, which, by the way, were first declared dead back in the mid-1990s at the hands of then-emerging mid-range storage systems.

Enterprise-class storage systems include the EMC VMAX, Fujitsu Eternus DX8700, HDS high-end systems, and the HP XP P9000 based on the HDS high-end product (OEM from HDS parent Hitachi Ltd.). Note that some HPers or their fans might argue that the P10000 (formerly known as 3PAR), positioned as tier 1.5, should also be on the list; I will leave that up to you to decide.

Let us not forget the IBM DS8000 series (whose predecessors were the ESS, and VSS before that); although some IBMers will tell you that XIV should also be on this list. High-end enterprise-class storage systems such as those mentioned above are not alone in being declared dead at the hands of new all solid-state device (SSD) systems and their startup vendors, or mixed and hybrid-based solutions.

Some are even declaring the traditional mid-range storage systems, which were supposed to have killed off the enterprise systems a decade ago, dead at the hands of new SSD appliances or systems, storage hypervisors and virtual storage arrays (VSA) (hmm, DejaVu?).

The mid-range storage systems include, among others, block (SAN and DAS) and file (NAS) systems from Data Direct Networks (DDN), Dell Compellent, EqualLogic and MD series (NetApp Engenio-based), EMC VNX and Isilon, Fujitsu Eternus, and HDS HUS mid-range, formerly known as AMS. Let us not forget about HP 3PAR or P2000 (DotHill-based) or P6000 (EVA, which is probably being put out to rest). Then there are the various IBM products (their own and what they OEM from others), NEC, NetApp (FAS and Engenio), Oracle and Starboard (formerly known as Reldata). Note that many startups could be on the above list as well, except that they consider the systems above dead; being on the list would make them extinct too, how ironic ;).

What are some industry trends that I am seeing?

  • Some vendors and products might be nearing the ends of their useful lives
  • Some vendors, their products and portfolios continue to evolve and expand
  • Some vendors and their products are moving into new or adjacent markets
  • Some vendors are refining where and what to sell, when and to whom
  • Some vendors are moving up market, some down market
  • Some vendors are moving into new markets, others are moving out of markets
  • Some vendors are declaring others dead to create a new market for their products
  • One size or approach or technology does not fit all needs, avoid treating all the same
  • Leverage multiple tools and technology in creative ways
  • Maximize return on innovation (the new ROI) by using various tools, technologies in ways to boost productivity, effectiveness while removing complexity and cost
  • Realization that cutting cost can result in reduced resiliency; thus look for and remove complexity, with the benefit of removing cost without compromise
  • Storage arrays are moving into new roles, including as back-end storage for cloud, object and other software stacks running on commodity servers to replace JBOD (DejaVu anyone?).

Keep in mind that there is a difference between industry adoption (what is talked about) and customer deployment (what is actually bought and used). Likewise, there is technology based on GQ (looks and image) and G2 (functionality, experience).

There is also an industry myth that SSD cannot be or has not been successful in traditional storage systems, which in some cases has been true with some products or vendors. Otoh, some vendors such as EMC, NetApp and Oracle (among others) are having good success with SSD in their storage systems. Some SSD startup vendors have been successful on both the G2 and GQ fronts, while some that focus on GQ or image may not be as successful (or at least not yet) in the industry adoption vs. customer deployment game.

For the above-mentioned storage system vendors and products (among others), or at least most of them, there is still plenty of life in them; granted, their role and usage are changing, including in some cases being found as back-end storage systems behind servers running virtualization, cloud, object storage and other storage software stacks. Likewise, some of the new and emerging storage systems (hardware, software, valueware, services) and vendors have bright futures, while others may end up on the where-are-they-now list.

Are high-end enterprise class or other storage arrays and systems dead at the hands of new startups, virtual storage appliances (VSA), storage hypervisors, storage virtualization, virtual storage and SSD?

Are large storage arrays dead at the hands of SSD?

Have SSDs been unsuccessful with storage arrays (with poll)?

 

Here are links to two polls where you can cast your vote.

Cast your vote and see the results on whether large storage arrays and systems are dead here.

Cast your vote and see the results on whether SSD has been successful in storage systems here.

So what about it, are enterprise or large storage arrays and systems dead?

Perhaps in some tabloids or industry myths (or what some wish for), in some customer environments, and for some vendors or their products, that can be the case.

However, IMHO, for many other environments (and vendors) the answer is no; granted, some will continue to evolve from legacy high-end enterprise-class storage systems to mid-range, to appliance or VSA, or to something else.

There is still life in many of the storage system architectures, platforms and products that have been declared dead for over a decade.

Continue reading about the specifics of the EMC VMAX 10K announcement in the next post in this series here. Also check out Chuck's EMC blog to see what he has to say.

Ok, nuff said (for now).

Cheers gs


Cloud conversation, Thanks Gartner for saying what has been said

StorageIO industry trends cloud, virtualization and big data

Thank you, Gartner, for your statements concurring with and endorsing the notion that clouds can be viable (however, do your homework); welcome to the club.

Why am I thanking Gartner?

Simple: I appreciate Gartner now saying what has been said for a couple of years, hoping it will help amplify the theme to the Gartner followers and faithful.

Gartner: Cloud storage viable option, but proceed carefully


Images licensed for use by StorageIO via Atomazul / Shutterstock.com

Sounds like Gartner has come to the same conclusion that has been voiced for several years now in posts, articles, keynotes, presentations, webinars and other venues: when it comes to IT clouds, do not be scared. However, do your homework, be prepared, do your due diligence and run proof of concepts.

Image of clouds, cloud and virtual data storage networking book

Here are some related materials to prepare and plan for IT clouds (public and private):

What is your take on IT clouds? Click here to cast your vote and see what others are thinking about clouds.

Now, for those who feel that free information or content is not worth its price, feel free to go to Amazon and buy some book copies here, subscribe to the Kindle version of the StorageIOblog, or contact us for an advisory consultation or other project. For everybody else, enjoy and remember: don't be scared of clouds, do your homework, be prepared and keep in mind that clouds are a shared responsibility.

Disclosure: I was a Gartner client when I was working in an IT organization and then later as a vendor, however not anymore ;).

Ok, nuff said.

Cheers gs


Cloud conversations: Gaining cloud confidence from insights into AWS outages (Part II)

StorageIO industry trends cloud, virtualization and big data

This is the second in a two-part industry trends and perspective looking at learning from cloud incidents, view part I here.

There is good information, insight and lessons to be learned from cloud outages and other incidents.

Sorry, cynics, no, that does not mean an end to clouds; they are here to stay. However, when and where to use them, along with what best practices to apply and how to be ready and configure them for use, are part of the discussion. This means that clouds may not be for everybody or all applications, at least not today. For those who are into clouds for the long haul (either all in or partially), including current skeptics, there are many lessons to be learned and leveraged.

To gain confidence in clouds, a question I routinely am asked is: are clouds more or less reliable than what you are doing? It depends on what you are doing, and how you will be using the cloud services. If you are applying HA and other BC or resiliency best practices, you may be able to configure for and isolate from the more common situations. On the other hand, if you are simply using the cloud services as a low-cost alternative, selecting the lowest price and service class (SLAs and SLOs), you might get what you paid for. Thus, clouds are a shared responsibility: the service provider has things they need to do, and the user or person designing how the service will be used has some decision-making responsibilities.
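To make the user side of that shared responsibility concrete, one common resiliency practice is retrying transient service failures with exponential backoff rather than failing on the first error. A minimal generic sketch (not tied to any particular cloud SDK; the function names are illustrative):

```python
import time

def call_with_retries(operation, attempts=5, base_delay=0.1):
    """Retry a callable with exponential backoff between attempts."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries, surface the failure
            time.sleep(base_delay * (2 ** attempt))

# Illustrative flaky service that fails twice before succeeding.
calls = {"count": 0}
def flaky_service():
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result = call_with_retries(flaky_service)
```

Retries only mask short transient faults; designing across availability zones or providers is what covers the larger outages discussed here.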

Keep in mind that high availability (HA), resiliency, business continuance (BC) and disaster recovery (DR) are the sum of several pieces. This includes people, best practices, processes including change management, good design that eliminates points of failure and isolates or contains faults, along with the components or technology used (e.g. hardware, software, networks, services, tools). Good technology used in good ways can be part of a highly resilient, flexible and scalable data infrastructure. Good technology used in the wrong ways may not leverage the solutions to their full potential.

While it is easy to focus on the physical technologies (servers, storage, networks, software, facilities), many of the cloud services incidents or outages have involved people, process and best practices so those need to be considered.

These incidents or outages bring awareness, a level set, that this is still early in the cloud evolution lifecycle; it is time to move beyond seeing clouds as just a way to cut cost and to see the importance and value of HA, resiliency, BC and DR. This means learning from mistakes, taking action to correct or fix errors, and finding and removing points of failure, all part of a technology maturing or of the use of it. These all tie into having services with service level agreements (SLAs) and service level objectives (SLOs) for availability, reliability, durability, accessibility, performance and security, among others, to protect against mayhem or other things that can and do happen.

Images licensed for use by StorageIO via
Atomazul / Shutterstock.com

The reason I mentioned earlier that AWS had another incident is that, like their peers or competitors who have had incidents in the past, AWS appears to be going through some growing, maturing and evolution-related activities. During summer 2012 there was an AWS incident that affected Netflix (read more here: AWS and the Netflix Fix?). It should also be noted that there were earlier AWS outages where Netflix (read about the Netflix architecture here) leveraged resiliency designs to try to prevent mayhem when others were impacted.

Is AWS a lightning rod for things to happen, a point of attraction for Mayhem and others?

Granted, given their size, scope of services and how they are being used on a global basis, AWS is blazing new territory and gaining new experiences, similar to what other information services delivery platforms did in the past. What I mean is that, while taken for granted today, open systems Unix, Linux and Windows-based platforms, along with client-server, midrange or distributed systems, not to mention mainframe hardware, software, networks, processes, procedures and best practices, all went through growing pains.

There are a couple of interesting threads going on over in various LinkedIn Groups based on some reporters' stories, including speculation about what happened, followed by some good discussions of what actually happened and how to prevent a recurrence in the future.

Over in the Cloud Computing, SaaS & Virtualization group forum, this thread is based on a Forbes article (Amazon AWS Takes Down Netflix on Christmas Eve) and involves conversations about SLAs, best practices, HA and related themes. Have a look at the story the thread is based on, some of the assertions being made, and the ensuing discussions.

Also over at LinkedIn, in the Cloud Hosting & Service Providers group forum, this thread is based on a story titled Why Netflix’ Christmas Eve Crash Was Its Own Fault with a good discussion on clouds, HA, BC, DR, resiliency and related themes.

Over at the Virtualization Practice, there is a piece titled Is Amazon Ruining Public Cloud Computing? with comments from me and Adrian Cockcroft (@Adrianco) a Netflix Architect (you can read his blog here). You can also view some presentations about the Netflix architecture here.

What this all means

Saying you get what you pay for would be too easy and perhaps not applicable.

There are good services that are free or low-cost, just like good free content and other things; however, vice versa, just because something costs more does not make it better.

Otoh, there are services that charge a premium yet may have no better, if not worse, reliability; the same goes for fee-based content or perceived value that is no better than what you get for free.

Additional related material

Some closing thoughts:

  • Clouds are real and can be used safely; however, they are a shared responsibility.
  • Only you can prevent cloud data loss, which means do your homework, be ready.
  • If something can go wrong, it probably will, particularly if humans are involved.
  • Prepare for the unexpected and clarify assumptions vs. realities of service capabilities.
  • Leverage fault isolation and containment to prevent rolling or spreading disasters.
  • Look at cloud services beyond lowest cost or for cost avoidance.
  • What is your organization's culture for learning from mistakes vs. fixing blame?
  • Ask yourself if you, your applications and organization are ready for clouds.
  • Ask your cloud providers if they are ready for you and your applications.
  • Identify what your cloud concerns are to decide what can be done about them.
  • Do a proof of concept to decide what types of clouds and services are best for you.

Do not be scared of clouds, however be ready, do your homework, learn from the mistakes, misfortune and errors of others. Establish and leverage known best practices while creating new ones. Look at the past for guidance to the future, however avoid clinging to, and bringing the baggage of the past to the future. Use new technologies, tools and techniques in new ways vs. using them in old ways.

Ok, nuff said.

Cheers gs


Cloud conversations: Gaining cloud confidence from insights into AWS outages

StorageIO industry trends cloud, virtualization and big data

This is the first of a two-part industry trends and perspectives series looking at how to learn from cloud outages (read part II here).

In case you missed it, there were some public cloud outages during the recent Christmas 2012 holiday season. One incident involved Microsoft Xbox (view the Microsoft Azure status dashboard here), where users were impacted, and the other was another Amazon Web Services (AWS) incident. Microsoft and AWS are not alone; most if not all cloud services have had some type of incident and have gone on to improve from those outages. Google has had issues with different applications and services, including some in December 2012, along with a Gmail incident that received coverage back in 2011.

For those interested, here is a link to the AWS status dashboard and a link to the AWS December 24 2012 incident postmortem. In the case of the recent AWS incident, which affected users such as Netflix, the incident (read the AWS postmortem and Netflix postmortem) was tied to a human error. This is not to say AWS has more outages or incidents vs. others including Microsoft; it just seems that we hear more about AWS when things happen compared to others. That could be due to AWS's size and arguably market-leading status, diversity of services, and the scale at which some of their clients are using them.

Btw, if you were not aware, Microsoft Azure is about more than just supporting SQL Server, Exchange, SharePoint or Office; it is also an IaaS layer for running virtual machines such as Hyper-V, as well as a storage target for storing data. You can use Microsoft Azure storage services as a target for backing up or archiving, or as general storage, similar to using AWS S3, Rackspace Cloud Files or other services. Some backup and archiving AaaS and SaaS providers, including Evault, partner with Microsoft Azure as a storage repository target.
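As a small sketch of using object storage as a backup target, the snippet below builds a date-stamped object key and shows (commented out) how such an archive might be uploaded to AWS S3 with boto3; an Azure Blob or Rackspace Cloud Files upload would be analogous. The bucket and path names are made up for illustration.

```python
import datetime

def backup_key(hostname, when=None):
    """Build a date-stamped object key for a backup archive."""
    when = when or datetime.datetime.utcnow()
    return f"backups/{hostname}/{when:%Y-%m-%d}.tar.gz"

# Uploading with boto3 (requires AWS credentials to be configured):
# import boto3
# boto3.client("s3").upload_file("/tmp/backup.tar.gz",
#                                "my-backup-bucket",   # illustrative name
#                                backup_key("server01"))

key = backup_key("server01", datetime.datetime(2012, 12, 24))
```

Date-stamped keys of this sort make retention policies and restores straightforward, since each day's archive lands at a predictable path.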

When reading some of the coverage of these recent cloud incidents, I am not sure if I am more amazed by some of the marketing cloud washing, or by the cloud bashing and uninformed reporting or lack of research and insight. Then again, if someone repeats a myth often enough for others to hear and repeat, as it gets amplified, the myth may assume the status of reality. After all, you may know the expression that if it is on the internet then it must be true?

Images licensed for use by StorageIO via
Atomazul / Shutterstock.com

Have AWS and public cloud services become a lightning rod for when things go wrong?

Here is some coverage of various cloud incidents:

The above are a small sampling of different stories, articles, columns, blogs, perspectives about cloud services outages or other incidents. Assuming the services are available, you can Google or Bing many others along with reading postmortems to gain insight into what happened, the cause, effect and how to prevent in the future.

Do these recent incidents show a trend of increased cloud outages? Alternatively, do they say that the cloud services are being used more and on a larger basis, thus the impacts become more known?

Perhaps it is a mix of the above; like when a magnetic storage tape gets lost or stolen, it makes for good news or copy, something to write about. Granted, there are fewer tapes actually lost than in the past, and far fewer vs. lost or stolen laptops and other devices with data on them. There are probably other reasons, such as the lightning rod effect: given how much industry hype there is around clouds, when something does happen, the cynics or foes come out in force, sometimes with FUD.

Similar to traditional hardware or software product vendors, some service providers have even tried to convince me that they have never had an incident, or lost, corrupted or compromised any data; yeah, right. Candidly, I put more credibility and confidence in a vendor or solution provider who tells me that they have had incidents and have taken steps to prevent them from recurring. Granted, some of those steps might be made public while others might be under NDA; at least they are learning and implementing improvements.

As part of gaining insights, here are some links to AWS, Google, Microsoft Azure and other service status dashboards where you can view current and past situations.

What is your take on IT clouds? Click here to cast your vote and see what others are thinking about clouds.

Ok, nuff said for now (check out part II here).

Disclosure: I am a customer of AWS for EC2, EBS, S3 and Glacier as well as a customer of Bluehost for hosting and Rackspace for backups. Other than Amazon being a seller of my books (and my blog via Kindle) along with running ads on my sites and being an Amazon Associates member (Google also has ads), none of those mentioned are or have been StorageIO clients.

Cheers gs
