Vote for top 2013 vBlogs, thanks for your continued support

Eric Siebert (@Ericsiebert) author of the book Maximum vSphere (get your copy on Amazon.com here) has opened up voting for the annual top vBlog over at his site (vSphere-land).

While there is a focus on VMware and virtualization blogs, there are also other categories such as storage, scripting and podcasting, as well as an independent category for non-vendor and non-VAR bloggers.

VMware vExpert

It is an honor to be included in the polling along with my many 2012 fellow vExperts on the list.

Last year I made Eric’s 2012 top 50 list as well as appearing in the storage and some other categories in those rankings (thanks to all who voted last year).

This year I forgot to nominate myself (it is a self-nomination process), so while I am not in the storage, independent blogger or podcast sub-categories, I am included in the general voting, having made the top 50 list last year (#46).

A summary of Eric's recommended voting criteria (vs. basic popularity) is:

  • Longevity: How long has somebody been blogging and posting, vs. starting and stopping?
  • Length: Short quick snippet posts vs. more original content; time and effort vs. just posting.
  • Frequency: How often do posts appear? Lots of short pieces vs. regular longer ones vs. an occasional post.
  • Quality: What's in the post? Original ideas, tips, information, insight, analysis and thought perspectives vs. reposting or reporting what others are doing.

Voting is now open (click on the vote image) and closes on March 1, 2013, so if you read this or any of my other posts and comments, or listen to our new podcasts at storageio.tv (also on iTunes), please consider casting a vote.

Thank you in advance for your continued support, and watch for more posts, comments, perspectives and podcasts about data and information infrastructure topics, trends, tools and techniques including servers, storage, IO networking, cloud, virtualization, backup/recovery, BC, DR and data protection, along with big and little data (among other things).

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

VCE revisited, now & zen

StorageIO Industry trends and perspectives image

Yesterday VCE and their proud parents announced revenues had reached an annual run rate of a billion dollars. Today VCE announced some new products along with enhancements to others.

Before going forward though, let's go back for a moment to help set the stage and see where things might be going in the future. A little over three years ago, back in November 2009, VCE was born and initially named Acadia by its proud parents (Cisco, EMC, Intel and VMware). Here is a post that I did back then.

Btw, the reference to Zen might cause some to think that I don't know how to properly refer to the Xen hypervisor. It is really a play on Robert Plant's album Now & Zen and its song Tall Cool One. For those not familiar, click on the link and listen (some will have déjà vu, others might think it's new and cool) as it takes a look back as well as at the present, similar to VCE.

Robert Plant Now & Zen vs. Xen hypervisor

On the other hand, this might prompt the question of when will Xen be available on a Vblock? For that I defer you to VCE CTO Trey Layton (@treylayton).

VCE stands for Virtual Computing Environment and was launched as a joint initiative, including products and a company (since renamed from Acadia to VCE), to bring all the pieces together. As a company, VCE is based in Plano (Richardson), Texas, just north of downtown Dallas and down the road from EDS, or what is now left of it after the HP acquisition. The primary product of VCE has been the Vblock. The Vblock is a converged solution comprising components from their parents such as VMware virtualization and management software tools, Cisco servers, EMC storage and software tools, and Intel processors.

Not surprisingly there are many ex-EDS personnel at VCE, along with some Cisco, EMC, VMware and many other people from other organizations, in Plano as well as other cities. It is also interesting to note that unlike other youngsters who grow up and stay in touch with their parents via technology or social media tools, VCE is more than a few miles (try hundreds to thousands) from the proud parents' headquarters in the San Jose, California and Boston areas.

As part of a momentum update, VCE and their parents (Cisco, EMC, VMware and Intel) announced an annual revenue run rate of a billion dollars in just three years. In addition, the proud parents and VCE announced that they have over 1,000 revenue-shipped and installed Vblock systems (also here) based on Cisco compute servers and EMC storage solutions.

The VCE announcement consists of:

  • SAP HANA database application optimized Vblocks (two models: 4 node and 8 node)
  • VCE Vision management tools and middleware, or what I have referred to as Valueware
  • Entry level Vblock (100 and 200) with Cisco C servers and EMC (VNXe and VNX) storage
  • Performance and functionality enhancements to existing Vblock models 300 and 700
  • Statement of direction for more specialized Vblocks besides SAP HANA


Images courtesy with permission of VCE.com

While VCE is known for their Vblock converged stack, integrated data center in a box, private cloud, or other such descriptors, there is more to the story. VCE is addressing convergence of common IT building blocks for cloud, virtual, and traditional physical environments. Common core building blocks include servers (compute or processors), networking (IO and connectivity), storage, hardware, software and management tools, along with people, processes, metrics, policies and protocols.

Storage I/O image of cloud and virtual IT building blocks

I like the visual image that VCE is using (see below) as it aligns with and has themes common to what I have been discussing in the past.


Images courtesy with permission of VCE.com

VCE Vision is software with APIs that collects information about Vblock hardware and software components to give insight to other tools and management frameworks, for example a VMware vCenter plug-in and a vCenter Operations Manager adapter, which should not be a surprise. Customers will also be able to write to the Vision API to meet their custom needs. Let us watch and see what VCE does to add support for other software and management tools, along with gaining support from others.


Images courtesy with permission of VCE.com

Vision is more than just an information source feed for VMware vCenter or VASA or tools and frameworks from others. Vision is software developed by VCE that will enable insight and awareness into the Vblock and its applications, as well as confirm and give status of physical and logical component configuration. This makes it the basis for setting up automated or programmatic remediation, such as determining what software or firmware to update based on different guidelines.


Images courtesy with permission of VCE.com

Initially VCE Vision provides an information inventory and perspective on whether those components are in compliance with firmware or software releases. VCE is indicating that Vision will continue to evolve (after all, this is the V1.0 release), with future enhancements targeted towards taking action, controlling or active management, so stay tuned.
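To make the compliance idea concrete, here is a minimal sketch of the kind of check described above: comparing an inventory of component versions against a target release matrix. All component names, versions and the function itself are made up for illustration; this is not the actual VCE Vision API.

```python
# Hypothetical compliance check, in the spirit of what VCE Vision is described
# as doing: compare inventoried Vblock component versions against a release
# matrix and flag what needs updating. Names and versions are invented.
target_matrix = {"fabric-switch": "5.2.1", "storage-array": "31.5", "hypervisor": "5.1"}
inventory = {"fabric-switch": "5.2.1", "storage-array": "31.0", "hypervisor": "5.1"}

def compliance_report(inventory, target):
    """Return per-component status: 'ok' or the version to update to."""
    return {name: ("ok" if version == target.get(name)
                   else "update to " + str(target.get(name)))
            for name, version in inventory.items()}

print(compliance_report(inventory, target_matrix))
# {'fabric-switch': 'ok', 'storage-array': 'update to 31.5', 'hypervisor': 'ok'}
```

From a report like this, a tool could then decide which updates to apply automatically and which to flag for an operator, which is where the "taking action" enhancements mentioned above would come in.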

StorageIO Industry trends and perspectives image

Some trends, thoughts and perspectives

The industry adoption buzz is around software defined X, where X can be data center (SDDC), storage (SDS), networking (SDN), marketing (SDM) or other things. There is hype and noise around software defined, which in the case of some technologies is good. On the marketing hype side, this has led to some Software Defined BS (SDBS).

Thus, it was refreshing at least in the briefing session I was involved in to hear a minimum focus around software defined and more around customer and IT business enablement with technology that is shipping today.

VCE Vision is a good example of adding value hence what I refer to as Valueware around converged components. For those vendors who have similar solutions, I urge them to streamline, simplify and more clearly articulate their value proposition if they have valueware.

Vendors including VCE continue to evolve their platform-based converged solutions by adding more valueware, management tools, interfaces, APIs, interoperability and support for more applications. The support for applications is also moving beyond simple line-item ordering or part number SKUs to ease acquisition and purchasing. Solutions including VCE Vblock, NetApp FlexPod (which also uses Cisco compute servers), IBM PureSystems (PureFlex etc.) and Dell vStart, among others, are extending their support and optimization for various software solutions. These range from SAP (including HANA), Microsoft (Exchange, SQL Server, SharePoint), Citrix desktop (VDI), Oracle, OpenStack and Hadoop MapReduce, along with other little-data, big-data and big-bandwidth applications, to name a few.

Additional and related reading:
Acadia VCE: VMware + Cisco + EMC = Virtual Computing Environment
Cloud conversations: Public, Private, Hybrid what about Community Clouds?
Cloud, virtualization, Storage I/O trends for 2013 and beyond
Convergence: People, Processes, Policies and Products
Hard product vs. soft product
Hardware, Software, what about Valueware?
Industry adoption vs. industry deployment, is there a difference?
Many faces of storage hypervisor, virtual storage or storage virtualization
The Human Face of Big Data, a Book Review
Why VASA is important to have in your VMware CASA

Congratulations to VCE, along with their proud parents, family, friends and partners; now how long will it take to reach your next billion dollars in annual run rate revenue? Hopefully it won't be three years until the next VCE revisited now and Zen ;).

Disclosure: EMC and Cisco have been StorageIO clients, I am a VMware vExpert that gets me a free beer after I pay for VMworld and Intel has named two of my books listed on their Recommended Reading List for Developers.

Ok, nuff said, time to head off to vBeers over in Minneapolis.

Cheers gs


NetApp EF540, something familiar, something new

StorageIO Industry trends and perspectives image

NetApp announced the other day a new all nand flash solid-state device (SSD) storage system called the EF540 that is available now. The EF540 has some things new and cool, along with some things familiar, tried, true and proven.

What is new is that the EF540 is an all nand flash multi-level cell (MLC) SSD storage system. What is old is that the EF540 is based on the NetApp E-Series (read more here and here) and SANtricity software, with hundreds of thousands of installed systems. As a refresher, the E-Series are the storage system technologies and solutions obtained via the Engenio acquisition from LSI in 2011.

Image of NetApp EF540 via ntapgeek.com
Image via www.ntapgeek.com

The EF540 expands the NetApp SSD flash portfolio, which includes products such as FlashCache (read cache, aka PAM) for controllers in ONTAP based storage systems. Other items in the NetApp flash portfolio include FlashPool SSD drives for persistent read and write storage in ONTAP based systems. Complementing FlashCache and FlashPool is the server-side PCIe caching card and software, FlashAccel. NetApp claims to have revenue shipped 36PB of flash complementing over 3 Exabytes (EB) of storage, while continuing to ship a large number of SAS and SATA HDDs.

NetApp also previewed its future FlashRay storage system that should appear in beta later in 2013 and general availability in 2014.

In addition to SSD and flash related announcements, NetApp also announced enhancements to its ONTAP FAS/V6200 series including the FAS/V6220, FAS/V6250 and FAS/V6290.

Some characteristics of the NetApp EF540 and SANtricity include:

  • Two models with 12 or 24 x 6Gbs SAS 800GB MLC SSD devices
  • Up to 9.6TB or 19.2TB physical storage in a 2U (3.5 inch) tall enclosure
  • Dual controllers for redundancy, load-balancing and availability
  • IOPS performance of over 300,000 4KB random 100% reads at under 1ms
  • 6GB/sec performance on 512KB sequential reads, 5.5GB/sec on random reads
  • Multiple RAID levels (0, 1, 10, 3, 5, 6) and flexible group sizes
  • 12GB of DRAM cache memory in each controller (mirrored)
  • 4 x 8GFC host server-side ports per controller
  • Optional expansion host ports (6Gb SAS, 8GFC, 10Gb iSCSI, 40Gb InfiniBand/SRP)
  • Snapshots and replication (synchronous and asynchronous), including to HDD-based systems
  • Can be used for traditional IOP-intensive little-data, or bandwidth for big-data
  • Proactive SSD wear monitoring and notification alerts
  • Utilizes SANtricity version 10.84
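As a quick sanity check on the performance bullets above (a back-of-the-envelope sketch, assuming 4KB = 4,096 bytes and decimal gigabytes), small random IOPS and large sequential bandwidth are two views of the same box:

```python
# 300,000 random 4KB read IOPS works out to roughly 1.2GB/sec of bandwidth:
random_bw_gb = 300_000 * 4 * 1024 / 1e9
# Conversely, 6GB/sec of 512KB sequential reads is only ~11,400 IOPS:
sequential_iops = 6e9 / (512 * 1024)
print(round(random_bw_gb, 2), round(sequential_iops))  # 1.23 11444
```

This is why, as noted later in this post, comparisons should look at both IOPS and MB/sec (and latency) rather than a single headline number.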

Poll: Are large storage arrays' days numbered?

EMC and NetApp (along with other vendors) continue to sell large numbers of HDDs as well as large amounts of SSD. Both EMC and NetApp are taking similar approaches of leveraging PCIe flash cards as cache, adding software functionality to complement underlying storage systems. The benefit is that the cache approach is less disruptive for many environments while allowing improved return on investment (ROI) on existing assets.

                                     EMC          NetApp
Storage systems with HDD and SSD     VMAX, VNX    FAS/V, E-Series
Storage systems with SSD cache       FastCache    FlashCache
All SSD based storage                VMAX, VNX    EF540
All new SSD system in development    Project X    FlashRay
Server side PCIe SSD cache           VFCache      FlashAccel
Partner ecosystems                   Yes          Yes

The best IO is the one that you do not have to do; however, the next best are those that have the least cost or impact, which is where SSD comes into play. SSD is like real estate in that location matters in terms of providing benefit, as does how much space or capacity is needed.

What does this all mean?
The NetApp EF540, based on the E-Series storage system architecture, is like one of its primary competitors (e.g. the EMC VNX, also available as an all-flash model). The similarity is that the two have long been competitors, and both architectures have been around for over a decade with hundreds of thousands of installed systems. Both also continue to evolve their code bases, leveraging new hardware and software functionality. These improvements have resulted in better performance, availability, capacity, energy effectiveness and cost reduction.

What's your take on RAID still being relevant?

From a performance perspective, there are plenty of public workloads and benchmarks, including Microsoft ESRP and SPC among others, to confirm its performance. Watch for NetApp to release EF540 SPC results given their history of doing so with other E-Series based systems. With those or other results, compare and contrast to other solutions, looking not just at IOPS or MB/sec (bandwidth), but also latency, functionality and cost.

What does the EF540 compete with?
The EF540 competes with all-flash SSD solutions (Violin, SolidFire, Pure Storage, Whiptail, Kaminario, IBM/TMS and the upcoming EMC Project "X" (aka XtremIO)) among others. Some of those systems use general-purpose servers combined with SSD drives or PCIe cards along with management software, where others leverage customized platforms with software. To a lesser extent, competition will also come from mixed-mode SSD and HDD solutions, along with some PCIe target SSD cards for some situations.

What to watch and look for:
It will be interesting to view and contrast public price-performance results using SPC or Microsoft ESRP, among others, to see how the EF540 compares, and to compare it against other storage and SSD systems beyond just the number of IOPS. Keep an eye on latency, as well as bandwidth, feature functionality and associated costs.

Given that the NetApp E-Series are OEMed or sold by third parties, let's see if something looking similar or identical to the EF540 appears at any of those or new partners. This includes traditional general-purpose and little-data environments, along with cloud, managed service provider, high performance compute and high productivity compute (HPC), super computer (SC), big data and big bandwidth among others.

Poll: Have SSDs been successful in traditional storage systems and arrays?

The EF540 could also appear as a storage or IO accelerator for large scale-out, clustered, grid and object storage systems for metadata, indices and key value stores, among other uses, either direct attached to servers or via shared iSCSI, SAS, FC and InfiniBand (IBA) SCSI RDMA Protocol (SRP).

Keep an eye on how the startups that have been primarily Just a Bunch Of SSD (JBOS) in a box start talking about adding new features and functionality such as snapshots, replication or price reductions. Also, keep an eye and ear open to what EMC does with project “X” along with NetApp FlashRay among other improvements.

For NetApp customers, prospects, partners, E-Series OEMs and their customers with the need for IO consolidation or performance optimization for big-data, little-data and related applications, the EF540 opens up new opportunities and should be good news. For EMC and other competitors, there is now new competition, which also signals an expanding market with new opportunities in adjacent areas for growth. This further signals the need for diverse SSD portfolios and product options to meet different customer application needs, along with increased functionality vs. lowest cost for high-capacity fast nand SSD storage.


Disclosure: NetApp, Engenio (when LSI), EMC and TMS (now IBM) have been clients of StorageIO.

Ok, nuff said

Cheers gs


Speaking of SSDs (with poll)

StorageIO Industry trends and perspectives image

In the spirit of solid state devices (SSD), including DRAM and nand flash, not to mention emerging phase change memory (PCM) among others, that help to boost productivity and cut latency, here are a couple of quick notes and links.

Here are some more pieces to have a quick look at:
SSD & Real Estate: Location, Location, Location matters
SSD Is in Your Future: Where, When & With What Are the Questions
Storage & IO trends for 2013 and beyond

SSD, flash and DRAM, DejaVu or something new?

Storage I/O ssd timeline image

Is SSD only for performance?
Have SSDs been unsuccessful with storage arrays (with poll)?
End the Hardware Numbers Game

Desum poll planned SSD use image
Image via 21cit (desum): The SSD hardware numbers game

What’s your take on SSD in storage arrays, cast your vote and see results here.

Also check out here what Micron has in mind with merging nand flash with the DDR4 (e.g. DRAM socket) memory bus for servers in a year or two.

Ok, nuff said.

Cheers gs


VMware buys virsto, is it about storage hypervisors?

StorageIO Industry trends and perspectives image

Yesterday VMware announced that it is acquiring the IO performance optimization and acceleration software vendor Virsto for an undisclosed amount.

Some may know Virsto for latching onto and jumping on the storage hypervisor bandwagon as part of storage virtualization and virtual storage. On the other hand, some may know Virsto for their software that plugs into server virtualization hypervisors such as VMware and Microsoft Hyper-V. Then there are all of those who either did not or still do not know of Virsto or their solutions, yet need to learn about them.

Unlike virtual storage arrays (VSAs), virtual storage appliances, or storage virtualization software that aggregates storage, the Virsto software addresses the IO performance aggravation caused by aggregation.

Keep in mind that the best IO is the IO that you do not have to do. The second best IO is the one that has the least impact and is cost effective. A common approach, or best practice preached by some vendors, for the IO bottlenecks that result from server virtualization and virtual desktop infrastructure (VDI) is to throw more SSD or HDD hardware at the problem.

server virtualization aggregation causing aggravation

Turns out that the problem with virtual machines (VMs) is not just aggregation (consolidation) causing aggravation, it is also the mess of mixed applications and IO profiles. That is where IO optimization and acceleration tools come into play, plugged into applications, file systems, operating systems, hypervisors or storage appliances.

In the case of Virsto (read more about their solution here), their technology plugs into the hypervisor (e.g. VMware vSphere/ESX or Hyper-V) to group and optimize IO operations.

By using SSD as a persistent cache, tools such as Virsto can help make better use of underlying storage systems including HDD and SSD, while also removing the aggravation as a result of aggregation.
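Virsto's actual implementation is proprietary, but the general pattern described above, fronting slower HDD storage with a faster persistent cache tier so repeat reads avoid the slow path, can be sketched generically. This is a purely illustrative LRU read cache, not Virsto's design:

```python
from collections import OrderedDict

class ReadCache:
    """Illustrative LRU read cache fronting a slower backing store.
    A generic sketch of the SSD-cache concept, not Virsto's product."""
    def __init__(self, backing, capacity):
        self.backing = backing          # dict-like slow tier (think HDD)
        self.capacity = capacity        # number of blocks the fast tier holds
        self.cache = OrderedDict()      # fast tier (think SSD), LRU-ordered
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # mark most recently used
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = self.backing[block_id]          # slow path: go to the HDD tier
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

hdd = {i: f"block-{i}" for i in range(10)}
c = ReadCache(hdd, capacity=4)
for i in [0, 1, 2, 0, 1, 3]:
    c.read(i)
print(c.hits, c.misses)  # 2 4
```

The repeat reads of blocks 0 and 1 never touch the slow tier, which is the "better use of underlying storage" benefit: the HDDs serve fewer, mostly first-touch, IOs.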

What will be interesting to watch is whether VMware continues to support other hypervisors such as Microsoft Hyper-V or closes the technology to VMware only.

It will also be interesting to see how VMware and their parent EMC can leverage Virsto technology to complement virtual SANs as well as VSAs and underlying hardware, from VFCache to storage arrays with SSD and SSD appliances, as opposed to competing with them.

With the Virsto technology now part of VMware, hopefully there will be less time spent talking about storage hypervisors and more around server IO optimization and enablement, to create broader awareness for the technology.

Congratulations to VMware (and EMC) along with Virsto.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud, virtualization, Storage I/O trends for 2013 and beyond

StorageIO Industry trends and perspectives image

It is still early in 2013, so I can make some cloud, virtualization, storage and IO related predictions, or more aptly, talk about some trends, in addition to those that I made in late 2012, looking forward and back. Common overriding themes will continue to include convergence (people and technology), valueware, and clouds (public, private, hybrid and community), among others.

cloud virtualization storage I/O data center image

Certainly, solid state drives (SSDs) will remain popular, both in terms of industry adoption and industry deployment. Big-data (and little-data) management tools and purpose-built storage systems or solutions continue to be popular. On the cloud storage front, there are many options for various use cases available. Watch for more emphasis on service-level agreements (SLA), service-level objectives (SLO), security, pricing transparency, and tiers of service.

storage I/O rto rpo dcim image

Cloud and object storage will continue to gain in awareness, functionality, and options from various providers in terms of products, solutions, and services. There will be a mix of large-scale solutions and smaller ones, with a mix of open-source and proprietary pieces. Some of these will be for archiving, some for backup or data protection. Others will be for big-data, high-performance computing, or cloud on a local or wide area basis, while others for general file sharing.

Ceph object storage architecture example

Along with cloud and object storage, watch for more options for how those products or services can be accessed, using traditional NAS (NFS, CIFS, HDFS and others) along with block such as iSCSI, and object APIs including Amazon S3, REST, HTTP, JSON, XML, iOS and CDMI, along with programmatic bindings.

Data protection modernization, including backup/restore, high-availability, business continuity, disaster recovery, archiving, and related technologies for cloud, virtual, and traditional environments will remain popular themes.

cloud and virtual data center image

Expect more Fibre Channel over Ethernet for networking with your servers and storage, PCIe Gen 3 to move data in and out of servers, and Serial-attached SCSI (SAS) as a means of attaching storage to servers or as the back-end storage for larger storage systems and appliances. For those who like to look out over the horizon, keep an eye and ear open for more discussion around PCIe Gen 3 deployment and Gen 4 definitions, not to mention DDR4 and nand flash moving closer to the processors.

With VMware buying Virsto, that should keep software defined marketing (SDM), storage hypervisors, storage virtualization, virtual storage and virtual storage arrays (VSAs) active topic themes. Let's also keep in mind, for storage space capacity optimization, data footprint reduction (DFR) including archiving, backup and data protection modernization, compression, consolidation, dedupe and data management.

Ok, nuff said.

Cheers gs


Tape is still alive, or at least in conversations and discussions

StorageIO Industry trends and perspectives image

Depending on whom you talk to or ask, you will get different views and opinions, some of them stronger than others, on whether magnetic tape is dead or alive as a data storage medium. However, an aspect of tape that is alive is the discussion by those for it, against it, or who simply see it as one of many data storage mediums and technologies whose role is changing.

Here is a link to an ongoing discussion in one of the LinkedIn group forums (Backup & Recovery Professionals) titled About Tape and disk drives. Rest assured, there is plenty of FUD and hype on both sides of the tape is dead (or alive) argument, not very different from the disk is dead vs. SSD or cloud arguments. After all, not everything is the same in data centers, clouds and information factories.

Fwiw, I removed tape from my environment about 8 years ago, or I should say directly, as some of my cloud providers may in fact be using tape in various ways that I do not see; nor do I care one way or the other as long as my data is safe, secure and protected and SLAs are met. Likewise, I consult and advise for organizations where tape still exists yet its role is changing, same with those using disk and cloud.

Storage I/O data center image

I am not ready to adopt the singular view that tape is dead yet, as I know too many environments that are still using it; however, I agree that its role is changing, thus I am not part of the tape cheerleading camp.

On the other hand, I am a fan of using disk-based data protection along with cloud in new and creative ways (including for my own use) as part of modernizing data protection. Although I see disk as having a very bright and important future beyond what it is being used for now, I am not ready to join the chants of tape is dead either.

StorageIO Industry trends and perspectives image

Does that mean I can’t decide or don’t want to pick a side? NO

It means that I do not have to, nor should anyone have to, choose a side; instead look at your options, what you are trying to do, and how you can leverage different things, techniques and tools to maximize your return on innovation. If that means that tape is being phased out of your organization, good for you. If that means there is a new or different role for tape in your organization, co-existing with disk, then good for you.

If somebody tells you that tape sucks and that you are dumb and stupid for using it, without giving any informed basis for those comments, then call them dumb and stupid, requesting they come back when they can learn more about your environment, needs and requirements, ready to have an informed discussion on how to move forward.

Likewise, if you can make an informed value proposition on why and how to migrate to new ways of modernizing data protection without having to stoop to the tape is dead argument, or can cite some research or whatever, good for you, and start telling others about it.

StorageIO Industry trends and perspectives image

Otoh, if you need to use FUD and hype on why tape is dead, why it sucks or is bad, at least come up with some new and relevant facts, third-party research, arguments or value propositions.

You can read more about tape and its changing role at tapeisalive.com or Tapesummit.com.

Ok, nuff said.

Cheers gs


In the data center or information factory, not everything is the same

StorageIO Industry trends and perspectives image

Sometimes what should be understood, what is common sense, or what you think everybody should know needs to be stated. After all, there could be somebody who does not know what some assume is common sense, or what others know, for various reasons. At times, there is simply the need to restate or have a reminder of what should be known.

Storage I/O data center image

Consequently, in the data center or information factory, whether traditional, virtual, converged, private, hybrid or public cloud, everything is not the same. When I say not everything is the same, I mean that different applications have various service level objectives (SLOs) and service level agreements (SLAs). These are based on different characteristics, from performance, availability, reliability and responsiveness to cost, security and privacy, among others. Likewise, there are different sizes and types of organizations with various requirements, from enterprise to SMB, ROBO and SOHO, business or government, education or research.

Various levels of HA, BC and DR

There are also different threat risks for various applications or information services within an organization, or across different industry sectors. Thus there are various needs for meeting availability SLAs, recovery time objectives (RTOs) and recovery point objectives (RPOs) for data protection, ranging from backup/restore to high-availability (HA), business continuance (BC), disaster recovery (DR) and archiving. Let us not forget about logical and physical security of information, assets, people, processes and intellectual property.
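As a simple illustration of the RPO concept (a sketch with made-up numbers, not a formal definition): with fixed-interval backups, the worst-case data loss after a failure is roughly the backup interval plus the time the backup itself takes to complete, which is why different applications need different protection schemes.

```python
def worst_case_rpo(interval_hours, backup_duration_hours=0.0):
    """Rough worst-case data-loss window for fixed-interval backups:
    a failure just before the current backup completes loses everything
    written since the previous successful backup began."""
    return interval_hours + backup_duration_hours

# Nightly backups that take 2 hours to run: up to ~26 hours of data at risk.
print(worst_case_rpo(24, 2))  # 26
```

An application that can only tolerate minutes of loss therefore needs replication or continuous protection rather than nightly backups, while one that can absorb a day of loss may be fine with the simpler, cheaper scheme.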

Storage IO RTO and RPO image

Some data centers or information factories are compute intensive while others are data centric, some are IO or activity intensive with a mix of compute and storage. On the other hand, some data centers such as a communications hub may be network centric with very little data sticking or being stored.

SLA and SLO image

Even within a data center or information factory, various applications will have different profiles and protection requirements for big data and little data. There can also be a mix of old legacy applications and new systems developed in-house, purchased, open-source based, or accessed as a service. The servers and storage may be software defined (a new buzzword that has already jumped the shark), virtualized, or operated in a private, hybrid or community cloud if not using a public service.

Here are some related posts tied to everything is not the same:
Optimize Data Storage for Performance and Capacity
Is SSD only for performance?
Cloud conversations: Gaining cloud confidence from insights into AWS outages
Data Center Infrastructure Management (DCIM) and IRM
Saving Money with Green IT: Time To Invest In Information Factories
Everything Is Not Equal in the Datacenter, Part 1
Everything Is Not Equal in the Datacenter, Part 2
Everything Is Not Equal in the Datacenter, Part 3

Storage I/O data center image

Thus, not all things are the same in the data center, or information factories, both those under traditional management paradigms, as well as those supporting public, private, hybrid or community clouds.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

January 2013 Server and StorageIO Update Newsletter

StorageIO Newsletter Image
January 2013 Newsletter

Welcome to the January 2013 edition of the StorageIO Update newsletter, including a new format and added content.

You can get access to this newsletter via various social media venues (some are shown below) in addition to StorageIO web sites and subscriptions.

Click on the following links to view the January 2013 edition as an HTML (sent via email) version, or as a PDF version.

Visit the newsletter page to view previous editions of the StorageIO Update.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

Nuff said for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Putting some VMware ESX storage tips together: (Part II)

In the first part of this post I showed how to use a tip from Duncan Epping to fool VMware into thinking that an HHDD (Hybrid Hard Disk Drive) was an SSD.

Now let's look at using a tip from Dave Warburton to turn an internal SATA HDD into an RDM for one of my Windows-based VMs.

My challenge was that I had a VM with a guest that I wanted to have access to a SATA HDD as a Raw Device Mapping (RDM), except the device was an internal SATA drive. Going by the standard tools and some of the material available, it would have been easy to give up and quit, since the SATA device was not attached to an FC or iSCSI SAN (such as my Iomega IX4 that I bought from Amazon.com).

Image of internal RDM with vMware
Image of internal SATA drive being added as a RDM with vClient

Thanks to Dave's great post that I found, I was able to create an RDM of an internal SATA drive and present it to the existing VM running Windows 7 Ultimate, and it is now happy, as am I.

Pay close attention to make sure that you get the correct device name for the steps in Dave’s post (link is here).

Pay close attention to make sure that you get the correct device name for the steps in Dave's post. From the ESX command line, I found the name of the device that I wanted to use:

t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5

Then I used the following ESX shell command per Dave’s tip to create an RDM of an internal SATA HDD:

vmkfstools -z /vmfs/devices/disks/t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5 /vmfs/volumes/dat1/rdm_ST1500L.vmdk
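Putting the steps together, here is a minimal sketch of the sequence per Dave's tip. The device and datastore names are from my setup, so substitute your own; with DRYRUN left at its default of 1, the commands are only printed and nothing is changed.

```shell
#!/bin/sh
# Sketch of the internal-RDM steps per Dave's tip. Device and datastore
# names are from my setup; substitute your own. With DRYRUN=1 (default)
# the commands are only printed, so nothing is touched.
DEVICE="t10.ATA_____ST1500LM0032D9YH148_____Z110S6M5"
RDM_VMDK="/vmfs/volumes/dat1/rdm_ST1500L.vmdk"

run() {
  if [ "${DRYRUN:-1}" = "1" ]; then
    echo "$@"    # dry run: show the command instead of executing it
  else
    "$@"
  fi
}

# Confirm the exact device name first (a wrong name can clobber data)
run ls /vmfs/devices/disks/

# Create the physical-mode RDM pointer file (-z) on an existing datastore
run vmkfstools -z "/vmfs/devices/disks/${DEVICE}" "${RDM_VMDK}"
```

Once the RDM pointer file exists on the datastore, the remaining steps happen in the vSphere client as described below.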

Then the next steps were to update an existing VM using vSphere client to use the newly created RDM.

Hint: pay very close attention to your device naming, along with what you name the RDM and where you put it. Also, I recommend trying or practicing on a spare or scratch device first in case something gets messed up. I practiced on an HDD used for moving files around; after doing the steps in Dave's post, I added the RDM to an existing VM, started the VM and accessed the HDD to verify all was fine (it was). After shutting down the VM, I removed the RDM from it as well as from ESX, and then created the real RDM.

As per Dave's tip, vSphere Client did not recognize the RDM per se; however, after telling it to look at existing virtual disks and browsing the data stores, lo and behold, the RDM I was looking for was there. The following shows an example of using vSphere to add the new RDM to one of my existing VMs.

In case you are wondering why I wanted to make a non-SAN HDD into an RDM vs. doing something else: simple, the HDD in question is a 1.5TB drive holding backups that I want to use as is. The HDD is also BitLocker protected, and I want the flexibility to remove the device if I have to and access it via a non-VM based Windows system.


Image of my VMware server with internal RDM and other items

Could I have accomplished the same thing using a USB attached device accessible to the VM?

Yes, and in fact that is how I do periodic updates to removable media (HDD using Seagate Goflex drives) where I am not as concerned about performance.

While I back up off-site to Rackspace and AWS clouds, I also have a local disk based backup, along with creating periodic full Gold or master off-site copies. The off-site copies are made to removable Seagate GoFlex SATA drives using a USB-to-SATA GoFlex cable. I also have the GoFlex eSATA-to-SATA cable that comes in handy to quickly attach a SATA device to anything with an eSATA port, including my Lenovo X1.

As a precaution, I used a different HDD containing data I was not concerned about to test the process before doing it with the drive containing backup data. Also as a precaution, the data on the backup drive is backed up to removable media and to my cloud provider as well.

Thanks again to both Dave and Duncan for their great tips; I hope that you find these and other material on their sites as useful as I do.

Meanwhile, time to get some other things done, as well as continue looking for and finding good workarounds and tricks to use in my various projects; drop me a note if you see something interesting.

Ok, nuff said for now.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Putting some VMware ESX storage tips together (Part I)

Have you spent time searching the VMware documentation, on-line forums, venues and books trying to figure out how to make a local dedicated direct attached storage (DAS) device (e.g. SATA or SAS) into a Raw Device Mapping (RDM)? Part two of this post looks at how to make an RDM using an internal SATA HDD.

Or how about making a Hybrid Hard Disk Drive (HHDD), which is faster than a regular Hard Disk Drive (HDD) on reads yet offers more capacity at less cost than a Solid State Device (SSD), actually appear to VMware as an SSD?

Recently I had these and some other questions and spent some time looking around, thus this post highlights some great information I have found for addressing the above VMware challenges and some others.

VMware vExpert image

The SSD solution is via a post I found on fellow VMware vExpert Duncan Epping's Yellow Bricks site, which, if you are into VMware or server virtualization in general, and in particular a fan of high availability (general or virtual specific), you should add to your reading list. Duncan also has some great books to add to your bookshelves, including VMware vSphere 5.1 Clustering Deepdive (Volume 1) and VMware vSphere 5 Clustering Technical Deepdive, which you can find at Amazon.com.

VMware vSphere 5 Clustering Technical Deepdive book image

Duncan's post shows how to fool VMware into thinking that an HDD is an SSD for testing or other purposes. Since I have some Seagate Momentus XT HHDDs that combine the capacity (and cost) of a traditional HDD with read performance closer to an SSD (without the cost or capacity penalty), I was interested in trying Duncan's tip (here is a link to his tip). Essentially, Duncan's tip shows how to use the esxcli storage nmp satp and esxcli storage core commands to make a non-SSD look like an SSD.

The commands that were used from the VMware shell per Duncan’s tip:

esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba0:C0:T1:L0 --option "enable_local enable_ssd"
esxcli storage core claiming reclaim -d mpx.vmhba0:C0:T1:L0
esxcli storage core device list --device=mpx.vmhba0:C0:T1:L0

After all, if the HHDD is actually doing some of the work to boost performance and thus fool the OS or hypervisor into thinking it is faster than an HDD, why not tell the OS or hypervisor, in this case VMware ESX, that it is an SSD. So far I have not seen, nor do I expect to notice, anything different in terms of performance, as that boost already occurred going from a 7,200 RPM (7.2K) HDD to the HHDD.

If you know how to tell what type of HDD or SSD a device is by reading its sense code and model number information, you will recognize the circled device as a Seagate Momentus XT HHDD. This particular model is a Seagate Momentus XT II 750GB with 8GB of SLC NAND flash SSD memory integrated inside the 2.5-inch drive.

Normally the Seagate HHDDs appear to the host operating system, or whatever they are attached to, as a Momentus 7200 RPM SATA disk drive. Since there are no special device drivers, controllers, adapters or anything else required, the Momentus XT type HHDDs are essentially plug and play. After a bit of time they start learning and caching things to boost read performance (read more about boosting read performance, including Windows boot testing, here).

Image of VMware vSphere vClient storage devices
Screen shot showing Seagate Momentus XT appearing as a SSD

Note that the HHDD (a Seagate Momentus XT II) is a 750GB 2.5-inch SATA drive that boosts read performance with the current firmware. Seagate has hinted that there could be a future firmware version to enable write caching or optimization; however, I have been waiting for over a year.

Disclosure: Seagate gave me an evaluation copy of my first HHDD a couple of years ago and I then went on to buy several more from Amazon.com. I have not had a chance to try any Western Digital (WD) HHDDs yet, however I do have some of their HDDs. Perhaps I will hear something from them sometime in the future.

For those who are SSD fans or actually have them: yes, I know SSDs are faster all around, and that is why I have some, including in my Lenovo X1. Thus, for write-intensive use, go with a full SSD today if you can afford one, as I have with my Lenovo X1, which enables me to save large files faster (less time waiting). However, if you want the best of both worlds for a lab or other system that does more reads than writes, and need as much capacity as possible without breaking the budget, check out the HHDDs.

Thanks for the great tip and information Duncan, in part II of this post, read how to make an RDM using an internal SATA HDD.


Ok, nuff said (for now)…

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Thanks for viewing StorageIO content and top 2012 viewed posts

StorageIO industry trends cloud, virtualization and big data

2012 was a busy year (it was our 7th year in business), with plenty of activity on StorageIOblog.com as well as on the various syndicate and other sites that pick up our content feed (https://storageioblog.com/RSSfull.xml).

Excluding traditional media venues, columns, articles, web casts and web site visits (StorageIO.com and StorageIO.TV), StorageIO generated content including posts and pod casts has reached over 50,000 views per month (and growing) across StorageIOblog.com and our partner or syndicated sites. Including both public and private, there were about four dozen in-person events and activities, not counting attending conferences or vendor briefing sessions, along with plenty of industry commentary. On the Twitter front, there was plenty of activity as well, closing in on 7,000 followers.

Thank you to everyone who has visited the sites where you will find StorageIO generated content, along with industry trends and perspective comments, articles, tips, webinars, live in-person events and other activities.

In terms of what was popular on the StorageIOblog.com site, here are the top 20 viewed posts in alphabetical order.

Amazon cloud storage options enhanced with Glacier
Announcing SAS SANs for Dummies book, LSI edition
Are large storage arrays dead at the hands of SSD?
AWS (Amazon) storage gateway, first, second and third impressions
EMC VFCache respinning SSD and intelligent caching
Hard product vs. soft product
How much SSD do you need vs. want?
Oracle, Xsigo, VMware, Nicira, SDN and IOV: IO IO its off to work they go
Is SSD dead? No, however some vendors might be
IT and storage economics 101, supply and demand
More storage and IO metrics that matter
NAD recommends Oracle discontinue certain Exadata performance claims
New Seagate Momentus XT Hybrid drive (SSD and HDD)
PureSystems, something old, something new, something from big blue
Researchers and marketers dont agree on future of nand flash SSD
Should Everything Be Virtualized?
SSD, flash and DRAM, DejaVu or something new?
What is the best kind of IO? The one you do not have to do
Why FC and FCoE vendors get beat up over bandwidth?
Why SSD based arrays and storage appliances can be a good idea

Moving beyond the top twenty read posts on StorageIOblog.com site, the list quickly expands to include more popular posts around clouds, virtualization and data protection modernization (backup/restore, HA, BC, DR, archiving), general IT/ICT industry trends and related themes.

I would like to thank the current StorageIOblog.com site sponsors Solarwinds (management tools including response time monitoring for physical and virtual servers) and Veeam (VMware and Hyper-V virtual server backup and data protection management tools) for their support.

Thanks again to everyone for reading and following these and other posts as well as for your continued support, watch for more content on the above and other related and new topics or themes throughout 2013.

Btw, if you are into Facebook, you can give StorageIO a like at facebook.com/storageio (thanks in advance) along with viewing our newsletter here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Summary, EMC VMAX 10K, high-end storage systems stayin alive

StorageIO industry trends cloud, virtualization and big data

This is a follow-up companion post to the larger industry trends and perspectives series from earlier today (Part I, Part II and Part III) pertaining to today's VMAX 10K enhancement and other announcements by EMC, and the industry myth that large storage arrays or systems are dead.

The enhanced VMAX 10K scales from a couple of dozen up to 1,560 HDDs (or mix of HDD and SSDs). There can be a mix of 2.5 inch and 3.5 inch devices in different drive enclosures (DAE). There can be 25 SAS based 2.5 inch drives (HDD or SSD) in the 2U enclosure (see figure with cover panels removed), or 15 3.5 inch drives (HDD or SSD) in a 3U enclosure. As mentioned, there can be all 2.5 inch (including for vault drives) for up to 1,200 devices, all 3.5 inch drives for up to 960 devices, or a mix of 2.5 inch (2U DAE) and 3.5 inch (3U DAE) for a total of 1,560 drives.
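As a quick sanity check of those drive counts (the DAE counts below are my own inference of one combination that reproduces the quoted numbers, not something EMC published):

```shell
#!/bin/sh
# Sanity check of the VMAX 10K drive counts quoted above. The DAE counts
# are my own inference (one combination that reproduces the numbers),
# not from EMC's announcement.
PER_2U=25    # 2.5 inch drives per 2U DAE
PER_3U=15    # 3.5 inch drives per 3U DAE

ALL_25=$((48 * PER_2U))                # all 2.5 inch
ALL_35=$((64 * PER_3U))                # all 3.5 inch
MIXED=$((48 * PER_2U + 24 * PER_3U))   # one possible 2.5/3.5 mix

echo "all 2.5in: $ALL_25  all 3.5in: $ALL_35  mixed: $MIXED"
```

The arithmetic lines up with the 1,200 all 2.5 inch, 960 all 3.5 inch, and 1,560 mixed figures.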

Image of EMC 2U and 3U DAE for VMAX 10K via EMC
Image courtesy EMC

Note carefully in the figure (courtesy of EMC) that the 2U 2.5 inch DAE and 3U 3.5 inch DAE along with the VMAX 10K are actually mounted in a 3rd cabinet or rack that is part of today’s announcement.

Also note that the DAEs are still EMC's; however, as part of today's announcement, certain third-party cabinets or enclosures, such as might be found in a collocation (colo) or other data center environment, can be used instead of EMC cabinets. The VMAX 10K can, however, like the VMAX 20K and 40K, support virtualized external storage, similar to what has been available from HDS (VSP/USP) and HP branded Hitachi equivalent storage, or using NetApp V-Series or IBM V7000 in a similar way.

As mentioned in one of the other posts, there are various software functionality bundles available. Note that SRDF is a separate license from the bundles to give customers options including RecoverPoint.

Check out the three post industry trends and perspectives posts here, here and here.

Ok, nuff said.

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

EMC VMAX 10K, looks like high-end storage systems are still alive (part III)

StorageIO industry trends cloud, virtualization and big data

This is the third in a multi-part series of posts (read first post here and second post here) looking at what else EMC announced today in addition to an enhanced VMAX 10K and dispelling the myth that large storage arrays are dead (or at least for now).

In addition to the VMAX 10K specific updates, EMC also announced the release of a new version of their Enginuity storage software (firmware, storage operating system). Enginuity is supported across all VMAX platforms and features the following:

  • Replication enhancements include TimeFinder clone refresh, restore and four-site SRDF for the VMAX 10K, along with thick or thin support. This capability enables functionality across VMAX 10K, 40K or 20K using synchronous or asynchronous replication, and extends the earlier three-site support to four sites and mixed modes. Note that the larger VMAX systems already had the extended replication feature support, with the VMAX 10K now on par with those. Note also that the VMAX can be enhanced with VPLEX in front of storage systems (local or wide area, in-region HA and out-of-region DR) and RecoverPoint behind the systems, supporting bi-synchronous (two-way), synchronous and asynchronous data protection (CDP, replication, snapshots).
  • Unisphere for VMAX 1.5 manages DMX, along with VMware VAAI UNMAP and space reclamation, block zero and hardware clone enhancements, IPv6, Microsoft Windows Server 2012 support and VFCache 1.5.
  • Support for mix of 2.5 inch and 3.5 inch DAEs (disk array enclosures) along with new SAS drive support (high-performance and high-capacity, and various flash-based SSD or EFD).
  • The addition of a fourth dynamic tier within FAST for supporting third-party virtualized storage, along with compression of inactive, cold or stale data (manual or automatic) with a 2:1 data footprint reduction (DFR) ratio. Note that EMC was one of the early vendors to put compression into its storage systems on a block LUN basis in the CLARiiON (now VNX), along with NetApp and IBM (via their Storwize acquisition). The new fourth tier also means that third-party storage does not have to be the lowest tier in terms of performance or functionality.
  • Federated Tiered Storage (FTS) is now available on all EMC block storage systems including those with third-party storage attached in virtualization mode (e.g. VMAX). In addition to supporting tiering across its own products, and those of other vendors that have been virtualized when attached to a VMAX, ANSI T10 Data Integrity Field (DIF) is also supported. Read more about T10 DIF here, and here.
  • Front-end performance enhancements with host I/O limits (Quality of Service or QoS) for multi tenant and cloud environments to balance or prioritize IO across ports and users. This feature can balance based on thresholds for IOPS, bandwidth or both from the VMAX. Note that this feature is independent of any operating system based tool, utility, pathing driver or feature such as VMware DRS and Storage I/O control. Storage groups are created and mapped to specific host ports on the VMAX with the QoS performance thresholds applied to meet specific service level requirements or objectives.

For discussion (or entertainment) purposes, how about the question of whether Enginuity qualifies as, or can be considered, a storage hypervisor (or storage virtualization or virtual storage)? After all, the VMAX is now capable of having third-party storage from other vendors attached to it, something that HDS has done for many years now. For those who feel a storage hypervisor, virtual storage or storage virtualization requires software running on Intel or other commodity-based processors, guess what the VMAX uses for CPU processors (granted, you cannot simply download the Enginuity software and run it on a Dell, HP, IBM, Oracle or SuperMicro server).

I am guessing some of EMC's competitors and their surrogates, or others who like to play the storage hypervisor card game, will be quick to tell you that it is not, based on various reasons or product comparisons; however, you be the judge.


Back to the question from part one in this series: are traditional high-end storage arrays dead or dying?

IMHO as mentioned not yet.

Granted, like other technologies that have been declared dead or dying yet are still in use (technology zombies), they continue to be enhanced, find new customers, or see existing customers use them in new ways; their roles are evolving, thus they are still alive.

For some environments as has been the case over the past decade or so, there will be a continued migration from large legacy enterprise class storage systems to midrange or modular storage arrays with a mix of SSD and HDD. Thus, watch out for having a death grip not letting go of the past, while being careful about flying blind into the future. Do not be scared, be ready, do your homework with clouds, virtualization and traditional physical resources.

Likewise, there will be the continued migration for some from traditional mid-range class storage arrays to all flash-based appliances. Yet others will continue to leverage all the above in different roles aligned to where their specific features best serve the applications and needs of an organization.

In the case of high-end storage systems such as the EMC VMAX (formerly known as DMX, and Symmetrix before that) based on its Enginuity software, the hardware platforms will continue to evolve, as will the software functionality. This means that these systems will evolve to handle more workloads, as well as move into new environments, from service providers to mid-range organizations where such systems were previously out of reach.

Smaller environments have grown larger, as have their needs for storage systems, while higher-end solutions have scaled down to meet needs in different markets. What this means is a convergence where smaller environments have bigger data storage needs and can afford the capabilities of scaled-down or right-sized storage systems such as the VMAX 10K.

Thus, while some high-end systems may fade away faster than others, those that continue to evolve and are able to move into adjacent markets or usage scenarios will be around for some time, at least in some environments.

Avoid confusing what is new and cool falling under industry adoption vs. what is productive and practical for customer deployment. Systems like the VMAX 10K are not for all environments or applications; however, for those who are open to exploring alternative solutions and approaches, it could open new opportunities.

If there is a high-end storage system platform (e.g. Enginuity) that continues to evolve and re-invent itself by moving into or finding new uses and markets, the EMC VMAX would be at or near the top of such a list. For the other vendors of high-end storage systems that are also evolving, you can have an atta boy or atta girl as well, to make you feel better, loved, and not left out or off of such a list. ;)

Ok, nuff said for now.

Disclosure: EMC is not a StorageIO client; however, they have been in the past, directly and via acquisitions that they have done. I am, however, a customer of EMC via my Iomega IX4 NAS (I never did get the IX2 that I supposedly won at EMCworld ;) ) that I bought on Amazon.com, and indirectly via VMware products that I have; oh, and they did send me a copy of the new book The Human Face of Big Data (read more here).

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press, 2011), The Green and Virtual Data Center (CRC Press, 2009), and Resilient Storage Networks (Elsevier, 2004)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved