Non-Disruptive Updates: Needs vs. Wants

Storage I/O trends

Do you want non-disruptive updates, or do you need non-disruptive upgrades?

First, there is a bit of wordplay going on here with needs vs. wants, as well as with what is meant by non-disruptive.

Regarding needs vs. wants, the two are often used interchangeably, particularly in IT when discussing requirements or what the customer would like to have. The key differentiator is that a need is something that is required and somehow cost justified, hopefully more easily than a want item. A want or like-to-have item is simply that: it is not a need, however it could add value as a benefit, although it may be seen as discretionary.

There is also a bit of wordplay with non-disruptive updates or upgrades, which can take on different meanings or assumptions. For example, my Windows 7 laptop has automatic Microsoft updates enabled, some of which can be applied while I work. On the other hand, some of those updates may be applied while I work, however they may not take effect until I reboot or exit and restart an application.

This is not unique to Windows, as my Ubuntu and CentOS Linux systems can also apply updates, and in some cases a reboot might be required; the same goes for my VMware environment. Let's not forget about applying new firmware to a server, workstation, laptop or other device, along with networking routers, switches and related devices. Storage is also not immune, as new software or firmware can be applied to an HDD or SSD (traditional or NVMe), either by your workstation, laptop, server or storage system. Speaking of storage systems, they too have software or firmware that gets updated.

The common theme here is whether the code (e.g. software, firmware, microcode, flash update, etc.) can be applied non-disruptively, something known as non-disruptive code load, followed by activation. With activation, the code may have been applied while the device or software was in use, however a reboot or restart may still be needed. With non-disruptive code activation, there should be no disruption to what is being done when the new software takes effect.

This means that if a device supports both non-disruptive code load (NDCL) updates and non-disruptive code activation (NDCA), the upgrade can occur without disruption or having to wait for a reboot.
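
As a small illustration of NDCL without NDCA: on Debian and Ubuntu systems, packages that were applied while running but still need a reboot to activate signal this via the /var/run/reboot-required flag file. A minimal sketch assuming that convention (the function name is illustrative):

```python
import os

def update_activation_state(flag_path="/var/run/reboot-required"):
    """Classify update state on a Debian/Ubuntu style host.

    NDCL (non-disruptive code load): updates applied while running.
    NDCA (non-disruptive code activation): updates take effect, no reboot needed.
    """
    if os.path.exists(flag_path):
        return "NDCL only: code loaded, reboot required for activation"
    return "NDCA: no pending reboot, updates are active"

print(update_activation_state())
```

Other platforms expose the same distinction differently (e.g. `needs-restarting` on RHEL/CentOS), but the load vs. activate split is the same.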

Which is better?

That depends; I want NDCA, however for many things I only need NDCL.

On the other hand, depending on what you need, perhaps it is both NDCL and NDCA, however also keep in mind needs vs. wants.

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

As the platters spin: HDDs for cloud, virtual and traditional storage environments

Updated 1/23/2018

As the platters spin is a follow-up to a recent series of posts on Hard Disk Drives (HDDs), along with some posts about how many IOPS HDDs can do.

HDD and storage trends and directions include, among others:

HDDs will continue to be declared dead into the next decade, just as they have been for over a decade; meanwhile they are being enhanced and continue to be used in evolving roles.

SSDs will continue to coexist with HDDs, either as separate devices or converged as HHDDs. When, where and how they are used will also continue to evolve. High IO (IOPS) or low latency activity will continue to move to some form of nand flash SSD (with PCM around the corner), while storage capacity, including some of what has been on tape, stays on disk. Instead of more HDD capacity in a server, it moves to a SAN or NAS, or to a cloud or service provider. This includes backup/restore, BC, DR, archive and online reference, or what some call active archives.

The need for storage spindle speed and more

The need for faster revolutions per minute (RPM) performance of drives (e.g. platter spin speed) is being replaced by SSDs and more robust small form factor (SFF) drives. For example, some of today's 2.5” SFF 10,000 RPM (e.g. 10K) SAS HDDs can do as well as or better than their larger 3.5” 15K predecessors for both IOPS and bandwidth. This is also an example of how the RPM speed of a drive may not be the sole determinant of performance, as it has been in the past.


Performance comparison of four different drive types.

The need for storage space capacity and areal density

In terms of storage enhancements, watch for the appearance of Shingled Magnetic Recording (SMR) enabled HDDs to help further boost space capacity in the same footprint. Using SMR, HDD manufacturers can put more bits (e.g. areal density) into the same physical space on a platter.


Traditional vs. SMR to increase storage areal density capacity

The generic idea with SMR is to increase areal density (how many bits can be safely stored per square inch) of data placed on spinning disk platter media. In the above image on the left is a representative example of how traditional magnetic disk media lays down tracks next to each other. With traditional magnetic recording approaches, the tracks are placed as close together as possible for the write heads to safely write data.

With new recording formats such as SMR, along with improvements to read/write heads, the tracks can be more closely grouped together in an overlapping way. This overlapping (used in a generic sense) is like how the shingles on a roof overlap, hence Shingled Magnetic Recording. Other magnetic recording or storage enhancements in the works include Heat Assisted Magnetic Recording (HAMR) and Helium-filled drives. Thus, there is still plenty of room for bits and bytes growth in HDDs well into the next decade to co-exist with and complement SSDs.

DIF and AF (Advanced Format), or software defining the drives

Another evolving storage feature that ties into HDDs is the Data Integrity Field (DIF), which has a couple of different types. Depending on which type of DIF (0, 1, 2, or 3) is used, there can be added data integrity checks from the application down to the storage medium or drive, beyond normal functionality. Here is something to keep in mind: since there are different types or levels of DIF, when somebody says they support or need DIF, ask them which type or level as well as why.

Are you familiar with Advanced Format (AF)? If not, you should be. Traditionally, outside of special formats for some operating systems or controllers, the standard open systems data storage block, page or sector has been 512 bytes. This has served well in the past; however, with the advent of TByte and larger sized drives, a new mechanism is needed. The need is to support both larger average data allocation sizes from operating systems and storage systems, as well as to cut the overhead of managing all the small sectors. Operating systems and file systems have added new partitioning features such as the GUID Partition Table (GPT) to support 2TB and larger SSD, HDD and storage system LUNs.

These enhancements are enabling larger devices to be used in place of the traditional Master Boot Record (MBR) or other operating system partition and allocation schemes. The next step, however, is to teach operating systems, file systems and hypervisors, along with their associated tools or drivers, how to work with 4,096 byte (4 KByte) sectors. The advantage will be cutting the overhead of tracking all of those smaller sectors or file system extents and clusters. Today many HDDs support AF, however by default they may have 512-byte emulation mode enabled due to a lack of operating system or other support.
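
The bookkeeping savings of Advanced Format are straightforward arithmetic: at the same capacity, a drive with 4KiB sectors has one eighth as many sectors to track as one with 512 byte sectors. A quick sketch (the helper name is illustrative):

```python
def sector_counts(capacity_bytes):
    """Return (512B sector count, 4KiB sector count, reduction factor)
    for a drive of the given capacity."""
    legacy = capacity_bytes // 512   # traditional 512 byte sectors
    af = capacity_bytes // 4096      # Advanced Format 4KiB sectors
    return legacy, af, legacy // af

legacy, af, ratio = sector_counts(4 * 10**12)  # a 4TB (decimal) drive
print(f"512B sectors: {legacy:,}  4K sectors: {af:,}  ({ratio}x fewer to track)")
```

The same factor-of-eight reduction applies to anything tracked per sector, from bad-block maps to per-sector ECC overhead.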

Intelligent Power Management, moving beyond drive spin down

Intelligent Power Management (IPM) is a collection of techniques that can be applied to vary the amount of energy consumed by a drive, controller or processor to do its work. In the case of an HDD, these include slowing the spin rate of platters; however, keep in mind that mass in motion tends to stay in motion. This means that HDDs, once up and spinning, do not need as much relative power, as they function like a flywheel. Where their power draw comes in is during reads and writes, due in part to the movement of read/write heads, however also for running the processors and electronics that control the device. Another big power consumer is when drives spin up, thus if they can be kept moving, albeit at a lower rate, along with disabling energy used by read/write heads and their electronics, you can see a drop in power consumption. Btw, a current generation 3.5” 4TB 6Gbs SATA HDD consumes about 6-7 watts of power while in active use, or less when in idle mode. Likewise a current generation high performance 2.5” 1.2TB HDD consumes about 4.8 watts of energy, a far cry from the 12-16 plus watts of energy some use as HDD FUD.
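
Using the wattage figures above, the annual energy footprint per drive is easy to estimate (the $0.10/kWh rate below is an assumed example, not a quoted figure):

```python
def annual_kwh(watts, hours_per_year=24 * 365):
    """kWh consumed in a year at a constant power draw."""
    return watts * hours_per_year / 1000.0

# Wattages cited in the text above; $0.10/kWh is an assumed example rate
for label, watts in [('3.5" 4TB SATA', 6.5), ('2.5" 1.2TB', 4.8)]:
    kwh = annual_kwh(watts)
    print(f"{label}: {kwh:.2f} kWh/year, ~${kwh * 0.10:.2f}/year")
```

Multiply by drive count (and a cooling overhead factor for your facility) to see why a watt or two per spindle matters at scale.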

Hybrid Hard Disk Drives (HHDD) and Solid State Hybrid Drives (SSHD)

Hybrid HDDs (HHDDs), also known as Solid State Hybrid Drives (SSHDs), have been around for a while, and if you have read my earlier posts, you know that I have been a user and fan of them for several years. However, one of the drawbacks of HHDDs has been the lack of write acceleration (e.g. they only optimize for reads) in some models. Current and emerging HHDDs are appearing with a mix of nand flash SLC (used in earlier versions), MLC and eMLC along with DRAM while enabling write optimization. There are also more drive options available as HHDDs from different manufacturers for both desktop and enterprise class scenarios.

The challenge with HHDDs is that many vendors either do not understand how they fit and complement their tiering or storage management software tools, or simply do not see the value proposition. I have had vendors and others tell me that HHDDs don't make sense as they are too simple; how can they be a fit without requiring tiering software, controllers, SSDs and HDDs to be viable?

I also see a trend similar to when the desktop high-capacity SATA drives appeared for enterprise-class storage systems in the early 2000s. Some of the same people did not see where or how a desktop class product or technology could ever be used in an enterprise solution.

Hmm, hey wait a minute, I seem to recall similar thinking when SCSI drives appeared in the early 90s; funny how some things do not change. Déjà vu, anybody?

Does that mean HHDDs will be used everywhere?

Not necessarily; however, there will be places where they make sense, and others where either an HDD or SSD will be more practical.

Networking with your server and storage

Near-term, drive native interfaces will remain 6Gbs (going to 12Gbs) SAS and SATA, with some FC (you might still find a parallel SCSI drive out there). Likewise, with bridges or interface cards, those drives may appear as USB or something else.

What about SCSI over PCIe, will that catch on as a drive interface? Tough to say, however I am sure we can find some people who will gladly try to convince you of that. FC-based drives operating at 4Gbs FC (4GFC) are still being used in some environments, however most activity is shifting over to SAS and SATA. SAS and SATA are switching over from 3Gbs to 6Gbs, with 12Gbs SAS on the roadmaps.

So which drive is best for you?

That depends: do you need bandwidth or IOPS, low latency or high capacity, a small low profile thin form factor, or feature functions? Do you need a hybrid or all SSD, or a self-encrypting device (SED), perhaps with Instant Secure Erase (ISE)? These are among your various options.

Why the storage diversity?

Simple: some are legacy, soon to be replaced and disposed of, while others are newer. I also have a collection, so to speak, that gets used for various testing, research, learning and trying things out. Click here and here to read about some of the ways I use various drives in my VMware environment, including creating Raw Device Mapped (RDM) local SAS and SATA devices.

Other capabilities and functionality existing or being added to HDDs include RAID rebuild and data copy assist, secure erase, self-encryption, and vibration dampening, among other abilities for supporting dense data environments.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

What This All Means

Do not judge a drive only by its interface, space capacity, cost or RPM alone. Look under the cover a bit to see what is inside in terms of functionality, performance, and reliability among other options to fit your needs. After all, in the data center or information factory not everything is the same.

From a marketing and fun-to-talk-about new technology perspective, HDDs might be dead for some. The reality is that they are very much alive in physical, virtual and cloud environments, granted their role is changing.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

Seagate provides proof of life: Enterprise HDD enhancements

Proof of life: Enterprise Hard Disk Drives (HDDs) are enhanced

Last week, while hard disk drive (HDD) competitor Western Digital (WD) was announcing yet another acquisition (VeloBit) in a string of acquisitions (e.g. earlier ones included STEC and Arkeia) and investments (Skyera), Seagate announced new enterprise class HDDs for their portfolio. Note that it was only two years ago that WD acquired Hitachi Global Storage Technologies (HGST), the disk drive manufacturing business of Hitachi Ltd. (not to be confused with HDS).

Similar to WD expanding their presence in the growing nand flash SSD market, Seagate also in May of this year extended their existing enterprise class SSD portfolio. These enhancements included new drives with 12Gbs SAS interface, along with a partnership (and investment) with PCIe flash card startup vendor Virident. Other PCIe flash SSD card vendors (manufacturers and OEMs) include Cisco, Dell, EMC, FusionIO, HP, IBM, LSI, Micron, NetApp and Oracle among others.

These new Seagate enterprise class HDDs are designed for use in cloud and traditional data center servers and storage systems. A month or two ago Seagate also announced new ultra-thin (5mm) client (aka desktop) class HDDs, along with a 3.5 inch 4TB video optimized HDD. The video optimized HDDs are intended for Digital Video Recorders (DVRs), Set Top Boxes (STBs) or other similar applications.

What was announced?

Specifically, Seagate announced two enterprise class drives, one for performance (e.g. 1.2TB 10K) and the other for space capacity (e.g. 4TB).

 

| | Enterprise High Performance 10K.7 (formerly Savvio) | Enterprise Terascale (formerly Constellation) |
|---|---|---|
| Class/category | Enterprise High Performance | Enterprise High Capacity |
| Form factor | 2.5” Small Form Factor (SFF) | 3.5” |
| Interface | 6Gbs SAS | 6Gbs SATA |
| Space capacity | 1,200GB (1.2TB) | 4TB |
| RPM speed | 10,000 | 5,900 |
| Average seek | 2.9 ms | 12 ms |
| DRAM cache | 64MB | 64MB |
| Power idle / operating | 4.8 watts | 5.49 / 6.49 watts |
| Intelligent Power Management (IPM) | Yes – Seagate PowerChoice | Yes – Seagate PowerChoice |
| Warranty | Limited 5 years | Limited 3 years |
| Instant Secure Erase (ISE) | Yes | Optional |
| Other features | RAID rebuild assist, Self-Encrypting Device (SED) | Advanced Format (AF) 4K blocks in addition to standard 512 byte sectors |
| Use cases | Replace earlier generation 3.5” 15K SAS and Fibre Channel HDDs for higher performance applications, including file systems and databases where SSDs are not a practical fit. | Backup and data protection, replication, copy operations for erasure coding and data dispersal, active and dormant archives, unstructured NAS, big data, data warehouse, cloud and object storage. |

Note the Seagate Terascale has a disk rotation speed of 5,900 RPM (5.9K), which is not a typo given the more traditional 5.4K RPM drives. This slight increase from 5.4K to 5.9K should, when combined with other enhancements (e.g. firmware, electronics), boost performance for higher capacity workloads.
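
One way to see why RPM alone is no longer the whole performance story is the classic back-of-envelope model, where a random IO costs an average seek plus half a platter revolution of rotational latency. A rough sketch using the figures above (2.9 ms seek at 10K RPM, 12 ms seek at 5.9K RPM); note this model ignores caching, queuing and firmware optimizations, which is exactly where newer drives gain ground:

```python
def theoretical_iops(avg_seek_ms, rpm):
    """Classic estimate: one random IO costs an average seek plus half a
    platter revolution of rotational latency."""
    half_rev_ms = 0.5 * 60000.0 / rpm  # milliseconds per half revolution
    return 1000.0 / (avg_seek_ms + half_rev_ms)

print(f"10K RPM, 2.9ms seek: ~{theoretical_iops(2.9, 10000):.0f} IOPS")
print(f"5.9K RPM, 12ms seek: ~{theoretical_iops(12, 5900):.0f} IOPS")
```

Measured results often beat these theoretical numbers, which is the point: judge the drive, not just the spindle speed.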

Let us watch for some performance numbers to be published by Seagate or others. Note that I have not had a chance to try these new drives yet, however I look forward to getting my hands on them (among others) sometime in the future for a test drive to add to the growing list found here (hey Seagate and WD, that's a hint ;) ).

What This All Means

Wait, weren't HDDs supposed to be dead or dying?

Some people just like new and emerging things and thus will declare anything existing or that they have lost interest in (or their jobs need it) as old, boring or dead.

For example, if you listen to some, they may say nand flash SSDs are also dead or dying. For what it is worth, imho nand flash-based SSDs still have a bright future in front of them, even with new technologies emerging, as those will take time to mature (read more here or listen here).

However, the reality is that for at least the next decade, like them or not, HDDs will continue to play a role, one that is also evolving. Thus, these and other improvements to HDDs will be needed until current nand flash or emerging PCM (Phase Change Memory), among other forms of SSD, are capable of picking up all the storage workloads in a cost-effective way.

Btw, yes, I am also a fan and user of nand flash-based SSDs, in addition to HDDs, and see roles for both as viable complements to each other in traditional, virtual and cloud environments.

In short, HDDs will keep spinning (pun intended) for some time, granted their roles and usage will also evolve, similar to that of tape summit resources.

This announcement by Seagate, along with other enhancements from WD, shows that the HDD will not only see its 60th birthday (and here), it will probably also easily see its 70th, and not from the comfort of a computer museum. The reason is that there is yet another wave of HDD improvements just around the corner, including Shingled Magnetic Recording (SMR) (more info here) along with Heat Assisted Magnetic Recording (HAMR) among others. Watch for more on HAMR and SMR in future posts. With these and other enhancements, we should be able to see a return to the rapid density improvements with HDDs observed during the mid-to-late 2000s era when perpendicular recording became available.

What is up with this ISE stuff, is that the same as what Xiotech (e.g. XIO) had?

Is this the same technology that Xiotech (now XIO) referred to as ISE? The answer is no. This Seagate ISE is for fast, secure erasure of data on disk. The benefit of Instant Secure Erase (ISE) is to cut the time required to erase a drive for secure disposal from hours or days down to seconds (or less). For those environments that already factor drive erase time into their overall costs, this can increase the useful time in service to help improve TCO and ROI.

Wait a minute, aren't slower RPMs supposed to mean lower performance?

Some of you might be wondering or asking: wait, how can a 10,000 revolution per minute (10K RPM) HDD be considered fast vs. a 15K HDD, let alone an SSD?

There is a trend occurring with HDDs where the old rules of IOPS or performance being tied directly to the size, rotational speed (RPM) and interface of drives no longer hold. This comes down to being careful not to judge a book, or in this case a drive, by its cover. While RPMs do have an impact on performance, new generation drives at 10K, such as some 2.5” models, are delivering performance equal to or better than earlier generation 3.5” 15K devices.

Likewise, there are similar improvements with 5.4K devices vs. previous generation 7.2K models. As you will see in some of the results found here, not all the old rules of thumb when it comes to drive performance are still valid. Likewise, keep those metrics that matter in the proper context.


Various performance results for different drive types.

For example, as seen in the results (above), more DRAM cache on a drive has a positive impact on sequential reads, which can be good news if that is what your applications need. Thus, do your homework and avoid judging a device simply by its RPM, interface or form factor.

Other considerations, temperature and vibration

Another consideration is that with the increased density of more drives being placed in a given amount of space, some of which may not have the best climate controls, humidity and vibration become concerns. Thus the importance of drives having vibration dampening or other safeguards to maintain performance. Likewise, even though drive heads and platters are sealed, humidity also needs to be taken care of in data centers or cloud service providers operating in hot environments near the equator.

If this is not connecting with you, think about how close parts of Southeast Asia and the Indian subcontinent are to the equator, along with the rapid growth and low-cost focus occurring there. Your data center might be temperature and humidity controlled, however others who are very focused on cost cutting may not be as concerned with normal facilities best practices.

What type of drives should be used for cloud, virtual and traditional storage?

Good question, and one where the answer should be: it depends upon what you are trying or need to do (e.g. see previous posts here or here and here (via Seagate)). For example, here are some tips for big data storage and making storage decisions in general.

Disclosure

Seagate recently invited me along with several other industry analysts to their cloud storage analyst summit in San Francisco, where they covered roundtrip coach airfare, lodging, airport transfers and a nice dinner at the Epic Roasthouse.

I have also received in the past a couple of Momentus XT HHDDs (aka SSHDs) from Seagate. These are in addition to those that I bought, including various Seagate and WD along with HGST, Fujitsu, Toshiba and Samsung drives (SSDs and HDDs) that I use for various things.

Ok, nuff said (for now).

Cheers gs


Can we get a side of context with them IOPS server storage metrics?

What's the best server storage I/O network metric or benchmark? It depends, as there needs to be some context with them IOPS and other server storage I/O metrics that matter.

There is an old saying that the best I/O (Input/Output) is the one that you do not have to do.

In the meantime, let’s get a side of some context with them IOPS from vendors, marketers and their pundits who are tossing them around for server, storage and IO metrics that matter.

Expanding the conversation, the need for more context

The good news is that people are beginning to discuss storage beyond space capacity and cost per GByte, TByte or PByte for DRAM and nand flash Solid State Devices (SSD), Hard Disk Drives (HDD), along with Hybrid HDD (HHDD) and Solid State Hybrid Drive (SSHD) based solutions. This applies to traditional enterprise or SMB IT data centers with physical, virtual or cloud based infrastructures.

This is good because it expands the conversation beyond just cost for space capacity into other aspects, including performance (IOPS, latency, bandwidth) for various workload scenarios, along with availability, energy effectiveness and management.

Adding a side of context

The catch is that IOPS, while part of the equation, are just one aspect of performance; by themselves, without context, they may have little meaning, if not be misleading in some situations.

Granted, it can be entertaining, fun to talk about, or simply make good press copy to cite a million IOPS. IOPS vary in size depending on the type of work being done, not to mention reads or writes, random or sequential, which also have a bearing on data throughput or bandwidth (MBytes per second) along with response time. Not to mention block, file, object or blob, as well as table.

However, are those million IOPS applicable to your environment or needs?

Likewise, what do those million or more IOPS represent about the type of work being done? For example, are they small 64 byte or large 64 KByte sized, random or sequential, cached reads or lazy writes (deferred or buffered), on an SSD or HDD?

How about the response time or latency for achieving them IOPS?

In other words, what is the context of those metrics and why do they matter?

Click on image to view more metrics that matter, including IOPS for HDDs and SSDs

Metrics that matter give context: for example, IO sizes closer to what your real needs are, reads and writes, mixed workloads, random or sequential, sustained or bursty; in other words, reflective of the real world.

As with any benchmark, take them with a grain (or more) of salt; the key is to use them as an indicator and then align them to your needs. The tool or technology should work for you, not the other way around.

Here are some examples of context that can be added to help make IOPS and other metrics matter:

  • What is the IO size; are they 512 bytes (or smaller) vs. 4K bytes (or larger)?
  • Are they reads, writes, random, sequential or mixed, and in what percentage?
  • How was the storage configured, including RAID, replication, erasure or dispersal codes?
  • Then there is the latency or response time and IO queue depth for the given number of IOPS.
  • Let us not forget whether the storage systems (and servers) were busy with other work or not.
  • If there is a cost per IOP, is that list price or discount (hint: if discount, start negotiations from there)?
  • What was the number of threads or workers, along with how many servers?
  • What tool was used, its configuration, as well as raw or cooked (aka file system) IO?
  • Was the IOPS number achieved with one worker or multiple workers, on a single server or multiple servers?
  • Did the IOPS number come from a single storage system or a total across multiple systems?
  • Fast storage needs fast servers and networks; what was their configuration?
  • Was the performance a short burst, or a long sustained period?
  • What was the size of the test data used; did it all fit into cache?
  • Were short stroking (for IOPS) or long stroking (for bandwidth) techniques used?
  • Were data footprint reduction (DFR) techniques (thin provisioning, compression or dedupe) used?
  • Were writes committed synchronously to storage, or deferred (aka lazy writes)?

The above are just a sampling and not all may be relevant to your particular needs; however, they help to put IOPS into more context. Another consideration around IOPS is the configuration of the environment: are the numbers from an actual running application using some measurement tool, or are they generated from a workload tool such as IOmeter, IOrate or VDbench, among others?
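
Several of the context items above (IO size, latency, queue depth) are related arithmetically: bandwidth equals IOPS times IO size, and by Little's Law the average number of IOs in flight equals IOPS times response time. A minimal sketch (the helper names are illustrative, not from any particular tool):

```python
def bandwidth_mb_s(iops, io_size_bytes):
    """Throughput implied by an IOPS number at a given IO size."""
    return iops * io_size_bytes / 1_000_000

def implied_queue_depth(iops, latency_ms):
    """Little's Law: average IOs in flight = arrival rate x response time."""
    return iops * (latency_ms / 1000.0)

# A "million IOPS" claim means very different things at different IO sizes
print(bandwidth_mb_s(1_000_000, 64))      # 64 byte IOs -> 64 MB/s
print(bandwidth_mb_s(1_000_000, 65536))   # 64 KByte IOs -> 65,536 MB/s
print(implied_queue_depth(1_000_000, 1))  # at 1ms latency -> 1,000 IOs in flight
```

The last line is worth dwelling on: a million IOPS at 1ms latency implies roughly a thousand outstanding IOs, which says a lot about how many workers and servers were needed to generate the number.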

Sure, there is more context and information that would be interesting as well; however, learning to walk before running will help prevent falling down.

Does size or age of vendors make a difference when it comes to context?

Some vendors are doing a good job of going for out of this world record-setting marketing hero numbers.

Meanwhile other vendors are doing a good job of adding context to their IOPS, response time or bandwidth, among other metrics that matter. There is a mix of startup and established vendors that give context with their IOPS or other metrics; likewise, size or age does not seem to matter for those who lack context.

Some vendors may not offer metrics or information publicly, so fine, go under NDA to learn more and see if the results are applicable to your environments.

Likewise, if they do not want to provide the context, then ask some tough yet fair questions to decide if their solution is applicable for your needs.

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips, can be found in the Software Defined Data Infrastructure Essentials book.

What This All Means

What this means is let us start putting forth, and asking for, metrics that matter, such as IOPS with context.

If you have a great IOPS metric and you want it to matter, then include some context such as the IO size (e.g. 4K, 8K, 16K, 32K, etc.), percentage of reads vs. writes, latency or response time, and random or sequential.

IMHO the most interesting or applicable metrics that matter are those relevant to your environment and application. For example, if your main application that needs SSD does about 75% reads (random) and 25% writes (sequential) with an average size of 32K, then while fun to hear about, how relevant is a million 64 byte read IOPS? Likewise, when looking at IOPS, pay attention to the latency, particularly if SSD or performance is your main concern.

Get in the habit of asking or telling vendors or their surrogates to provide some context with them metrics if you want them to matter.

So how about some context around them IOP’s (or latency and bandwidth or availability for that matter)?

Ok, nuff said, for now.

Gs


Part II: EMC Evolves Enterprise Data Protection with Enhancements

This is the second part of a two-part series on recent EMC backup and data protection announcements. Read part I here.

What about the products, what’s new?

In addition to articulating their strategy for modernizing data protection (covered in part I here), EMC announced enhancements to Avamar, Data Domain, Mozy and NetWorker.

Data protection storage systems (e.g. Data Domain)

Building off of previously announced Backup Recovery Solutions (BRS) including Data Domain operating system storage software enhancements, EMC is adding more application and software integration along with new platform (systems) support.

Data Domain (e.g. Protection Storage) enhancements include:

  • Application integration with Oracle, SAP HANA for big data backup and archiving
  • New Data Domain protection storage system models
  • Data in place upgrades of storage controllers
  • Extended Retention now available on added models
  • SAP HANA Studio backup integration via NFS
  • Boost for Oracle RMAN, native SAP tools and replication integration
  • Support for backing up and protecting Oracle Exadata
  • SAP (non HANA) support both on SAP and Oracle

Data-in-place upgrades of controllers are available for 4200 series models and up (previously available only on some larger models). This means that controllers can be upgraded with data remaining in place, as opposed to requiring a lengthy data migration.

Extended Retention facility is a zero-cost license that enables more disk drive shelves to be attached to supported Data Domain systems. Thus there is not a license fee; however, you do pay for the storage shelves and drives to increase the available storage capacity. Note that this feature increases storage capacity by adding more disk drives and does not increase the performance of the Data Domain system. Extended Retention has been available in the past, however it is now supported on more platform models. The extra storage capacity is essentially placed into a different tier into which an archive policy can then migrate data.

Boost for accelerating data movement to and from Data Domain systems is only available using Fibre Channel. When asked about FC over Ethernet (FCoE) or iSCSI, EMC indicated its customers are not asking for this ability yet. This has me wondering whether the current customer focus is around FC, whether those customers are not yet ready for iSCSI or FCoE, or whether, if iSCSI or FCoE support existed, more customers would ask for it.

With the new Data Domain protection storage systems EMC is claiming up to:

  • 4x faster performance than earlier models
  • 10x more scalable and 3x more backup/archive streams
  • 38 percent lower cost per GB based on holding price points and applying improvements


EMC Data Domain data protection storage platform family


Data Domain supporting both backup and archive

Expanding Data Domain from backup to archive

EMC continues to evolve the Data Domain platform from just being a backup target platform with dedupe and replication to a multi-function, multi-role solution. In other words, one platform with many uses. This is an example of using one tool or technology for different purposes, such as backup and archiving, however with separate policies. Here is a link to a video where I discuss using common tools for backup and archiving with separate policies. In the above figure EMC Data Domain is shown as being used for backup along with storage tiering and archiving (file, email, Sharepoint, content management and databases among other workloads).


EMC Data Domain supporting different functions and workloads

Also shown are various tools from other vendors such as Commvault Simpana that can be used as both a backup and archiving tool with Data Domain as a target. Likewise Dell products acquired via the Quest acquisition are shown, along with those from IBM (e.g. Tivoli) and FileTek among others. Note that if you are a competitor of EMC or simply a fan of other technology, you might conclude that the above is not so different from others. Then again, others who are not articulating their version or vision of something like the above figure should probably also be stating the obvious vs. arguing they did it first.

Data source integration (aka data protection software tools)

It seems like just yesterday that EMC acquired Avamar (2006) and NetWorker aka Legato (2003), not to mention Mozy (2007) and Dantz (Retrospect, 2004, since divested). With the exception of Dantz (Retrospect), which is now back in the hands of its original developers, EMC continues to enhance and evolve Avamar, Mozy and NetWorker, including with this announcement.

General Avamar 7 and Networker 8.1 enhancements include:

  • Deeper integration with primary storage and protection storage tiers
  • Optimization for VMware vSphere virtual server environments
  • Improved visibility and control for data protection of enterprise applications

Additional Avamar 7 enhancements include:

  • More Data Domain integration and leveraging as a repository (since Avamar 6)
  • NAS file systems with NDMP accelerator access (EMC Isilon & Celera, NetApp)
  • Data Domain Boost enhancements for faster backup / recovery
  • Application integration with IBM (DB2 and Notes), Microsoft (Exchange, Hyper-V images, Sharepoint, SQL Server), Oracle, SAP, Sybase, VMware images

Note that the Avamar Data Store is still used mainly for ROBO and desktop or laptop backup scenarios that do not yet support Data Domain. (Also see the Mozy enhancements below.)

Avamar supports VMware vSphere virtual server environments using granular change block tracking (CBT) technology as well as image level backup and recovery with vSphere plugins. This includes an Instant Access recovery when images are stored on Data Domain storage.

Instant Access enables a VM that has been protected using Avamar image level technology on Data Domain to be booted via an NFS VMware datastore. VMware sees the VM and is able to power it on and boot it directly from the Data Domain via the NFS datastore. Once the VM is active, it can be moved via Storage vMotion to a production VMware datastore while active (e.g. running), enabling recovery-on-the-fly capabilities.


Instant Access to a VM on Data Domain storage

EMC NetWorker 8.1 enhancements include:

  • Enhanced visibility and control for owners of data
  • Collaborative protection for Oracle environments
  • Synchronize backup and data protection between DBAs and backup admins
  • Oracle DBAs use native tools (e.g. RMAN)
  • Backup admin implements the organization's SLAs (e.g. using NetWorker)
  • Deeper integration with EMC primary storage (e.g. VMAX, VNX, etc)
  • Isilon integration support
  • Snapshot management (VMAX, VNX, RecoverPoint)
  • Automation and wizards for integration, discovery, simplified management
  • Policy-based management, fast recovery from snapshots
  • Integrating snapshots into and as part of data protection strategy. Note that this is more than basic snapshot management as there is also the ability to roll over a snapshot into a Data Domain protection storage tier.
  • Deeper integration with Data Domain protection storage tier
  • Data Domain Boost over Fibre Channel for faster backups and restores
  • Data Domain Virtual Synthetics to cut impact of full backups
  • Integration with Avamar for managing image level backup recovery (Avamar services embedded as part of NetWorker)
  • vSphere Web Client enabling self-service recovery of VMware images
  • Newly created VMs inherit backup policies automatically

Mozy is being positioned for enterprise remote office branch office (ROBO) or distributed private cloud where Avamar, NetWorker or Data Domain solutions are not as applicable. EMC has mentioned that they have over 800 enterprises using Mozy for desktop, laptop, ROBO and mobile data protection. Note that this is a different target market from the consumer-focused Mozy product, which also addresses smaller SMBs and SOHOs (Small Office Home Office).

EMC Mozy enhancements to be more enterprise grade:

  • Simplified management services and integration
  • Active Directory (AD) for Microsoft environments
  • New storage pools (multiple types of pools) vs. dedicated storage per client
  • Keyless activation for faster provisioning of backup clients

Note that earlier this year EMC enhanced Data Protection Advisor (DPA) with version 6.0.

What does this all mean?


Data protection and backup discussions often focus around tape summit resources or cloud arguments, although this is changing. What is changing is growing awareness and discussion around how data protection storage mediums, systems and services are used along with the associated software management tools.

Some will say backup is broken, often pointing a finger at a media or medium (e.g. tape and disk) as what is wrong. Granted in some environments the target medium (or media) destination is an easy culprit to point a finger at as the problem (e.g. the usual "tape sucks" or "tape is dead" mantra). However, for many environments, while there can be issues, more often than not it is not the media, medium, device or target storage system that is broken, but instead how it is being used or abused.

This means revisiting how tools are used along with media or storage systems allocated, used and retained with respect to different threat risk scenarios. After all, not everything is the same in the data center or information factory.

Thus modernizing data protection is more than swapping media or mediums including types of storage system from one to another. It is also more than swapping out one backup or data protection tool for another. Modernizing data protection means rethinking what different applications and data need to be protected against various threat risks.


What this has to do with today’s announcement is that EMC is among others in the industry moving toward a holistic data protection modernization model.

In my opinion what you are seeing out of EMC and some others is taking that step back and expanding the data protection conversation to revisit, rethink why, how, where, when and by whom applications and information get protected.

This announcement also ties into finding and removing costs vs. simply cutting cost at the cost of something elsewhere (e.g. service levels, performance, availability). In other words, finding and removing complexities or overhead associated with data protection while making it more effective.

Some closing points, thoughts and more links:

  • There is no such thing as a data or information recession
  • People and data are living longer while getting larger
  • Not everything is the same in the data center or information factory
  • Rethink data protection including when, why, how, where, with what and by whom
  • There is little data, big data, very big data and big fast data
  • Data protection modernization is more than playing buzzword bingo
  • Avoid using new technology in old ways
  • Data footprint reduction (DFR) can help counter changing data life-cycle patterns
  • EMC continues to leverage Avamar while keeping Networker relevant
  • Data Domain is evolving for both backup and archiving as an example of one tool for multiple uses

Ok, nuff said (for now).

Cheers gs


EMC Evolves Enterprise Data Protection with Enhancements (Part I)


A couple of months ago at EMCworld there were announcements around ViPR and Pivotal, along with trust and clouds among other topics. During the recent EMCworld event there were some questions among attendees about backup and data protection announcements (or the lack thereof).

Modernizing Data Protection

Today EMC announced enhancements to its Backup Recovery Solutions (BRS) portfolio (@EMCBackup) that continue to modernize data protection for information and applications, including Avamar, Data Domain, Mozy and NetWorker.

Keep in mind you can’t go forward if you can’t go back, which means if you do not have good data protection to go to, you can’t go forward with your information.

EMC Modern Data Protection Announcements

As part of their Backup to the Future event, EMC announced the following:

  • New generation of data protection products and technologies
  • Data Domain systems: enhanced application integration for backup and archive
  • Data protection suite tools Avamar 7 and Networker 8.1
  • Enhanced Cloud backup capabilities for the Mozy service
  • Paradigm shift as part of data protection modernizing including revisiting why, when, where, how, with what and by whom data protection is accomplished.

What did EMC announce for data protection modernization?

While much of the EMC data protection announcement is around product, there is also the aspect of rethinking data protection. This means looking at data protection modernization beyond swapping out media (e.g. tape for disk, disk for cloud) or one backup software tool for another. Instead, it means revisiting why data protection needs to be accomplished and by whom, how to remove complexity and cost, and how to enable agility and flexibility. This also means enabling data protection to be used or consumed as a service in traditional, virtual and private or hybrid cloud environments.

EMC uses as an example (what they refer to as Accidental Architecture) how there are different groups and areas of focus, along with silos, associated with data protection. These groups span virtual, applications, database, server and storage among others.

The results are silos that need to be transformed in part using new technology in new ways, as well as addressing a barrier to IT convergence (people and processes). The theme behind EMC data protection strategy is to enable the needs and requirements of various groups (servers, applications, database, compliance, storage, BC and DR) while removing complexity.

Moving from Silos of data protection to a converged service enabled model

Three data protection and backup focus areas

This sets the stage for the three components for enabling a converged data protection model that can be consumed or used as a service in traditional, virtual and private cloud environments.


EMC three components of modernized data protection (EMC Future Backup)

The three main components (and their associated solutions) of EMC BRS strategy are:

  • Data management services: Policy and storage management, SLA, SLO, monitoring, discovery and analysis. This is where tools such as EMC Data Protection Advisor (via the WysDM acquisition) fit, among others, for coordination or orchestration, setting and managing policies along with other activities.
  • Data source integration: Applications, Database, File systems, Operating System, Hypervisors and primary storage systems. This is where data movement tools such as Avamar and Networker among others fit along with interfaces to application tools such as Oracle RMAN.
  • Protection storage: Targets, destination storage systems with media or mediums optimized for protecting and preserving data along with enabling data footprint reduction (DFR). DFR includes functionality such as compression and dedupe among others. An example of data protection storage is EMC Data Domain.

Read more about product items announced and what this all means here in the second of this two-part series.

Ok, nuff said (for now).

Cheers gs


HDS Mid Summer Storage and Converged Compute Enhancements


Converged Compute, SSD Storage and Clouds

Hitachi Data Systems (HDS) announced today several enhancements to their data storage and unified compute portfolio as part of their Maximize I.T. initiative.

Setting the context

As part of setting the stage for this announcement, HDS presented the following strategy as part of their vision for IT transformation and cloud computing.

https://hds.com/solutions/it-strategies/maximize-it.html?WT.ac=us_hp_flash_r11

What was announced

This announcement builds on earlier ones around HDS Unified Storage (HUS) primary storage using NAND flash MLC Solid State Devices (SSDs) and Hard Disk Drives (HDDs), along with unified block and file (NAS), as well as the Unified Compute Platform (UCP), also known as converged compute, networking, storage and software. These enhancements follow recent updates to the HDS Content Platform (HCP) for object, file and content storage.

There are three main focus areas of the announcement:

  • Flash SSD storage enhancements for HUS
  • Unified with enhanced file (aka BlueArc based)
  • Enhanced unified compute (UCP)

HDS Flash SSD acceleration

The question should not be if SSD is in your future, rather when, where, with what and how much will be needed.

As part of this announcement, HDS is releasing an all-flash SSD based HUS enterprise storage system. Similar to what other vendors have done, HDS is attaching flash SSD storage to their HUS systems in place of HDDs. Hitachi developed their own SSD module, announced in 2012 (read more here). The HDS SSD modules use Multi Level Cell (MLC) NAND flash chips (dies) and now support 1.6TB of storage space capacity per module. This is different from other vendors who either use NAND flash SSD drive form factor devices (e.g. Intel, Micron, Samsung, SANdisk, Seagate, STEC (now WD), WD among others), PCIe form factor cards (e.g. FusionIO, Intel, LSI, Micron, Virident among others), or attach a third-party external SSD device (e.g. IBM/TMS, Violin, Whiptail etc.).

Like some other vendors, HDS has also done more than simply attach an SSD (drive, PCIe card, or external device) to their storage systems and call it an integrated solution. What this means is that HDS has implemented software or firmware changes in their storage systems to manage durability and extend flash duty cycles affected by program/erase (P/E) cycle wear. In addition HDS has implemented performance optimizations in their storage systems to leverage the faster SSD modules; after all, faster storage media or devices need fast storage systems or controllers.

While the new all flash storage system can be initially bought with just SSD, similar to other hybrid storage solutions, hard disk drives (HDD’s) can also be installed. For enabling full performance at low latency, HDS is addressing both the flash SSD modules as well as the storage systems they attach to including back-end, front-end and caching in-between.

The release enables 500,000 (half a million) IOPS, although no context was indicated (IOP size, reads or writes, random or sequential). A future firmware update (applied non-disruptively) is claimed by HDS to enable higher performance of 1,000,000 IOPS at under a millisecond.

In addition to future performance improvements, HDS is also indicating increased storage space capacity of its MLC flash SSD modules (1.6TB today). Using 12 modules (1.6TB each), 154TB of flash SSD can be placed in a single rack.

HDS File and Network Attached Storage (NAS)

HUS unified NAS file system and gateway (BlueArc based) enhancements include:

  • New platforms leveraging faster processors (both Intel and Field Programmable Gate Arrays (FPGA’s))
  • Common management and software tools from 3000 to new 4000 series
  • Bandwidth doubled with faster connections and more memory
  • Four 10GbE NAS serving ports (front-end)
  • Four 8Gb Fibre Channel ports (back-end)
  • FPGA leveraged for off-loading some dedupe functions (faster performance)

HDS Unified Compute Platform (UCP)

As part of this announcement, HDS is enhancing the Unified Compute Platform (UCP) offerings. HDS re-entered the compute market in 2012, joining other vendors offering unified compute, storage and networking solutions. The HDS converged data infrastructure competes with AMD (Seamicro) SM15000, Dell vStart and VRTX (for the lower end of the market), EMC and VCE vBlock, NetApp FlexPod, along with those from HP (including Moonshot micro servers), IBM PureSystems, Oracle and others.

UCP Pro for VMware vSphere

  • Turnkey converged solution (Compute, Networking, Storage, Software)
  • Includes VMware vSphere pre-installed (OEM from VMware)
  • Flexible compute blade options
  • Three storage system options (HUS, HUS VM and VSP)
  • Cisco and Brocade IP networking
  • UCP Director 3.0 with enhanced automation and orchestration software

UCP Select for Microsoft Private Cloud

  • Supports Hyper-V 3.0 server virtualization
  • Live migration with DR and resynch
  • Microsoft Fast Track certified

UCP Select for Oracle RAC

  • HDS Flash SSD storage
  • SMP x86 compute for performance
  • 2x improvement in IOPS at less than 1 millisecond
  • Common management with HiCommand suite
  • Integrated with Oracle RMAN and OVM

UCP Select for SAP HANA

  • Scale out to 8TBs memory (DRAM)
  • Tier 1 storage system certified for SAP HANA DR
  • Leverages SAP HANA SAP storage connector API

What does this all mean?


With these announcements HDS is extending its storage centric hardware, software and services solution portfolio for block, file and object access across different usage tiers (systems, applications, mediums). HDS is also expanding their converged unified compute platforms to stay competitive with others including Dell, EMC, Fujitsu, HP, IBM, NEC, NetApp and Oracle among others. For environments with HDS storage looking for converged solutions to support VMware, Microsoft Hyper-V, Oracle or SAP HANA these UCP systems are worth checking out as part of evaluating vendor offerings. Likewise for those who have HDS storage exploring SSD offerings, these announcements give opportunities to enable consolidation as do the unified file (NAS) offerings.

Note that HDS does not currently have a public, formalized message or story around PCIe flash cards; however they have relationships with various vendors as part of their UCP offerings.

Overall a good set of incremental enhancements for HDS to stay competitive and leverage their field proven capabilities including management software tools.

Ok, nuff said

Cheers gs


Upgrading Lenovo X1 Windows 7 with a Samsung 840 SSD


I recently upgraded my Lenovo X1 laptop from a Samsung 830 256GB Solid State Device (SSD) drive to a new Samsung 840 512GB SSD. The following are some perspectives, comments on my experience in using the Samsung SSD over the past year, along with what was involved in the upgrade.

Background

A little over a year ago I upgraded my then-new Lenovo X1, replacing the factory-supplied Hard Disk Drive (HDD) with a Solid State Device (SSD) upon its arrival. After setup and data migration, the 2.5” 7,200 RPM 320GB Toshiba HDD was cloned to a SATA 256GB Samsung model 830 SSD. By first setting up and configuring the system, copying files and applications, and going through Windows and other updates on the HDD, when it came time to clone to the SSD, the HDD effectively became a backup.

Note that prior to using the Samsung SSD in my Lenovo X1, I was using Hybrid HDD (HHDD’s) as my primary storage to boost read performance and space capacity. These were in addition to other external SSD and HDD that I used along with NAS devices. Read more about my HHDD experiences in a series of post here.

Fast forward to the present and it is time to do yet another upgrade, not because there is anything wrong with the Samsung SSD, but because I was running low on space capacity. Sure 256GB is a lot of space, however I had also become used to having a 500GB and 750GB HHDD before downsizing to the SSD. Granted some of the data I have on the SSD is more for convenience, as a cache or buffer when not connected to the network. Not to mention that if you use VMware Workstation for running various Virtual Machines (VMs), you know how those VMs can add up quickly, not to mention videos and other items.

Stack of HDD, HHDD and SSDs

Over the past year, my return on investment (ROI) and return on innovation (the new ROI) was as short as three months, or worst case about six months. That was based on the amount of time I did not have to spend waiting while saving data. Sure, I had some read and boot performance improvements, as well as being able to do more IOPS and other things. However those were not as significant as they could have been, since I had been using HHDDs, vs. if I had gone from HDD to SSD.

My productivity savings were 3 to 5 minutes per day when storing large files, documents, videos or other items as part of generating or working on content. Not to mention snapshots and other copy functions for HA, BC and DR taking less time, enabling more productivity vs. waiting.

Thus the ROI timeframe varies depending on what I value my time at for a particular project, among other things.

Sure IOPS are important, but so too is simple wall clock or stop watch based timing to measure work being done or time spent waiting.
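That wall-clock ROI math can be sketched out with a simple payback calculation. The dollar figures and drive cost below are hypothetical placeholders for illustration, not what I actually paid or bill:

```python
def payback_months(minutes_saved_per_day: float, hourly_rate: float,
                   work_days_per_month: int, upgrade_cost: float) -> float:
    """Months until time savings pay for an upgrade (simple, no discounting)."""
    monthly_value = (minutes_saved_per_day / 60.0) * hourly_rate * work_days_per_month
    return upgrade_cost / monthly_value

# Assumed values: 4 minutes saved per day, $100/hour for one's time,
# 22 work days per month, a $400 SSD.
months = payback_months(4, 100.0, 22, 400.0)
print(f"Payback in about {months:.1f} months")
```

With those assumptions the payback lands near three months, consistent with the range above; a pricier drive, fewer minutes saved or a lower rate stretches it toward six.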

Upgrade Time

While this was replacing one SSD with another, the same things and steps would apply if going from an HDD to SSD.

Before upgrade
Free space and storage utilization before the upgrade

Make sure that you have a good full and consistent backup copy of your data.

If it is enabled, disable BitLocker or other items that might interfere with the clone. Here is a post if you are interested in enabling Windows BitLocker on Windows 7 64-bit.

Run a quick cleanup, registry repair or other maintenance to make sure you have a good and consistent copy before cloning it.

Install any migration or clone software; in the past I have used Seagate DiscWizard (Acronis) along with the full Acronis product. This time I used the Samsung Data Migration tool powered by Clonix, which IMHO is an improvement vs. what they used to supply (Norton Ghost).

Shutdown Time

Attach the new drive. For this upgrade I removed the existing Samsung 830 SSD from its internal bay and replaced it with the new Samsung 840. The Samsung 830 was then attached to the Lenovo X1 laptop using a USB to SATA cable. Note that you could also do the opposite: attach the new drive using the USB to SATA cable for the clone operation, then install it into the internal drive bay, which would drop the need for changing the boot sequence.


Samsung 830, Samsung 840 and Lenovo X1


Old Samsung 830 removed, new 840 being installed


Samsung 840 goes in Lenovo X1, Samsung 830 with SATA to USB cable

Since I removed the old drive and attached it to the Lenovo X1 via a SATA to USB cable, with the new drive internal, I also had to change the boot sequence. Remember to change the boot sequence back after the upgrade is complete. On the other hand, if you leave the original drive installed internally and attach the new drive via a USB to SATA or eSATA to SATA cable for the clone, you do not need to change the boot sequence.


Changing the boot sequence; note that one SSD appears as USB due to the cable being used

Before running the data migration software, I disabled my network connection to make sure the system was isolated during the upgrade, and then ran the data migration software tool.


Samsung Data Migration tool (powered by Clonix Ltd.) during clone operation

Unlike tools such as Seagate DiscWizard based on Acronis, the Samsung tool based on Clonix does not shut down or perform the upgrade off-line. There is a tradeoff here that I observed: the Acronis shutdown approach, while off-line, seemed quicker, however that is subjective. The Samsung tool seemed to take longer, about 2.5 hours to clone 256GB to the 512GB drive, however I was still able to do things on the PC (such as making screen shots).
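For a rough sense of the effective clone rate those numbers imply (a sketch that assumes the full 256GB drive was copied and ignores background activity):

```python
def clone_rate_mb_s(data_gb: float, hours: float) -> float:
    """Effective throughput of a clone operation in MB/s."""
    return data_gb * 1024.0 / (hours * 3600.0)

# Roughly 256GB cloned in about 2.5 hours while the system stayed usable
rate = clone_rate_mb_s(256, 2.5)
print(f"Effective clone rate: ~{rate:.0f} MB/s")
```

That works out to roughly 29 MB/s, well below what SATA SSDs can sustain, which is part of the price of staying on-line and usable during the clone.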

Even though the Clonix powered Samsung data migration tool works on-line enabling things to be done, best to leave all applications shutdown.

Once the data migration tool is done and it says 100 percent complete, DO NOT DO ANYTHING until you see a prompt telling you to do something.

WAIT, as there are some background tasks that occur after you see 100 percent complete. Only when you see the prompt screen is it ok to move forward.

At that point, shut down Windows, remove the old drive, change back any boot sequence settings and reboot to verify all is ok.

Also, remember to turn BitLocker back on if needed.

Post Mortem

How is the new SSD drive running?

So far so good, as fast if not faster than the old one.


About a month after the upgrade and the space is being put to use.

How about the Samsung 830?

That is now being used for various things in my test lab environment joining other SSD, HHDD and HDDs supporting various physical and virtual server activities including in some testing as part of this series (watch for more in this series soon).

Ok, nuff said.

Cheers gs


June 2013 Server and StorageIO Update Newsletter


Welcome to the June 2013 edition of the StorageIO Update. In this edition coverage includes data center infrastructure management (DCIM), metrics that matter, industry trends, and IBM buying Softlayer for cloud, IaaS and managed services. Other items include backup and data protection topics for SMBs, as well as big data storage topics. Also, the EPA has announced a review session for Energy Star for Data Center storage where you can give your comments. Enjoy this edition of the StorageIO Update newsletter.

Click on the following links to view the June 2013 edition as an HTML (sent via email) version, or as a PDF version.

Visit the newsletter page to view previous editions of the StorageIO Update.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

Ok Nuff said, for now

Cheers
Gs


EPA Energy Star Data Center Storage Draft Specification review


For those of you interested in EPA Energy Star for Data Center Storage, here is an announcement for an upcoming conference call and review of the version 1.0 final draft specification.

There are a few attachments referenced in the following note from EPA that can be accessed here:

EPA_Version 1.0 Storage Final Draft Specification Cover Letter
EPA_Version 1.0 Storage Draft 4 Specification Comment Response Document
EPA_Version 1.0 Storage Final Draft Test Method
EPA_Version 1.0 Storage Final Draft Specification

Dear ENERGY STAR® Data Center Storage Partner or Other Interested Party:

Please see the attached important correspondence from the U.S. Environmental Protection Agency concerning the ENERGY STAR Version 1.0 Data Center Storage Final Draft Specification and Test Method. EPA will host a webinar on July 9, 2013 from 3:00 to 5:00 PM Eastern Time to discuss the documents with stakeholders. Please RSVP for the webinar to storage@energystar.gov no later than July 5, 2013.

Thank you for your continued support of the ENERGY STAR program.

For more information, visit: www.energystar.gov

This message was sent to you on behalf of ENERGY STAR. Each ENERGY STAR partner organization must have at least one primary contact receiving e-mail to maintain partnership. If you are no longer working on ENERGY STAR, and wish to be removed as a contact, please update your contact status in your MESA account. If you are not a partner organization and wish to opt out of receiving e-mails, you may call the ENERGY STAR Hotline at 1-888-782-7937 and request to have your mass mail settings changed. Unsubscribing means that you will no longer receive program-wide or product-specific e-mails from ENERGY STAR.

Ok, nuff said.

Cheers gs

IBM buys Softlayer, for software defined infrastructures and clouds?

IBM today announced that it is acquiring Softlayer, a privately held, Dallas, Texas-based Infrastructure as a Service (IaaS) provider.

IBM is referring to this as Cloud without Compromise (read more about clouds, conversations and confidence here).

It’s about the management, flexibility, scale up, out and down, agility and valueware.

Is this IBM’s new software defined data center (SDDC) or software defined infrastructure (SDI) or software defined management (SDM), software defined cloud (SDC) or software defined storage (SDS) play?

This is more than a software defined marketing or software defined buzzword announcement.
buzzword bingo

If your view of software defined ties into the theme of leveraging and unleashing resources, enablement, flexibility and agility of hardware, software or services, then you may see Softlayer as part of a software defined infrastructure.

On the other hand, if your views or opinions of what is or is not software defined align with a specific vendor, product, protocol, model or punditry, then you may not agree, particularly if it is in opposition to anything IBM.

Cloud building blocks

During today’s announcement briefing call with analysts there was a noticeable absence of software defined buzz talk, which given its hype and usage lately was a refreshing and welcome relief. So with that, let’s set the software defined conversation aside (for now).

Cloud image

Who is Softlayer, why is IBM interested in them?

Softlayer provides software and services to support SMB, SME and other environments with bare metal (think traditional hosted servers), along with multi-tenant (shared) public and private virtual cloud service offerings.

Softlayer supports various applications and environments, from little data processing to big data analytics, from social to mobile to legacy. This includes apps or environments that were born in the cloud, as well as legacy environments looking to leverage cloud in a complementary way.

Some more information about Softlayer includes:

  • Privately held IaaS firm founded in 2005
  • Estimated revenue run rate of around $400 million with 21,000 customers
  • Mix of SMB, SME and Web-based or born in the cloud customers
  • Over 100,000 devices under management
  • Provides a common, modularized management framework and set of tools
  • Mix of customers from Web startups to global enterprise
  • Presence in 13 data centers across the US, Asia and Europe
  • Automation, interoperability, and a large number of APIs accessible and supported
  • Flexibility, control and agility for physical (bare metal) and cloud or virtual
  • Public, private and data center to data center
  • Designed for scale, durability and resiliency without complexity
  • Part of OpenStack ecosystem both leveraging and supporting it
  • Ability for customers to use OpenStack, Cloudstack, Citrix, VMware, Microsoft and others
  • Can be white or private labeled for use as a service by VARs

What IBM is planning for Softlayer

Softlayer will report into IBM Global Technology Services (GTS), complementing existing capabilities which include ten cloud computing centers on five continents. IBM has created a new Cloud Services Division and expects cloud revenues could be $7 billion annually by the end of 2015. Amazon Web Services (AWS) is estimated to hit about $3.8 billion by the end of 2013. Note that in 2012 the AWS target available market was estimated to be about $11 billion, which should become larger moving forward. Rackspace by comparison had a recent earnings announcement on May 8, 2013 of $362 million for the quarter, with most of that being hosting vs. cloud services. That works out to an annualized estimated run rate of $1.448 billion (or better depending on growth).

I mention AWS and Rackspace to illustrate the growth potential for IBM and Softlayer to discuss the needs of both cloud services customers such as those who use AWS (among other providers), as well as bare metal or hosting or dedicated servers such as with Rackspace among others.
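As a back-of-the-envelope check, the annualized run-rate figures above come from simply multiplying a quarterly revenue number by four. A quick sketch, using the Rackspace figure cited in this post:

```python
# Simple run-rate arithmetic used above; figures are those cited in the post.

def annualized_run_rate(quarterly_revenue_musd):
    """Annualize a quarterly revenue figure (simple 4x run rate, ignores growth)."""
    return quarterly_revenue_musd * 4

# Rackspace reported $362 million for the quarter (May 8, 2013 announcement)
run_rate_musd = annualized_run_rate(362)
print(f"${run_rate_musd / 1000:.3f} billion")  # $1.448 billion
```

Keep in mind this is a floor estimate; any quarter-over-quarter growth pushes the actual annual number higher.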

What is not clear at this time is whether IBM is combining traditional hosting, managed services, and new offerings, products and services in that $7 billion number. In other words, if the $7 billion represents the revenues of the new Cloud Services Division independent of other GTS or legacy offerings, as well as excluding hardware and software products from STG (Systems Technology Group) among others, that would be impressive and a challenge to the likes of AWS.

IBM has indicated that it will leverage its existing Systems Technology Group (STG) portfolio of servers and storage extending the capabilities of Softlayer. While currently x86 based, one could expect IBM to leverage and add support for their Power systems line of processors and servers, Puresystems, as well as storage such as XIV or V7000 among others for tier 1 needs.

Some more notes:

  • Ties into IBM Smart Cloud initiatives, model and paradigm
  • This deal is expected to close in 3Q 2013; terms and price were not disclosed
  • Will enable Softlayer to be leveraged on a larger, broader basis by IBM
  • Gives IBM increased access to SMB, SME and web customers compared to the past
  • Software and development to stay part of Softlayer
  • Provides IBM an extra jumpstart play for supporting and leveraging OpenStack
  • Compatible with and supports CloudStack and Citrix, who are also IBM partners
  • Also compatible with and supports VMware, also an IBM partner

Some other thoughts and perspectives

This is a good and big move for IBM to add value and leverage their current portfolios of both services and products and technologies. However it is more than just adding value or finding new routes to market for those goods and services; it’s also about enablement. IBM has long been in the services business, including managed services, out- or in-sourcing and hosting. This can be seen as another incremental evolution of those offerings, both to existing IBM enterprise customers as well as to reach new and emerging customers, along with SMBs or SMEs that tend to grow up and become larger consumers of information and data infrastructure services.

Further this helps to add some product and meaning around the IBM Smart Cloud initiatives and programs (not that there was not before) giving customers, partners and resellers something tangible to see, feel, look at, touch and gain experience not to mention confidence with clouds.

On the other hand, is IBM signaling that they want more of the growing business that AWS has been realizing, not to mention Microsoft Azure, Rackspace, Centurylink/Savvis, Verizon/Terremark, CSC, HP Cloud, Cloudsigma and Bluehost among many others (if I missed you or your favorite provider, feel free to add it to the comments section)? This also gets IBM added DevOps exposure, something that Softlayer practices, as well as an OpenStack play, not to mention cloud, software defined, virtual, big data, little data, analytics and many other buzzword bingo terms.

Congratulations to both IBM and the Softlayer folks; now let’s see some execution and watch how this unfolds.

Ok, nuff said.

Cheers gs

Web chat Thur May 30th: Hot Storage Trends for 2013 (and beyond)

Join me on Thursday May 30, 2013 at Noon ET (9AM PT) for a live web chat at the 21st Century IT (21cit) site (click here to register, sign up, or view earlier posts). This will be an interactive online web chat format conversation, so if you are not able to attend, you can visit at your convenience to view and post your questions along with comments. I have done several of these web chats with 21cit as well as other venues; they are a lot of fun and engaging (time flies by fast).

For those not familiar, 21cIT is part of the DeusM/UBM family of sites including Internet Evolution, SMB Authority, and Enterprise Efficiency among others that I do article posts, videos and live chats for.


Sponsored by NetApp

I like these types of sites in that while they have a sponsor, the content is generally kept separate between that of editors and contributors like myself and the vendor-supplied material. In other words, I coordinate with the site editors on what topics I feel like writing (or doing videos) about that align with the given site's focus and themes, as opposed to following an advertorial calendar script.

During this industry trends perspective web chat, one of the topics and themes planned for discussion include software defined storage (SDS). View a recent video blog post I did here about SDS. In addition to SDS, Solid State Devices (SSD) including nand flash, cloud, virtualization, object, backup and data protection, performance, management tools among others are topics that will be put out on the virtual discussion table.

Following are some examples of recent and earlier industry trends perspectives posts that I have done over at 21cit:

Video: And Now, Software-Defined Storage!
There are many different views on what is or is not “software-defined” with products, protocols, preferences and even press releases. Check out the video and comments here.

Big Data and the Boston Marathon Investigation
How the human face of big-data will help investigators piece together all the evidence in the Boston bombing tragedy and bring those responsible to justice. Check out the post and comments here.

Don’t Use New Technologies in Old Ways
You can add new technologies to your data center infrastructure, but you won’t get the full benefit unless you update your approach with people, processes, and policies. Check out the post and comments here.

Don’t Let Clouds Scare You, Be Prepared
The idea of moving to cloud computing and cloud services can be scary, but it doesn’t have to be so if you prepare as you would for implementing any other IT tool. Check out the post and comments here.

Storage and IO trends for 2013 (& Beyond)
Efficiency, new media, data protection, and management are some of the keywords for the storage sector in 2013. Check out these and other trends, predictions along with comments here.

SSD and Real Estate: Location, Location, Location
You might be surprised how many similarities there are between buying real estate and buying SSDs.
Location matters, and it's not if, rather when, where, why and how you will be using SSD including nand flash in the future. Read more and view comments here.

Everything Is Not Equal in the Data center, Part 3
Here are steps you can take to give the right type of backup and protection to data and solutions, depending on the risks and scenarios they face. The result? Savings and efficiencies. Read more and view comments here.

Everything Is Not Equal in the Data center, Part 2
Your data center’s operations can be affected at various levels, by multiple factors, in a number of degrees. And, therefore, each scenario requires different responses. Read more and view comments here.

Everything Is Not Equal in the Data center, Part 1
It pays to check your data center. Different components need different levels of security, storage, and availability. Read more and view comments here.

Data Protection Modernizing: More Than Buzzword Bingo
IT professionals and solution providers should position technologies such as disk based backup, dedupe, cloud, and data protection management tools as assets and resources to make sure they receive necessary funding and buy-in. Read more and view comments here.

Don’t Take Your Server & Storage IO Pathing Software for Granted
Path managers are valuable resources. They will become even more useful as companies continue to carry out cloud and virtualization solutions. Read more and view comments here.

SSD Is in Your Future: Where, When & With What Are the Questions
During EMC World 2012, EMC (as have other vendors) made many announcements around flash solid-state devices (SSDs), underscoring the importance of SSDs to organizations' future storage needs. Read more here about why SSD is in your future, along with viewing comments.

Changing Life cycles and Data Footprint Reduction (DFR), Part 2
In the second part of this series, the ABCDs (Archive, Backup modernize, Compression, Dedupe and data management, storage tiering) of data footprint reduction, as well as SLOs, RTOs, and RPOs are discussed. Read more and view comments here.

Changing Life cycles and Data Footprint Reduction (DFR), Part 1
Web 2.0 and related data needs to stay online and readily accessible, creating storage challenges for many organizations that want to cut their data footprint. Read more and view comments here.

No Such Thing as an Information Recession
Data, even older information, must be protected and made accessible cost-effectively. Not to mention that people and data are living longer as well as getting larger. Read more and view comments here.

These real-time, industry trends perspective interactive chats at 21cit are open forum format (however be polite and civil) and free of vendor sales or marketing pitches. If you have specific questions you'd like to ask or points of view to express, click here and post them in the chat room at any time (before, during or after).

Mark your calendar for this event live Thursday, May 30, at noon ET or visit after the fact.

Ok, nuff said (for now)

Cheers gs

May 2013 Server and StorageIO Update Newsletter

StorageIO News Letter Image
May 2013 News letter

Welcome to the May 2013 edition of the StorageIO Update. This edition has announcement analysis of EMC ViPR and Software Defined Storage (including a video here), along with server, storage and I/O metrics that matter, for example how many IOPS a HDD can do (it depends). SSD including nand flash remains a popular topic, both in terms of industry adoption and customer deployment. Also included are my perspectives on SSD vendor FusionIO's CEO leaving in a flash. Speaking of nand flash, have you thought about how some RAID implementations and configurations can extend the life and durability of SSDs? More on this soon; however, check out this video to give you some perspectives.

Click on the following links to view the May 2013 edition as an HTML (sent via email) version, or as a PDF version.

Visit the newsletter page to view previous editions of the StorageIO Update.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

Ok Nuff said, for now

Cheers
Gs

Part II: How many IOPS can a HDD HHDD SSD do with VMware?

How many IOPS can a HDD HHDD SSD do with VMware?

Updated 2/10/2018

This is the second post of a two-part series looking at storage performance, specifically in the context of drive or device (e.g. media) characteristics and how many IOPS a HDD, HHDD or SSD can do with VMware. In the first post the focus was on putting some context around drive or device performance, with this second part looking at some workload characteristics (e.g. benchmarks).

A common question is how many IOPS (IO Operations Per Second) can a storage device or system do?

The answer is or should be it depends.

Here are some examples to give you some more insight.

For example, the following shows how IOPS vary by changing the percent of reads, writes, random and sequential for a 4K (4,096 bytes or 4 KBytes) IO size with each test step (4 minutes each).

IO Size   Workload Pattern      Avg. Resp (R+W) ms   Avg. IOPS (R+W)   Bandwidth KB/sec (R+W)
4KB       100% Seq 100% Read    0.0                  29,736            118,944
4KB       60% Seq 100% Read     4.2                  236               947
4KB       30% Seq 100% Read     7.1                  140               563
4KB       0% Seq 100% Read      10.0                 100               400
4KB       100% Seq 60% Read     3.4                  293               1,174
4KB       60% Seq 60% Read      7.2                  138               554
4KB       30% Seq 60% Read      9.1                  109               439
4KB       0% Seq 60% Read       10.9                 91                366
4KB       100% Seq 30% Read     5.9                  168               675
4KB       60% Seq 30% Read      9.1                  109               439
4KB       30% Seq 30% Read      10.7                 93                373
4KB       0% Seq 30% Read       11.5                 86                346
4KB       100% Seq 0% Read      8.4                  118               474
4KB       60% Seq 0% Read       13.0                 76                307
4KB       30% Seq 0% Read       11.6                 86                344
4KB       0% Seq 0% Read        12.1                 82                330

Dell/Western Digital (WD) 1TB 7200 RPM SATA HDD (Raw IO) thread count 1 4K IO size

In the above example the drive is a 1TB 7200 RPM 3.5 inch Dell (Western Digital) 3Gb SATA device doing raw (non file system) IO. Note the high IOP rate with 100 percent sequential reads and a small IO size, which might be a result of locality of reference due to drive-level cache or buffering.

Some drives have larger buffers than others, from a couple of MB up to 16MB (or more) of DRAM that can be used for read-ahead caching. Note that this level of cache is independent of a storage system, RAID adapter or controller, or other forms and levels of buffering.

Does this mean you can expect or plan on getting those levels of performance?

I would not make that assumption, and thus this serves as an example of using metrics like these in the proper context.
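One way to keep such numbers in context is to check their internal consistency. With a single worker (one outstanding IO), IOPS is bounded by roughly 1000 divided by the average response time in milliseconds, and bandwidth is simply IOPS times the IO size. A quick sketch checking the 4KB fully random read row above:

```python
# Sanity checks for single-worker (queue depth 1) results like the table above:
# IOPS ~= 1000 / avg response time (ms); bandwidth (KB/sec) = IOPS x IO size (KB).

def iops_from_latency(avg_resp_ms):
    """Approximate IOPS with one outstanding IO, given avg response time in ms."""
    return 1000.0 / avg_resp_ms

def bandwidth_kb_sec(iops, io_size_kb):
    """Bandwidth in KB/sec for a given IOPS rate and IO size in KB."""
    return iops * io_size_kb

# 4KB, 0% sequential, 100% read row: 10.0 ms average response time
iops = iops_from_latency(10.0)
print(round(iops), round(bandwidth_kb_sec(iops, 4)))  # 100 400, matching the table
```

The same arithmetic explains why the sequential read rows blow past what a 7200 RPM mechanism can do on its own: sub-millisecond response times mean the IOs are being satisfied from the drive's DRAM buffer rather than the media.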

Building off of the previous example, the following is using the same drive however with a 16K IO size.

IO Size   Workload Pattern      Avg. Resp (R+W) ms   Avg. IOPS (R+W)   Bandwidth KB/sec (R+W)
16KB      100% Seq 100% Read    0.1                  7,658             122,537
16KB      60% Seq 100% Read     4.7                  210               3,370
16KB      30% Seq 100% Read     7.7                  130               2,080
16KB      0% Seq 100% Read      10.1                 98                1,580
16KB      100% Seq 60% Read     3.5                  282               4,522
16KB      60% Seq 60% Read      7.7                  130               2,090
16KB      30% Seq 60% Read      9.3                  107               1,715
16KB      0% Seq 60% Read       11.1                 90                1,443
16KB      100% Seq 30% Read     6.0                  165               2,644
16KB      60% Seq 30% Read      9.2                  109               1,745
16KB      30% Seq 30% Read      11.0                 90                1,450
16KB      0% Seq 30% Read       11.7                 85                1,364
16KB      100% Seq 0% Read      8.5                  117               1,874
16KB      60% Seq 0% Read       10.9                 92                1,472
16KB      30% Seq 0% Read       11.8                 84                1,353
16KB      0% Seq 0% Read        12.2                 81                1,310

Dell/Western Digital (WD) 1TB 7200 RPM SATA HDD (Raw IO) thread count 1 16K IO size

The previous two examples are excerpts of a series of workload simulation tests (ok, you can call them benchmarks) that I have done to collect information, as well as try some different things out.

The following is an example of the summary for each test output that includes the IO size, workload pattern (reads, writes, random, sequential), duration for each workload step, totals for reads and writes, along with averages including IOP’s, bandwidth and latency or response time.

disk iops
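The averages in a summary like the one above can be derived from the per-step totals. The field names below are hypothetical (the actual tool output differs), however the arithmetic is generic:

```python
# How per-step averages can be derived from totals. The record layout here is
# hypothetical, not the actual test tool's output format.

def summarize_step(total_ios, total_kb, duration_sec, total_resp_ms):
    """Return (avg IOPS, avg bandwidth KB/sec, avg response time ms) for a step."""
    avg_iops = total_ios / duration_sec
    avg_bw_kb_sec = total_kb / duration_sec
    avg_resp_ms = total_resp_ms / total_ios  # mean latency across all IOs
    return avg_iops, avg_bw_kb_sec, avg_resp_ms

# Example: a 4 minute (240 second) step completing 24,000 4KB IOs
print(summarize_step(total_ios=24_000, total_kb=96_000,
                     duration_sec=240, total_resp_ms=240_000))
# (100.0, 400.0, 10.0)
```

Note that reads and writes would normally be totaled separately and then combined for the R+W columns shown in the tables earlier.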

Want to see more numbers, speeds and feeds, check out the following table which will be updated with extra results as they become available.

Device   Vendor    Make          Model          Form Factor   Capacity   Interface   RPM Speed
HDD      HGST      Desktop       HK250-160      2.5           160GB      SATA        5.4K
HDD      Seagate   Mobile        ST2000LM003    2.5           2TB        SATA        5.4K
HDD      Fujitsu   Desktop       MHWZ160BH      2.5           160GB      SATA        7.2K
HDD      Seagate   Momentus      ST9160823AS    2.5           160GB      SATA        7.2K
HDD      Seagate   MomentusXT    ST95005620AS   2.5           500GB      SATA        7.2K(1)
HDD      Seagate   Barracuda     ST3500320AS    3.5           500GB      SATA        7.2K
HDD      WD/Dell   Enterprise    WD1003FBYX     3.5           1TB        SATA        7.2K
HDD      Seagate   Barracuda     ST3000DM01     3.5           3TB        SATA        7.2K
HDD      Seagate   Desktop       ST4000DM000    3.5           4TB        SATA        HDD
HDD      Seagate   Capacity      ST6000NM00     3.5           6TB        SATA        HDD
HDD      Seagate   Capacity      ST6000NM00     3.5           6TB        12GSAS      HDD
HDD      Seagate   Savio 10K.3   ST9300603SS    2.5           300GB      SAS         10K
HDD      Seagate   Cheetah       ST3146855SS    3.5           146GB      SAS         15K
HDD      Seagate   Savio 15K.2   ST9146852SS    2.5           146GB      SAS         15K
HDD      Seagate   Ent. 15K      ST600MP0003    2.5           600GB      SAS         15K
SSHD     Seagate   Ent. Turbo    ST600MX0004    2.5           600GB      SAS         SSHD
SSD      Samsung   840 Pro       MZ-7PD256      2.5           256GB      SATA        SSD
HDD      Seagate   600 SSD       ST480HM000     2.5           480GB      SATA        SSD
SSD      Seagate   1200 SSD      ST400FM0073    2.5           400GB      12GSAS      SSD

Performance characteristics 1 worker (thread count) for RAW IO (non-file system)

Note: (1) Seagate Momentus XT is a Hybrid Hard Disk Drive (HHDD) based on a 7.2K 2.5 inch HDD with SLC nand flash integrated as a read buffer in addition to the normal DRAM buffer. This model is an XT I (4GB SLC nand flash); results for an XT II (8GB SLC nand flash) may be added at some future time.

As a starting point, these results are raw IO, with file system based information to be added soon along with more devices. These results are for tests with one worker or thread count; other results, such as with 16 workers or thread counts, will be added to show how those differ.

The above results include all reads, all writes, a mix of reads and writes, along with all random, sequential and mixed for each IO size. IO sizes include 4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K, 1024K and 2048K. As with any workload simulation, benchmark or comparison test, take these results with a grain of salt as your mileage can and will vary. For example you will see some of what I consider very high IO rates with sequential reads even without file system buffering. These results might be due to locality of reference of IOs being resolved out of the drive's DRAM cache (read ahead), which varies in size for different devices. Use the vendor model numbers in the table above to check the manufacturer's specs on drive DRAM and other attributes.

If you are used to seeing 4K or 8K and wonder why anybody would be interested in some of the larger sizes, take a look at big fast data or cloud and object storage. For some of those applications 2048K may not seem all that big. Likewise if you are used to the larger sizes, there are still applications doing smaller sizes. Sorry for those who like 512 byte or smaller IOs, as they are not included. Note that for all of these, unless indicated, a 512 byte standard sector or drive format is used as opposed to the emerging Advanced Format (AF) 4KB sector or block size. Watch for some more drive and device types to be added to the above, along with results for more workers or thread counts, along with file system and other scenarios.

Using VMware as part of a Server, Storage and IO (aka StorageIO) test platform

vmware vexpert

The above performance results were generated on Ubuntu 12.04 (since upgraded to 14.04) hosted on a purchased version of VMware vSphere 5.1 (since upgraded to 5.5U2) with vCenter enabled (you can get the ESXi free version here). I also have VMware Workstation installed on some of my Windows-based laptops for doing preliminary testing of scripts and other activity prior to running them on the larger server-based VMware environment. Other VMware tools include vCenter Converter, vSphere Client and CLI. Note that other guest virtual machines (VMs) were idle during the tests (e.g. other guest VMs were quiet). You may experience different results if you ran Ubuntu native on a physical machine or with different adapters, processors and device configurations, among many other variables (that was a disclaimer btw ;) ).

All of the devices (HDD, HHDD and SSDs, including those not shown or published yet) were Raw Device Mapped (RDM) to the Ubuntu VM, bypassing the VMware file system.

Example of creating an RDM for local SAS or SATA direct attached device.

vmkfstools -z /vmfs/devices/disks/naa.600605b0005f125018e923064cc17e7c /vmfs/volumes/dat1/RDM_ST1500Z110S6M5.vmdk

The above uses the drive's address (found by doing an ls -l /dev/disks via the VMware shell command line) to create a vmdk container stored in a datastore. Note that the RDM being created does not actually store data in the .vmdk; it's there for VMware management operations.

If you are not familiar with how to create a RDM of a local SAS or SATA device, check out this post to learn how. This is important to note in that while VMware was used as a platform to support the guest operating systems (e.g. Ubuntu or Windows), the real devices are not being mapped through or via VMware virtual drives.

vmware iops

The above shows examples of RDM SAS and SATA devices along with other VMware devices and datastores. In the next figure is an example of a workload being run in the test environment.

vmware iops

One of the advantages of using VMware (or another hypervisor) with RDMs is that I can quickly define via software commands where a device gets attached to different operating systems (e.g. the other aspect of software defined storage). This means that after a test run, I can simply shut down Ubuntu, remove the RDM device from that guest's settings, move the device just tested to a Windows guest if needed, and restart those VMs. All of that from wherever I happen to be working, without physically changing things or dealing with multi-boot or cabling issues.

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

So how many IOPS can a device do?

That depends; however, have a look at the above information and results.

Check back from time to time here to see what is new or has been added including more drives, devices and other related themes.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.