Fall 2013 StorageIO Update Newsletter

Welcome to the Fall 2013 (joint September and October) edition of the StorageIO Update newsletter, containing trends and perspectives on cloud, virtualization and data infrastructure topics. It is fall (at least here in North America), which means conferences, symposia, virtual and physical events, seminars and webinars in addition to normal client project activities. The season of activity kicked off with VMworld in San Francisco back in late August, followed by many other events, both in-person and on-line such as webinars and Google+ hangouts, not to mention all the briefings for vendor product announcements and updates. Check out the industry trends perspectives articles, comments and blog posts below that cover some of the activity over the past few months.

VMworld 2013
Congratulations to VMworld on the 10th anniversary of the event. With the largest installment yet of a VMworld in terms of attendance, there were also many announcements. Here is a synopsis of some of those announcements, which of course included plenty of software defined marketing (SDM).

CMG and Storage Performance
During mid-September I was invited to give an industry trends and perspectives presentation to the Storage Performance Council (SPC) board. The SPC board was meeting in the Minneapolis area, and I gave a brief talk about Metrics that Matter and the importance of context with a focus on applications. Speaking of the Minneapolis area, Tom Becchetti (@tbecchetti) organized a great CMG event hosted over at Blue Cross Blue Shield of Minnesota. I gave a discussion around Technolutionary (technology evolution and revolution): using old and new things in new ways.

Check out our backup, restore, BC, DR and archiving page (under the resources section on StorageIO.com) for various presentations, book chapter downloads and other content.

SNW Fall 2013 Long Beach
Talking about traveling, there was a quick trip out to Long Beach for the fall 2013 edition of Storage Networking World (SNW), where I had some good meetings and conversations with those who were actually there. No need to sugar coat it, likewise no need to kick sand in its face. Plain and simple, that SNW is not the event it used to be has been a common discussion theme for several years, so I had set my expectations accordingly.

Some have asked me why I even spent the time, money and resources to attend SNW.

My answer is that I had some meetings to attend to, wanted to see and meet with others who were going to be there, and perhaps even say goodbye to an event that I have been involved with for over a decade.

Does that mean I’m all done with SNW?

Not sure yet, as I will have to wait and see what SNIA and IDG/Computerworld (the event co-owners and producers) put together for future events. However, there are enough other events and activities to pick up the slack, which is part of what has caused the steady decline of events like SNW among others.

Perhaps it is time for SNIA to partner with another adjacent yet like-minded organization such as CMG to collaborate and try doing something like what was done in the early 2000s? That is, SNIA providing their own seminars along with others such as myself who are involved with CMG, SNW and SNIA to beef up or set up a storage and I/O focused track at the CMG event.

Beyond those items mentioned above, or in the following section, there are plenty of interesting and exciting things occurring in the background that I can't talk about yet. However, watch for future posts, commentary, perspectives and other information down the road (and in the not so distant future).

Enjoy this edition of the StorageIO Update newsletter.

Ok, nuff said (for now)

Cheers gs

Industry trends perspectives and commentary
What is being seen, heard and talked about while out and about

The following is a synopsis of some StorageIOblog posts, articles and comments in different venues on various industry trends, perspectives and related themes about clouds, virtualization, data and storage infrastructure topics.

InfoStor: Perspectives on Data Dynamics file migration tool (Read more about StorageX later in this newsletter)
SearchStorage: Perspectives on Data Dynamics resurrects StorageX for file migration
SearchStorage: Perspectives on Cisco buying SSD storage vendor Whiptail

Recent StorageIO Tips and Articles in various venues:

21cIT:  Why You Should Consider Object Storage
InfoStor:  HDDs Are Still Spinning (Rust Never Sleeps)
21cIT:  Object Storage Is in Your Future, Even if You Use Files
21cIT:  Playing the Name Game With Virtual Storage
InfoStor:  Flash Data Storage: Myth vs. Reality
InfoStor:  The Nand Flash Cache SSD Cash Dance
SearchEnterpriseWAN:  Remote Office / ROBO backup and data protection for networking Pro’s
TheVirtualizationPractice:  When and Where to use NAND Flash SSD for Virtual Servers
FedTech:  These Data Center (DCIM) Tools Can Streamline Computing Resources

Recent StorageIO blog posts:

Seagate Kinetic Cloud and Object Storage I/O platform (and Ethernet HDD)
Cloud conversations: Has Nirvanix shutdown caused cloud confidence concerns?
Cisco buys Whiptail continuing the SSD storage I/O flash cash cache dash
WD buys nand flash SSD storage I/O cache vendor Virident
EMC New VNX MCx doing more storage I/O work vs. just being more
Is more of something always better? Depends on what you are doing
VMworld 2013 Vmware, server, storage I/O and networking update (Day 1)
EMC ViPR software defined object storage part II

Check out our objectstoragecenter.com page where you will find a growing collection of information and links pertaining to cloud and object storage themes, technologies and trends.

Brouwer Storage Consultancy

StorageIO in Europe (Netherlands)
I spent over a week in the Netherlands where I presented three different seminar workshop sessions organized by Brouwer Storage Consultancy, which is celebrating its 10th anniversary in business. These sessions spanned five full days of interactive discussions with an engaged, diverse group of attendees in the Nijkerk area who came from across Holland to take part in these workshops.

Congratulations to Gert and Frank Brouwer on their ten years of being in business and best wishes for many more. FWIW, for those who are curious, StorageIO will be ten years young in business in about two years.

Some observations from my time in Europe:

Continued cloud privacy concerns, amplified by the NSA and suspicion of US-based companies, yet many are not aware of similar concerns about European or UK-based firms from those outside those areas. While there were some cloud concern conversations over the demise of Nirvanix, those seemed less so than in the media or the US, given that at least in Holland they have already seen other cloud and storage as a service firms come and go. It should be noted that the US has also seen cloud and storage as a service startups come and go; however, I think we, or at least the media, tend to have a short if not selective memory at times.

In one of our workshop sessions we were talking about service level objectives (SLO), service level agreements (SLA), recovery point objectives (RPO) and recovery time objectives (RTO) among other themes. Somebody asked why RPO is focused on time and questioned why not a transactional perspective, which I thought was a brilliant question. We had a good conversation in the group and concurred that while RPO is what the industry uses, there also needs to be a transactional state context tied to what is inferred or assumed with RPO and RTO. Thus the importance of looking beyond just a point in time to a given transactional point or state.

Note that transactional could mean a database, file system, backup or data protection index or catalog, metadata repository or other entity. This is where some should be jumping up and down like Donkey in Shrek wanting to point out that is exactly what RTO and RPO refer to, which would be great. However, all too often what is assumed is not conveyed; thus those who don't know, well, they assume or simply don't know what others assume.
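
To make the time vs. transaction distinction concrete, here is a minimal sketch in Python (hypothetical names, not tied to any particular product or standard) that describes a recovery point both ways: as a point in time and as a transactional marker such as a committed transaction id.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class RecoveryPoint:
        timestamp: datetime        # point-in-time view: when the last usable copy was taken
        last_committed_txn: int    # transactional view: last transaction known to be consistent

    def rpo_seconds(rp: RecoveryPoint, failure_time: datetime) -> float:
        # Classic RPO: the potential data loss window expressed as elapsed time.
        return (failure_time - rp.timestamp).total_seconds()

    def rpo_transactions(rp: RecoveryPoint, last_txn_at_failure: int) -> int:
        # Transactional RPO: how many committed transactions would be lost.
        return last_txn_at_failure - rp.last_committed_txn

    # Example: a copy taken 15 minutes before a failure, at transaction 120,000
    rp = RecoveryPoint(datetime(2013, 10, 1, 12, 0), 120_000)
    print(rpo_seconds(rp, datetime(2013, 10, 1, 12, 15)))  # 900.0 seconds
    print(rpo_transactions(rp, 123_500))                   # 3,500 transactions at risk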

Data Dynamics StorageX 7.0 Intelligent Policy Based File Data Migration – There is no such thing as a data or information recession. Likewise, people and data are living longer as well as getting larger. These span various use cases from traditional to personal or at-work productivity, from little to big data content, and from collaboration including file or document sharing to rich media applications, all of which leverage unstructured data. Examples include email, word processing and back-office documents, web and text files, presentations (e.g. PowerPoint), photos, audio and video among others. These macro trends result in the continued growth of unstructured Network Attached Storage (NAS) file data.

Thus, a common theme is adding management, including automated data movement and migration, to bring structure around unstructured NAS file data. More than a data mover or storage migration tool, Data Dynamics StorageX is a software platform for adding storage management structure around unstructured local and distributed NAS file data. This includes heterogeneous vendor support across different storage systems, protocols and tools including Windows CIFS and Unix/Linux NFS.
(Disclosure: Data Dynamics has been a StorageIO client.) Visit Data Dynamics at www.datadynamicsinc.com/

Server and StorageIO seminars, conferences, webcasts, events and other activities (out and about)

Seminars, symposium, conferences, webinars
Live in person and recorded recent and upcoming events

Announcing: Backup.U brought to you by Dell

Some on-line (live and recorded) events have included an ongoing series tied to data protection (backup/restore, HA, BC, DR and archiving) called Backup.U, organized and sponsored by Dell Data Protection Software, which you can learn more about at the landing page www.software.dell.com/backupu (more on this in a future post). In addition to data protection, there were some other events and activities including a BrightTalk webinar on storage I/O and networking for cloud environments (here).

In addition to the above, check out the StorageIO calendar to see more recent and upcoming activities.

Watch for more 2013 events to be added soon to the StorageIO events calendar page. Topics include data protection modernization (backup/restore, HA, BC, DR, archive), data footprint reduction (archive, compression, dedupe), storage optimization, SSD, object storage, server and storage virtualization, big data, little data, cloud and object storage, performance and management trends among others.

Vendors, VAR’s and event organizers, give us a call or send an email to discuss having us involved in your upcoming pod cast, web cast, virtual seminar, conference or other events.

If you missed the Summer (July and August) 2013 StorageIO Update newsletter, click here to view that and other previous editions as HTML or PDF versions.

Click here to subscribe to this newsletter (and pass it along). View archives of past StorageIO Update newsletters as well as download PDF versions at: www.storageio.com/newsletter

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Is more of something always better? Depends on what you are doing

As with many things it depends, however how about some of these?

Is more better for example (among others):

  • Facebook likes
  • Twitter followers or tweets (I’m @storageio btw)
  • Google+ likes, follows and hangouts
  • More smart phone apps
  • LinkedIn connections
  • People in your circle or community
  • Photos or images per post or article
  • People working with or for you
  • Partners vs. doing more with those you have
  • People you are working for or with
  • Posts or longer posts with more in them
  • IOPs or SSD and storage performance
  • Domains under management and supported
  • GB/TB/PB/EB supported or under management
  • Part-time jobs or a better full-time opportunity
  • Metrics vs. those that matter with context
  • Programmers to get job done (aka mythical man month)
  • Lines of code per cost vs. more reliable and tested code per cost
  • For free items and time spent managing them vs. more productivity for a nominal fee
  • Meetings for planning on what to do vs. streamline and being more productive
  • More sponsors or advertisers or underwriters vs. fewer yet more effective ones
  • Space in your booth or stand at a trade show or conference vs. using what you have more effectively
  • Copies of the same data vs. fewer yet more unique (not full though) copies of information
  • Patents in your portfolio vs. more technology and solutions being delivered
  • Processors, sockets, cores, threads vs. using them more effectively
  • Ports and protocols vs. using them more effectively

Thus, does having more resources matter, or does making more effective use of them?

For example, more ports, protocols, processors, cores, sockets, threads, memory, cache, drives, bandwidth or people among other things is not always better, particularly if those resources are not being used effectively.

Likewise don’t confuse effective with efficient often assumed to mean used.

For example, a cache or memory may be 100% used (what some call efficient) yet only provide a 35% effective benefit (cache hits) vs. cache churn (misses, etc.).
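
As a rough illustration of that utilization vs. effectiveness distinction, here is a minimal Python sketch (generic arithmetic with made-up numbers, not from any particular product) that separates how full a cache is from how much work it actually offloads:

    def cache_utilization(bytes_used: int, bytes_total: int) -> float:
        # Utilization (what some call efficiency) only says how full the cache is.
        return bytes_used / bytes_total

    def cache_effectiveness(hits: int, misses: int) -> float:
        # Effectiveness is the work the cache actually offloads:
        # the fraction of requests answered from cache (hit ratio).
        total = hits + misses
        return hits / total if total else 0.0

    # A cache can be 100% full yet only serve 35% of requests.
    print(cache_utilization(64 * 2**30, 64 * 2**30))   # 1.0  -> 100% used
    print(cache_effectiveness(hits=350, misses=650))   # 0.35 -> 35% effective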

Throwing more processing power at a problem in terms of clock speed or cores is one thing; it is kind of like throwing more server blades at a software problem vs. using those cores, sockets and threads more effectively.

Good software will run better on fast hardware while enabling more to be done with the same or less.

Thus with better software or tools, more work can be done in an effective way leveraging those resources vs. simply throwing or applying more at the situation.

Hopefully you get the point, so no need to do more with this post (for now), if not, stay tuned and pay more attention around you.

Ok, nuff said, I need to go get more work done now.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Inaugural episode of the SSD Show podcast at Myce.com

The other day I was invited by Jeremy Reynolds and J.W. Aldershoff to be a guest on the Inaugural episode of their new SSD Show podcast (click here to learn more or listen in).

Many different facets or faces of nand flash SSD and SSHD or HHDD

With this first episode we discuss the latest developments in and around the solid-state device (SSD) and related storage industry, from consumer to enterprise, hardware and software, along with hands-on experience and insight on products, trends, technologies and techniques. In this first podcast we discuss Solid State Hybrid Disks (SSHDs) aka Hybrid Hard Disk Drives (HHDDs) with flash (read about some of my SSD and HHDD/SSHD hands-on personal experiences here), the state of NAND memory (also here about nand DIMMs), the market and SSD pricing.

I had a lot of fun doing this first episode with Jeremy and hope to be invited back to do some more, following up on themes we discussed along with new ones in future episodes. One question remains after the podcast: will I convince Jeremy to get a Twitter account? Stay tuned!

Check out the new SSD Show podcast here.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

Summer 2013 Server and StorageIO Update Newsletter

StorageIO 2013 Summer Newsletter

Cloud, Virtualization, SSD, Data Protection, Storage I/O

Welcome to the Summer 2013 (combined July and August) edition of the StorageIO Update (newsletter) containing trends perspectives on cloud, virtualization and data infrastructure topics.

This summer has been far from quiet on the mergers and acquisitions (M&A) front, with Western Digital (WD) continuing its buying spree including Stec among others. There are also the HDS Mid Summer Storage and Converged Compute Enhancements and EMC Evolves Enterprise Data Protection with Enhancements (Part I and Part II).

With VMworld just around the corner along with many other upcoming events, watch for more announcements to be covered in future editions and on StorageIOblog as we move into fall.

Click on the following links to view the Summer 2013 edition as an HTML (sent via email) version or as a PDF version. Visit the newsletter page to view previous editions of the StorageIO Update.

You can subscribe to the newsletter by clicking here.

Enjoy this edition of the StorageIO Update newsletter, and let me know your comments and feedback.

Ok Nuff said, for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

IBM Server Side Storage I/O SSD Flash Cache Software

As I often say, the best server storage I/O or IOP is the one that you do not have to do. The second best storage I/O or IOP is the one with the least impact or that can be done in a cost-effective way. Likewise, the question is not if solid-state devices (SSD) including nand flash are in your future, rather when, where, why, with what, how much, along with from whom. Also, location matters when it comes to SSD including nand flash, with different environments and applications leveraging different placement (locality) options, not to mention how much performance do you need vs. want?

As part of their $1 billion USD (to be spent over three years, or roughly $333 million per year) flash ahead initiative, IBM has announced their Flash Cache Storage Accelerator (FCSA) server software. While IBM did not use the term (congratulations and thank you btw), some creative marketer might want to try calling this Software Defined Cache (SDC) or Software Defined SSD (SDSSD); if that occurs, apologies in advance ;). Keep in mind that it was about a year ago this time when IBM announced that they were acquiring SSD industry veteran Texas Memory Systems (TMS).

What was announced, introducing Flash Cache Storage Acceleration or FCSA

With this announcement of FCSA, slated for customer general availability by the end of August, IBM joins EMC and NetApp among other storage systems vendors who have developed their own, or collaborated on, server-side IO optimization and cache software. Some of the other startup and established vendors who have IO optimization, performance acceleration and caching software include DataRam (Ramdisk), FusionIO, Infinio (NFS for VMware), Pernix (block for VMware), Proximal and SANdisk (which bought Flashsoft) among others.

Read more about IBM Flash Cache Software (FCSA) including various questions and perspectives in part two of this two-part post located here.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Part II: IBM Server Side Storage I/O SSD Flash Cache Software

Part II IBM Server Flash Cache Storage I/O accelerator for SSD

This is the second in a two-part post series on IBM’s Flash Cache Storage Accelerator (FCSA) for Solid State Device (SSD) storage announced today. You can view part I of the IBM FCSA announcement synopsis here.

Some FCSA SSD cache questions and perspectives

What is FCSA?
FCSA is a server-side storage I/O or IOP caching software tool that makes use of local (server-side) nand flash SSD (PCIe cards or drives). As a cache tool (view the IBM flash site here) FCSA provides persistent read caching on IBM servers (xSeries, Flex and Blade x86 based systems) with write-through cache (e.g. data cached for later reads) while write data is written directly to block attached storage including SANs. Back-end storage can be iSCSI, SAS, FC or FCoE based block systems from IBM or others, including all SSD, hybrid SSD or traditional HDD based solutions.

How is this different from just using a dedicated PCIe nand flash SSD card?
FCSA complements those by using them as persistent storage to cache storage I/O reads to boost performance. By using the PCIe nand flash card or SSD drives, FCSA and other storage I/O cache optimization tools free up valuable server-side DRAM from having to be used as a read cache on the servers. At the same time, caching tools such as FCSA keep locally cached reads closer to the applications on the servers (e.g. locality of reference), reducing the impact on back-end shared block storage systems.

What is FCSA for?
With storage I/O or IOPS and application performance in general, location matters due to locality of reference, hence the need for different approaches in various environments. IBM FCSA is a storage I/O caching software technology that reduces the impact of applications having to do random read operations. In addition to caching reads, FCSA also has a write-through cache, which means that while data is written to back-end block storage, including iSCSI, SAS, FC or FCoE based storage (IBM or other vendors), a copy of the data is cached for later reads. Thus while the best storage I/O is the one that does not have to be done (e.g. can be resolved from cache), the second best would be writes that go to a storage system without competing with read requests (handled via cache).
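
To show the general pattern being described, here is a minimal sketch of a generic write-through read cache in Python. This illustrates the technique, not IBM's FCSA implementation; the flash_cache and backend objects (with get/put and read/write methods) are hypothetical stand-ins for a local flash device and shared block storage.

    class WriteThroughReadCache:
        """Generic write-through read cache: reads are served from local flash
        when possible; writes always go to back-end block storage, with a copy
        kept in cache for later reads."""

        def __init__(self, flash_cache, backend):
            self.cache = flash_cache    # local server-side nand flash (hypothetical object)
            self.backend = backend      # shared block storage (e.g. iSCSI/SAS/FC/FCoE)

        def read(self, lba):
            data = self.cache.get(lba)
            if data is not None:
                return data                   # best I/O: resolved locally, no trip to the SAN
            data = self.backend.read(lba)     # cache miss: go to back-end storage
            self.cache.put(lba, data)         # populate cache for future reads
            return data

        def write(self, lba, data):
            self.backend.write(lba, data)     # write-through: back-end always updated
            self.cache.put(lba, data)         # keep a copy so later reads can hit cache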

Who else is doing this?
This is similar to what EMC initially announced and released in February 2012 with VFCache (since renamed XtremSW), along with other caching and IO optimization software from others (e.g. SANdisk, Proximal and Pernix among others).

Does this replace IBM EasyTier?
Simple answer is no, one is for tiering (e.g. EasyTier), the other is for IO caching and optimization (e.g. FCSA).

Does this replace or compete with other IBM SSD technologies?
As with anything, it is possible to find a way to make or view it as competitive. However, in general FCSA complements other IBM storage I/O optimization and management software tools such as EasyTier, as well as leverages and coexists with their various SSD products (from PCIe cards to drives to drive shelves to all SSD and hybrid SSD solutions).

How does FCSA work?
The FCSA software works either in a physical machine (PM) bare metal mode with Microsoft Windows operating systems (OS) such as Server 2008 and 2012 among others (there is also *nix support for Red Hat Linux), or in a VMware virtual machine (VM) environment. In a VMware environment, High Availability (HA), DRS and VMotion services and capabilities are supported. Hopefully it will be sooner vs. later that we hear IBM do a follow-up announcement (pure speculation and wishful thinking) on support for more hypervisors (e.g. Hyper-V, Xen, KVM) along with CentOS, Ubuntu or Power based systems including IBM pSeries. Read more about IBM Pure and Flex systems here.

What about server CPU and DRAM overhead?
As should be expected, a minimal amount of server DRAM (e.g. main memory) and CPU processing cycles are used to support the FCSA software and its drivers. The reason I say "as should be expected" is: how can you have software running on a server doing any type of work that does not need some amount of DRAM and processing cycles? Granted, some vendors will try to spin and say that there is no server-side DRAM or CPU consumed, which would only be true if they are completely external to the server (VM or PM). The important thing is to understand how much of an impact in terms of CPU and DRAM is consumed, along with the corresponding effectiveness benefit that is derived.

Does FCSA work with NAS (NFS or CIFS) back-end storage?
No this is a server-side block only cache solution. However having said that, if your applications or server are presenting shared storage to others (e.g. out the front-end) as NAS (NFS, CIFS, HDFS) using block storage (back-end), then FCSA can cache the storage I/O going to those back-end block devices.

Is this an appliance?
Short and simple answer is no, however I would not be surprised to hear some creative software defined marketer try to spin it as a flash cache software appliance. What this means is that FCSA is simply IO and storage optimization software for caching to boost read performance for VM and PM servers.

What does this hardware or storage agnostic stuff mean?
Simple, it means that FCSA can work with various nand flash PCIe cards or flash SSD drives installed in servers, as well as with various back-end block storage including SAN from IBM or others. This includes being able to use block storage using iSCSI, SAS, FC or FCoE attached storage.

What is the difference between Easytier and FCSA?
Simple: FCSA provides read acceleration via caching, which in turn should offload some reads from storage systems so that they can focus on handling writes or read-ahead operations. EasyTier, on the other hand, is (as its name implies) for tiering or movement of data in a more deterministic fashion.

How do you get FCSA?
It is software that you buy from IBM that runs on an IBM x86 based server. It is licensed on a per server basis including one-year service and support. IBM has also indicated that they have volume or multiple servers based licensing options.

Does this mean IBM is competing with other software based IO optimization and cache tool vendors?
IBM is focusing on selling and adding value to their server solutions. Thus while you can buy the software from IBM for their servers (e.g. no bundling required), you cannot buy the software to run on your AMD/SeaMicro, Cisco (including EMC/VCE and NetApp), Dell, Fujitsu, HDS, HP, Lenovo, Oracle or SuperMicro among other vendors' servers.

Will this work on non-IBM servers?
IBM is only supporting FCSA on IBM x86 based servers; however, you can buy the software without having to buy a solution bundle (e.g. servers or storage).

What is this Cooperative Caching stuff?
Cooperative caching takes the next step beyond a simple read cache with write-through to also support cache coherency in a shared environment, as well as leverage tighter integration between applications or guest operating systems and storage systems. For example, applications can work with storage systems to make intelligent, predictive, informed decisions on what to pre-fetch or read ahead and cache, as well as enable cache warming on restart. Another example is where, in a shared storage environment, if one server makes a change to a shared LUN or volume, the local server-side caches on the other servers are also updated to prevent stale or inconsistent reads from occurring.
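
As a rough sketch of that last example (shared LUN change notification), here is a generic Python illustration of cache invalidation between peers. This is not any vendor's implementation; the node, peer and backend objects and their method names are hypothetical.

    class CooperativeCacheNode:
        """Each server keeps a local read cache of shared LUN blocks and
        invalidates entries when a peer announces a write to the same block."""

        def __init__(self, node_id, peers):
            self.node_id = node_id
            self.peers = peers          # other nodes sharing the same LUN
            self.local_cache = {}       # lba -> cached data

        def write(self, backend, lba, data):
            backend.write(lba, data)    # write-through to the shared back-end LUN
            self.local_cache[lba] = data
            for peer in self.peers:     # tell peers their cached copy is now stale
                peer.invalidate(lba)

        def invalidate(self, lba):
            # Drop the stale entry; the next read will refetch from the back-end.
            self.local_cache.pop(lba, None)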

Can FCSA use multiple nand flash SSD devices on the same server?
Yes, IBM FCSA supports use of multiple server-side PCIe and or drive based SSD devices.

How is cache coherency maintained including during a reboot?
While data stored in the nand flash SSD device is persistent, it’s up to the server and applications working with the storage systems to decide if there is coherent or stale data that needs to be refreshed. Likewise, since FCSA is server-side and back-end storage system or SAN agnostic, without cooperative caching it will not know if the underlying data for a storage volume changed without being notified from another server that modified it. Thus if using shared back-end including SAN storage, do your due diligence to make sure multi-host access to the same LUN’s or volumes is being coordinated with some server-side software to support cache coherency, something that would apply to all vendors.

What about cache warming or reloading of the read cache?
Some vendors have tightly integrated caching software and storage systems, something IBM refers to as cooperative caching, which can have the ability to re-warm the cache. With solutions that support cache re-warming, the cache software and storage systems work together to maintain cache coherency while pre-loading data from the underlying storage system based on hot bands or other profiles and experience. As of this announcement, FCSA does not support cache warming on its own.

Does IBM have service or tools to complement FCSA?
Yes, IBM has assessment, profile and planning tools that are available on a free consultation basis with a technician to check your environment. Of course, the next logical step would be for IBM to make the tools available via free download or on some other basis as well.

Do I recommend and have I tried FCSA?
On paper, or via WebEx, YouTube or other venues, FCSA looks interesting and capable, a good fit for some environments, particularly if they are IBM server-based. However, since my PM and VMware VM based servers are from other vendors, and FCSA only runs on IBM servers, I have not actually given it a hands-on test drive yet. Thus if you are looking at storage I/O optimization and caching software tools for your VM or PM environment, check out IBM FCSA to see if it meets your needs.

General comments

It is great to see server and storage systems vendors add value to their solutions with I/O and performance optimization as well as caching software tools. However, I am also concerned with the growing numbers of different software tools that only work with one vendor’s servers or storage systems, or at least are supported as such.

This reminds me of a time not all that long ago (ok, for some longer than others) when we had a proliferation of different host bus adapter (HBA) drivers and pathing drivers from various vendors. The result was a hodge podge (a technical term) of software running on different operating systems, hypervisors, PMs, VMs and storage systems, all of which needed to be managed. On the other hand, for the time being perhaps the benefit will outweigh the pain of having different tools. That is where there are options from server-side vendor centric, storage system focused, or third-party software tool providers.

Another consideration is that some tools only work in VMware environments; others support multiple hypervisors, while others also support bare metal servers or PMs. Which applies to your environment will of course depend. After all, if you are an all-VMware environment, given that many of the caching tools tend to be VMware focused, you have more options vs. those who are still predominately PM environments.

Ok, nuff said (for now)

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Viking SATADIMM: Nand flash SATA SSD in DDR3 DIMM slot?

Today computer and data storage memory vendor Viking announced that SSD vendor Solidfire has deployed their SATADIMM modules in DDR3 DIMM (e.g. Random Access Memory (RAM) main memory) slots of their SF SSD based storage solution.

SolidFire SF solution with SATADIMM via Viking

Nand flash SATA SSD in a DDR3 DIMM slot?

Per Viking, SolidFire uses the SATADIMM as boot devices and cache to complement the normal SSD drives used in their SF SSD storage grid or cluster. For those not familiar, SolidFire SF storage systems or appliances are based on industry standard servers that are populated with SSD devices, which in turn are interconnected with other nodes (servers) to create a grid or cluster of SSD performance and space capacity. Thus as nodes are added, more performance, availability and capacity are added as well, all of which is accessed via iSCSI. Learn more about SolidFire SF solutions on their website here.

Here is the press release that Viking put out today:

Viking Technology SATADIMM Increases SSD Capacity in SolidFire’s Storage System (Press Release)

Viking Technology’s SATADIMM enables higher total SSD capacity for SolidFire systems, offering cloud infrastructure providers an optimized and more powerful solution

FOOTHILL RANCH, Calif., August 12, 2013 – Viking Technology, an industry leading supplier of Solid State Drives (SSDs), Non-Volatile Dual In-line Memory Module (NVDIMMs), and DRAM, today announced that SolidFire has selected its SATADIMM SSD as both the cache SSD and boot volume SSD for their storage nodes. Viking Technology’s SATADIMM SSD enables SolidFire to offer enhanced products by increasing both the number and the total capacity of SSDs in their solution.

“The Viking SATADIMM gives us an additional SSD within the chassis allowing us to dedicate more drives towards storage capacity, while storing boot and metadata information securely inside the system,” says Adam Carter, Director of Product Management at SolidFire. “Viking’s SATADIMM technology is unique in the market and an important part of our hardware design.”

SATADIMM is an enterprise-class SSD in a Dual In-line Memory Module (DIMM) form factor that resides within any empty DDR3 DIMM socket. The drive enables SSD caching and boot capabilities without using a hard disk drive bay. The integration of Viking Technology’s SATADIMM not only boosts overall system performance but allows SolidFire to minimize potential human errors associated with data center management, such as accidentally removing a boot or cache drive when replacing an adjacent failed drive.

“We are excited to support SolidFire with an optimal solid state solution that delivers increased value to their customers compared to traditional SSDs,” says Adrian Proctor, VP of Marketing, Viking Technology. “SATADIMM is a solid state drive that takes advantage of existing empty DDR3 sockets and provides a valuable increase in both performance and capacity.”

SATADIMM is a 6Gb SATA SSD with capacities up to 512GB. A next generation SAS solution with capacities of 1TB & 2TB will be available early in 2014. For more information, visit our website www.vikingtechnology.com or email us at sales@vikingtechnology.com.

Sales information is available at: www.vikingtechnology.com, via email at sales@vikingtechnology.com or by calling (949) 643-7255.

About Viking Technology Viking Technology is recognized as a leader in NVDIMM technology. Supporting a broad range of memory solutions that bridge DRAM and SSD, Viking delivers solutions to OEMs in the enterprise, high-performance computing, industrial and the telecommunications markets. Viking Technology is a division of Sanmina Corporation (Nasdaq: SANM), a leading Electronics Manufacturing Services (EMS) provider. More information is available at www.vikingtechnology.com.

About SolidFire SolidFire is the market leader in high-performance data storage systems designed for large-scale public and private cloud infrastructure. Leveraging an all-flash scale-out architecture with patented volume-level quality of service (QoS) control, providers can now guarantee storage performance to thousands of applications within a shared infrastructure. In-line data reduction techniques along with system-wide automation are fueling new block-storage services and advancing the way the world uses the cloud.

What’s inside the press release

On the surface this might cause some to jump to the conclusion that the nand flash SSD is being accessed via the fast memory bus normally used for DRAM (e.g. main memory) of a server or storage system controller. For some this might even cause a jump to the conclusion that Viking has figured out a way to use nand flash for reads and writes not only via a DDR3 DIMM memory location, but also doing so with the Serial ATA (SATA) protocol, enabling server boot and use by any operating system or hypervisor (e.g. VMware vSphere or ESXi, Microsoft Hyper-V, Xen or KVM among others).

Note for those not familiar or needing a refresh on DRAM, DIMM and related items, here is an excerpt from Chapter 7 (Servers – Physical, Virtual and Software) from my book "The Green and Virtual Data Center" (CRC Press).

7.2.2 Memory

Computers rely on some form of memory ranging from internal registers, local on-board processor Level 1 (L1) and Level 2 (L2) caches, random accessible memory (RAM), non-volatile RAM (NVRAM) or Flash along with external disk storage. Memory, which includes external disk storage, is used for storing operating system software along with associated tools or utilities, application programs and data. Read more of the excerpt here…

Is SATADIMM memory bus nand flash SSD storage?

In short no.

Some vendors or their surrogates might be tempted to spin such a story by masking some details to allow your imagination to run wild a bit. When I saw the press release announcement I reached out to Tinh Ngo (Director of Marketing Communications) over at Viking with some questions. I was expecting the usual marketing spin story, dancing around the questions with long answers, or simply not responding with anything of substance (or that requires some substance to believe). Again, what I found was the opposite and thus I want to share with you some of the questions and answers.

So what actually is SATADIMM? See for yourself in the following image (click on it to view or Viking site).

Via Viking website, click on image or here to learn more about SATADIMM

Does SATADIMM actually move data via DDR3 and memory bus? No, SATADIMM only draws power from it (yes nand flash does need power when in use contrary to a myth I was told about).

Wait, then how is data moved and how does it get to and through the SATA IO stack (hardware and software)?

Simple, there is a cable connector attached to the SATADIMM that in turn attaches to an internal SATA port. Or, using a different connector cable, attach the SATADIMM (up to four) to a standard internal SAS port such as on a main board, HBA, RAID or caching adapter.

Does that mean that Viking and whoever uses SATADIMM are not actually moving data or implementing SATA via the memory bus and DDR3 DIMM sockets? That would be correct; data movement occurs via a cable connection to standard SATA or SAS ports.

Wait, why would I give up a DDR3 DIMM socket in my server that could be used for more DRAM? Great question, and the answer should be: it depends on whether you need more DRAM or more nand flash. If you are out of drive slots or PCIe card slots and have enough DRAM for your needs along with available DDR3 slots, you can stuff more nand flash into those locations, assuming you have SAS or SATA connectivity.

SATADIMM with SATA connector top right via Viking

SATADIMM SATA connector via Viking

SATADIMM SAS (Internal) connector via Viking

Why not just use the onboard USB ports and plug in some high-capacity USB thumb drives to cut cost? If that is your primary objective it would probably work, and I can also think of some other ways to cut cost. However, those are also probably not the primary tenets that people looking to deploy something like SATADIMM would be looking for.

What are the storage capacities that can be placed on the SATADIMM? They are available in different sizes up to 400GB for SLC and 480GB for MLC. Viking indicated that there are larger capacities and faster 12Gb SAS interfaces in the works which would be more of a surprise if there were not. Learn more about current product specifications here.

Good questions. Attached are three images that sort of illustrate the connector. As well, why not a USB drive? Well, there are customers that put 12 of these in the system (each with up to 480GB usable capacity), which equates to roughly an added 5.7TB inside the box without touching the drive bays (left for mass HDDs). You will then need to RAID/connect all the SATADIMMs via an HBA.

How fast is the SATADIMM, and does putting it into a DDR3 slot speed things up or slow them down? Viking has some basic performance information on their site (here). However, it should generally be the same as or similar to a SAS or SATA SSD drive, although keep SSD metrics and performance in the proper context. Also keep in mind that the DDR3 DIMM slot is only being used for power and not for actual data movement.

Is the SATADIMM using 3Gbs or 6Gbs SATA? Good question; today it is 6Gb SATA (remember that SATA can attach to a SAS port, however not vice versa). Let's see if Viking responds in the comments with more, including RAID support (hardware or software) along with other insight such as UNMAP, TRIM and Advanced Format (AF) 4KByte blocks among other things.

Have I actually tried SATADIMM yet? No, not yet. However, I would like to give it a test drive and workout if one were to show up on my doorstep (along with disclosure), and share the results if applicable.

Future of nand flash in DRAM DIMM sockets

Keep in mind that someday nand flash will actually be seen not only in a WebEx or PowerPoint demo preso (e.g. similar to what Diablo Technology is previewing), but also in real use, for example what Micron earlier this year predicted for flash on DDR4 (more on DDR3 vs. DDR4 here).

Is SATADIMM the best nand flash SSD approach for every solution or environment? No, however it does give some interesting options for those who are PCIe card, or HDD and SSD drive slot constrained and also have available DDR3 DIMM sockets. As to price, check with Viking; wish I could say tell them Greg from StorageIO sent you for a good value, however not sure what they would say or do.

Related more reading:
How much storage performance do you want vs. need?
Can RAID extend the life of nand flash SSD?
Can we get a side of context with them IOPS and other storage metrics?
SSD & Real Estate: Location, Location, Location
What is the best kind of IO? The one you do not have to do
SSD, flash and DRAM, DejaVu or something new?

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)

twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Server and Storage IO Memory: DRAM and nand flash

DRAM, DIMM, DDR3, nand flash memory, SSD, stating what’s often assumed

Often what’s assumed is not always the case. For example in along with around server, storage and IO networking circles including virtual as well as cloud environments terms such as nand (Negated AND or NOT And) flash memory aka (Solid State Device or SSD), DRAM (Dynamic Random Access Memory), DDR3 (Double Data Rate 3) not to mention DIMM (Dual Inline Memory Module) get tossed around with the assumption everybody must know what they mean.

On the other hand, I find plenty of people who are not sure what those among other terms or things are; sometimes they are even embarrassed to ask, particularly if they are a self-proclaimed expert.

So for those who need a refresh or primer, here you go, an excerpt from Chapter 7 (Servers – Physical, Virtual and Software) from my book "The Green and Virtual Data Center" (CRC Press) available at Amazon.com and other global venues in print and ebook formats.

7.2.2 Memory

Computers rely on some form of memory ranging from internal registers, local on-board processor Level 1 (L1) and Level 2 (L2) caches, random accessible memory (RAM), non-volatile RAM (NVRAM) or nand Flash (SSD) along with external disk storage. Memory, which includes external disk storage, is used for storing operating system software along with associated tools or utilities, application programs and data. Main memory or RAM, also known as dynamic RAM (DRAM) chips, is packaged in different ways with a common form being dual inline memory modules (DIMMs) for notebook or laptop, desktop PC and servers.

RAM main memory on a server is the fastest form of memory, second only to internal processor or chip based registers, L1, L2 or local memory. RAM and processor based memories are volatile and non-persistent in that when power is removed, the contents of memory are lost. As a result, some form of persistent memory is needed to keep programs and data when power is removed. Read only memory (ROM) and NVRAM are both persistent forms of memory in that their contents are not lost when power is removed. The amount of RAM that can be installed into a server will vary with specific architecture implementation and operating software being used. In addition to memory capacity and packaging format, the speed of memory is also important to be able to move data and programs quickly to avoid internal bottlenecks. Memory bandwidth performance increases with the width of the memory bus in bits and frequency in MHz. For example, moving 8 bytes on a 64 bit bus in parallel at the same time at 100MHz provides a theoretical 800MByte/sec speed.
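
As a quick worked example of that bandwidth arithmetic (generic math, not tied to any specific memory product), a short Python sketch:

    def theoretical_bandwidth_mb_s(bus_width_bits: int, transfer_rate_mhz: int) -> float:
        # Bytes moved per transfer, times millions of transfers per second.
        bytes_per_transfer = bus_width_bits / 8
        return bytes_per_transfer * transfer_rate_mhz    # MByte/sec

    print(theoretical_bandwidth_mb_s(64, 100))    # 800.0 MByte/sec, as in the text
    print(theoretical_bandwidth_mb_s(64, 1600))   # 12800.0 MByte/sec (e.g. a DDR3-1600 peak)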

To improve availability and increase the level of persistence, some servers include battery backed up RAM or cache to protect data in the event of a power loss. Another technique to protect memory data on some servers is memory mirroring where twice the amount of memory is installed and divided into two groups. Each group of memory has a copy of data being stored so that in the event of a memory failure beyond those correctable with standard parity and error correction code (ECC) no data is lost. In addition to being fast, RAM based memories are also more expensive and used in smaller quantities compared to external persistent memories such as magnetic hard disk drives, magnetic tape or optical based memory medias.

Memory and Storage Pyramid

The above shows a tiered memory model that may look familiar, as the bottom part is often expanded to show tiered storage. At the top of the memory pyramid is high-speed processor memory, followed by RAM, ROM, NVRAM and FLASH, along with many forms of external memory commonly called storage. More detail about tiered storage is covered in chapter 8 (Data Storage – Disk, Tape, Optical, and Memory). In addition to being slower and lower cost than RAM based memories, disk storage along with NVRAM and FLASH based memory devices are also persistent.

By being persistent, when power is removed, data is retained on the storage or memory device. Also shown in the above figure is that, on a relative basis, less energy is used to power storage or memory at the bottom of the pyramid than at the upper levels where performance increases. From a PCFE (Power, Cooling, Floor space, Economic) perspective, balancing memory and storage performance, availability, capacity and energy to a given function, quality of service and service level objective for a given cost needs to be kept in perspective, rather than simply considering the lowest cost for the most amount of memory or storage. In addition to gauging memory on capacity, other metrics include percent used, operating system page faults and page read/write operations, along with memory swap activity as well as memory errors.

Base 2 versus base 10 numbering systems can account for some storage capacity that appears to be "missing" when real storage is compared to what is expected to be seen. Disk drive manufacturers use base 10 (decimal) to count bytes of data, while memory chip, server and operating system vendors typically use base 2 (binary) to count bytes of data. This has led to confusion when comparing a disk drive base 10 GB with a chip memory base 2 GB of memory capacity, such as 1,000,000,000 (10^9) bytes versus 1,073,741,824 (2^30) bytes. Nomenclature based on the International System of Units uses MiB, GiB and TiB to denote the base 2 million, billion and trillion byte multiples, with base 10 using MB, GB and TB. Most vendors do document how many bytes, sometimes in both base 2 and base 10, as well as the number of 512 byte sectors supported on their storage devices and storage systems, though it might be in the small print.
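
As a quick worked example of that base 2 vs. base 10 gap (generic arithmetic only):

    # A "1 TB" drive marketed in base 10 vs. what an OS reporting in base 2 (TiB) shows
    marketed_bytes = 1 * 10**12            # 1 TB decimal = 1,000,000,000,000 bytes
    reported_tib = marketed_bytes / 2**40  # divide by one binary terabyte (TiB)

    print(reported_tib)       # ~0.909 TiB, the seemingly "missing" ~9%
    print(2**30 - 10**9)      # 73,741,824 bytes of difference per "GB" vs. GiB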

Related more reading:
How much storage performance do you want vs. need?
Can RAID extend the life of nand flash SSD?
Can we get a side of context with them IOPS and other storage metrics?
SSD & Real Estate: Location, Location, Location
What is the best kind of IO? The one you do not have to do
SSD, flash and DRAM, DejaVu or something new?

Ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier).

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

How much storage performance do you want vs. need?

How much storage I/O performance do you want vs. need?

The answer to how much storage I/O performance you need vs. want probably depends on cost, on which applications are involved, and on the benefit, among other things.

View Part II: How many IOPS can a HDD, HHDD or SSD do with VMware?

I did a piece over at 21cit titled Parsing the Need for Speed in Storage that looks at those and other related themes including metrics that matter across tiered storage.

Here is an excerpt:

Can storage speed be too fast? Or, put another way, how do you decide the return on investment or innovation from the financial resources you spend on storage and the various technologies that go into storage performance?

Think about it: Fast storage needs fast servers, IO and networking interfaces, software, firmware, hypervisors, operating systems, drivers, and a file system or database, along with applications. Then there are the other buzzword bingo technologies that are also factors, among them fast storage DRAM and flash Solid State Devices (SSD).

Some questions to ask about storage I/O performance include among others:

  • How do response time, latency, and think or wait-times affect your environment and applications?
  • Do you know the location of your storage or data center performance bottlenecks?
  • If you remove bottlenecks in storage systems or appliances as well as in the data path, how will your application or the CPU in the server it runs on behave?
  • If your application server is currently showing high CPU due to the system overhead of having to wait for storage I/Os, you may see a positive improvement.
  • If more real work can be done now, will all of the components be ready to support each other without creating a new bottleneck?
  • Also speaking of storage I/O performance, how about can we get a side of context with them IOPs and other metrics that matter!

So how about it: how much performance do you want vs. need for primary, secondary, backup, cloud or virtual storage?

Ok, nuff said for now.

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Can RAID extend the life of nand flash SSD?

Can RAID extend nand flash SSD life?

Imho, the short answer is YES, under some circumstances.

There is a myth and some FUD that RAID (Redundant Array of Independent Disks) can shorten the life or durability of nand flash SSD (Solid State Devices) vs. HDDs (Hard Disk Drives) due to extra IOPs. The reality is that depending on how it is configured, the RAID level, the implementation and other factors, the life of nand flash SSD can actually be extended, as I discuss in this video.

Nand flash SSD cells and wear

First, there is a myth that because nand flash SSD does not have moving parts like hard disk drives (HDDs), it does not wear out or break. That is just a myth, in that nand flash by its nature wears out with write usage. This is due to how it stores data in cells that have a rated number of program/erase (P/E) cycles that varies by type of medium. For example, Single Level Cell (SLC) has a longer P/E life duration vs. Multi-Level Cell (MLC) and eMLC, which store multiple bits per cell.

There are a number of factors that contribute to nand flash wear, also known as duty cycle or durability, tied to P/E cycles. For example, some storage systems or controllers do a better job at the lower level flash translation layer (FTL), in addition to controllers, firmware, caching using DRAM and IO optimization such as write ordering or grouping.

Now what about this RAID and SSD thing?

Ok, first as a recap, keep in mind that there are many RAID levels along with variations, enhancements and differences in where or how they are implemented, ranging from software to hardware, and from adapters to controllers to storage systems.

In the case of RAID 1 or mirroring, just like replication or any other one-to-one or one-to-many copy operation, a write to one device is echoed to another. In the case of RAID 5, data and parity are spread across drives, with the parity rotated across all drives in an equal manner.

Where some FUD, myths or misunderstandings come into play is that not all RAID 5 implementations, as an example, are the same. Some do a better job of buffering or caching data in battery-protected mirrored DRAM memory until a full stripe write can occur, or if needed, a partial write.

Another attribute is the chunk or shard size (how much data is sent to each drive member) along with the stripe width (how many drives). Some systems have narrow stripes of say 3+1, 4+1 or 5+1, while others can be 14+1 or 15+1 or wider. Thus, data can be written across a wider number of drives, reducing the P/E consumption or use of any single drive depending on implementation.
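
Here is a rough back-of-the-envelope Python sketch of that point (simplified assumptions: full stripe writes only, rotated parity, no controller caching or flash write amplification effects), comparing how stripe width changes the write load each drive member sees:

    def stripe_write_profile(data_drives: int, parity_drives: int):
        total_drives = data_drives + parity_drives
        # Parity overhead: physical chunks written per chunk of user data.
        amplification = total_drives / data_drives
        # With rotated parity that physical load is spread evenly over all
        # members, so each drive sees this fraction of the user bytes as writes.
        per_drive_share = amplification / total_drives   # equals 1 / data_drives
        return amplification, per_drive_share

    print(stripe_write_profile(3, 1))    # narrow RAID 5 (3+1):  ~1.33x, ~0.33 per drive
    print(stripe_write_profile(15, 1))   # wide RAID 5 (15+1):   ~1.07x, ~0.067 per drive
    print(stripe_write_profile(14, 2))   # wide RAID 6 (14+2):   ~1.14x, ~0.071 per drive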

How about RAID 6 (dual parity)?

Same thing, it is a matter of how well the implementation is, how the write gathering is done and so forth.

What about RAID wearing out nand flash SSD?

While it is possible that it has or can occur depending on type of RAID implementation, lack of caching or optimization, configuration, type of SSD, RAID level and other things, in general I will say myth busted.

Want some proof?

I could go through a long technical proof point citing lots of facts, figures and experts, leaving you all silenced and dazed similar to the students listening to Ben Stein in Ferris Bueller's Day Off (click here to see what I mean) asking "anybody… anybody… Bueller?"

Ben Stein via https://nostagjicmoviesandthings.blogspot.com
Image via nostagjicmoviesandthings.blogspot.com

How about some simple SSD and storage math?

On a very conservative basis, my estimate is that around 250PB of nand flash SSD drives have been shipped and installed on a revenue basis, attached to or in storage systems and appliances. That is combining what Dell + DotHill + EMC + Fujitsu + HDS + HP + IBM (including TMS) + NEC + NetApp + Oracle among other legacy vendors have shipped, along with the new all-flash as well as hybrid vendors (e.g. Cloudbyte, FusionIO (via their NexGen acquisition), Kaminario, Greenbytes, Nutanix or Nimble, Purestorage, Starboard or Solidfire, Tegile or Tintri, Violin or Whiptail among others).

It is also a safe assumption, based on how customers configure and use those and other storage systems, that some form of RAID is involved. Thus if things were as bad as some researchers, vendors and their pundits have made them out to be, wouldn't we be hearing of those issues?

Is it just a RAID 5 problem and that RAID 6 magically corrects the problem?

Well, that depends on apples to apples vs. apples to oranges comparisons.

For example, if you are using a 14+2 (16 drive) RAID 6 to compare to say a 3+1 (4 drive) RAID 5, that is not a fair comparison. Granted, it is a handy one if you are a vendor that supports wider RAID groups, stripes and ranks vs. those who do not. However, also keep in mind that some legacy vendors also support wide stripes and RAID groups.
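
Here is some simple generic math (illustrative only) showing why a wide RAID 6 group and a narrow RAID 5 group are apples to oranges, both for usable capacity and for how much of a full-stripe write any single drive absorbs:

def raid_group_profile(data_drives, parity_drives):
    total = data_drives + parity_drives
    # usable capacity percentage, and each drive's share of a full-stripe write
    return 100.0 * data_drives / total, 100.0 / total

for label, d, p in [("RAID 5 (3+1)", 3, 1), ("RAID 6 (14+2)", 14, 2)]:
    usable, share = raid_group_profile(d, p)
    print(f"{label}: {usable:.1f}% usable capacity, each drive absorbs ~{share:.1f}% of a full-stripe write")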

So in some cases the magic is not in the RAID level, rather in the implementation and how it is configured, or lack thereof.

Video

Watch this TechTarget produced video recorded live while I was at EMCworld 2013 to learn more.

Otherwise, ok, nuff said (for now).

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

Cloud, Virtual, Server, Storage I/O and other technology tiering

Storage I/O trends

Tiering technology and the right data center tool for a given task

Depending on who or what your sphere of influence is, or what your sources of information and insight are, there will be different views of tiering, particularly when it comes to tiered storage and storage tiering for cloud, virtual and traditional environments.

Recently I did a piece over at 21st century IT (21cit) titled Tiered Storage Explained that looks at both tiered storage and storage tiering (e.g. movement and migration, automated or manual), which you can read here.

In the data center (or information factory) everything is not the same, as different applications have various performance, availability, capacity and economic requirements among others. Consequently there are different levels or categories of service along with associated tiers of technology to support them, more on these in a few moments.

Technology tiering is all around you

Tiering is not unique to Information Technology (IT); it is more common than you may realize, granted not always called tiering per se. For example there are different tiers of transportation (besides public or private, shared or single use) ranging from planes, trains, bicycles and boats among others.

Tiered transportation (Bikes, Trains, Planes, Gondolas)

Storage I/O trends

Moving beyond IT (we will get back to that shortly), there are other examples of tiered technologies. For example, I live in the Stillwater / Minneapolis Minnesota area and thus have a need for different types of snow movement and management tools; after all, not all snow situations are the same.

Snow plow
Tiered snow movement technology (Different tools for various tasks)

The other part of the year, when the snow is not actually accumulating or the St. Croix river is not frozen (which on a good year can be from March to November), it’s fishing time. That means having different types of fishing rods rigged for various things such as casting, trolling or jigging, not to mention big fish or little fish, something like how a golfer has different clubs. While a single fishing rod, like a single club, can do the task, it is not as practical; thus different tools for various tasks.

Different sizes and types of fish


Speaking of transportation and automobiles, there are also various metrics, some of which have a correlation to data center energy use and effectiveness, not to mention EPA Energy Star for Data Centers and Data Center Storage.


Storage I/O trends

Technology tiering in and around the data center

IT data center

Now let’s get back to technology tiering in the data center (or information factory), including tiered storage and storage tiering (here’s a link to the tiered storage explained piece I mentioned earlier). The three primary building blocks for IT services are processing or compute (e.g. servers, workstations), networking or connectivity, and storage, which include hardware, software, management tools and applications. These resources in turn get accessed by, yes you guessed it, different tiers or categories of devices, from mobile smart phones, tablets, laptops, workstations or terminals to browsers, applets and other presentation services.

IT building blocks, server, storage, networks

Let’s focus on storage for a bit (pun intended)

Keep in mind that not everything is the same in the data center from a performance, availability, capacity and economic perspective. This means there are different threat risks to protect applications and data against, along with different performance or space capacity needs among others.

data protection tiers
Avoid treating all threat risks the same, tiered data protection

Tiered data protection
Part of modernizing data protection is aligning various tools and technologies to meet different requirements including Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) along with Service Level Agreements (SLAs) and Service Level Objectives (SLO’s).

In addition to protecting data and applications to meet various needs, there are also tiered storage mediums or media (e.g. HDD, SSD, Tape) along with storage systems.

Storage Tiers
Storage I/O trends

Excerpt, Chapter 9: Storage Services and Systems from my book Cloud and Virtual Data Storage Networking (CRC Press), available via Amazon (also Kindle) and other venues.

9.2 Tiered Storage

Tiered storage is often referred to by the type of disk drives or media, by the price band, by the architecture or by its target use (online for files, emails and databases; near line for reference or backup; offline for archive). The intention of tiered storage is to configure various types of storage systems and media for different levels of performance, availability, capacity and energy or economics (PACE) capabilities to meet a given set of application service requirements. Other storage mediums such as HDD, SSD, magnetic tape and optical storage devices are also used in tiered storage.

Storage tiering can mean different things to different people. For some it is describing storage or storage systems tied to business, application or information services delivery functional need. Others classify storage tiers by price band or how much the solution costs. For others it’s the size or capacity or functionality. Another way to think of tiering is by where it will be used such as on-line, near-line or off-line (primary, secondary or tertiary). Price bands are a way of categorizing disk storage systems based on price to align with various markets and usage scenarios. For example consumer, small office home office (SOHO) and low-end SMB in a price band of under $5,000 USD, mid to high-end SMB in middle price bands from $50,000 to $100,000 range, and small to large enterprise systems ranging from a few hundred thousand dollars to millions of dollars.

Another method of classification is by high-performance active or high-capacity inactive or idle. Storage tiering is also used in the context of different mediums such as high-performance solid state devices (SSD) or 15,000 revolution per minute (15K RPM) SAS or Fibre Channel hard disk drives (HDD), or slower 7.2K and 10K high-capacity SAS and SATA drives, or magnetic tape. Yet another category is internal dedicated, external shared, networked and cloud accessible using different protocols and interfaces. Adding to the confusion are marketing approaches that emphasize functionality as defining a tier in trying to stand out and differentiate above the competition. In other words, if you can’t beat someone in a given category or classification then just create a new one.

Another dimension of tiered storage is tiered access, meaning the type of storage I/O interface and protocol or access method used for storing and retrieving data. For example, high-speed 8Gb Fibre Channel (8GFC) and 10GbE Fibre Channel over Ethernet (FCoE) versus older and slower 4GFC or low-cost 1Gb Ethernet (1GbE) or high performance 10GbE based iSCSI for shared storage access or serial attached SCSI (SAS) for direct attached storage (DAS) or shared storage between a pair of clustered servers. Additional examples of tiered access include file or NAS based access of storage using network file system (NFS) or Windows-based Common Internet File system (CIFS) file sharing among others.

Different categories of storage systems, also called tiered storage systems, combine various tiered storage mediums with tiered access and tiered data protection. For example, tiered data protection includes local and remote mirroring, in different RAID levels, point-in-time (pit) copies or snapshots and other forms of securing and maintaining data integrity to meet various service level, RTO and RPO requirements. Regardless of the approach or taxonomy, ultimately, tiered servers, tiered hypervisors, tiered networks, tiered storage and tiered data protection are about and need to map back to the business and applications functionality.

Storage I/O trends

There is more to storage tiering which includes movement or migration of data (manually or automatically) across various types of storage devices or systems. For example EMC FAST (Fully Automated Storage Tiering), HDS Dynamic Tiering, IBM Easy Tier (and here), and NetApp Virtual Storage Tier (replaces what was known as Automated Storage Tiering) among others.

Likewise there are different types of storage systems or appliances from primary to secondary as well as for backup and archiving.

Then there are also markets or price bands (cost) for various storage systems solutions to meet different needs.

Needless to say there is plenty more to tiered storage and storage tiering for later conversations.

However for now check out the following related links:
Non Disruptive Updates, Needs vs. Wants (Requirements vs. wish lists)
Tiered Hypervisors and Microsoft Hyper-V (Different types or classes of Hypervisors for various needs)
tape summit resources (Using different types or tiers of storage)
EMC VMAX 10K, looks like high-end storage systems are still alive (Tiered storage systems)
Storage comments from the field and customers in the trenches (Various perspectives on tools and technology)
Green IT, Green Gap, Tiered Energy and Green Myths (Energy avoidance vs. energy effectiveness and tiering)
Has SSD put Hard Disk Drives (HDD’s) On Endangered Species List? (Tiered storage systems and devices)
Tiered Storage, Systems and Mediums (Storage Tiering and Tiered Storage)
Cloud, virtualization, Storage I/O trends for 2013 and beyond (Industry Trends and Perspectives)
Amazon cloud storage options enhanced with Glacier (Tiered Cloud Storage)
Garbage data in, garbage information out, big data or big garbage? (How much data are you preserving or hoarding?)
Saving Money with Green IT: Time To Invest In Information Factories
I/O Virtualization (IOV) and Tiered Storage Access (Tiered storage access)
EMC VFCache respinning SSD and intelligent caching (Storage and SSD tiering including caching)
Green and SASy = Energy and Economic, Effective Storage (Tiered storage devices)
EMC Evolves Enterprise Data Protection with Enhancements (Tiered data protection)
Inside the Virtual Data Center (Data Center and Technology Tiering)
Airport Parking, Tiered Storage and Latency (Travel and Technology, Cost and Latency)
Tiered Storage Strategies (Comments on Storage Tiering)
Tiered Storage: Excerpt from Cloud and Virtual Data Storage Networking (CRC Press, see more here)
Using SAS and SATA for tiered storage (SAS and SATA Storage Devices)
The Right Storage Option Is Important for Big Data Success (Big Data and Storage)
VMware vSphere v5 and Storage DRS (VMware vSphere and Storage Tiers)
Tiered Communication and Media Venues (Social and Traditional Media for IT)
Tiered Storage Explained

Ok, nuff said (for now).

Cheers gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved

As the platters spin, HDD’s for cloud, virtual and traditional storage environments

HDDs for cloud, virtual and traditional storage environments

Storage I/O trends

Updated 1/23/2018

As the platters spin is a follow-up to a recent series of posts on Hard Disk Drives (HDD’s) along with some posts about How Many IOPS HDD’s can do.

HDD and storage trends and directions include among others

HDD’s will continue to be declared dead into the next decade, just as they have been for over a decade; meanwhile they continue to be enhanced and used in evolving roles.

hdd and ssd

SSD will continue to coexist with HDD, either as separate devices or converged as HHDD’s. When, where and how they are used will also continue to evolve. High IO (IOPS) or low latency activity will continue to move to some form of nand flash SSD (with PCM around the corner), while capacity-centric data, including some that has been on tape, stays on disk. Instead of more HDD capacity in a server, it moves to a SAN or NAS or to a cloud or service provider. This includes backup/restore, BC, DR, archive and online reference or what some call active archives.

The need for storage spindle speed and more

The need for faster revolutions per minute (RPM) platter spin speed is being replaced by SSD and more robust smaller form factor (SFF) drives. For example, some of today’s 2.5” SFF 10,000 RPM (e.g. 10K) SAS HDD’s can do as well as or better than their larger 3.5” 15K predecessors for both IOPS and bandwidth. This is also an example of how the RPM speed of a drive may not be the sole determinant of performance as it has been in the past.
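
For context, average rotational latency is simple math (half a revolution), which is part of why spin speed alone no longer tells the whole performance story; seek time, caching, interface and firmware matter as much or more:

def avg_rotational_latency_ms(rpm):
    # On average the platter turns half a revolution before the data arrives under the head
    return (60.0 / rpm) * 1000.0 / 2.0

for rpm in (7200, 10000, 15000):
    print(f"{rpm} RPM: ~{avg_rotational_latency_ms(rpm):.2f} ms average rotational latency")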


Performance comparison of four different drive types.

The need for storage space capacity and areal density

In terms of storage enhancements, watch for the appearance of Shingled Magnetic Recording (SMR) enabled HDD’s to help further boost the space capacity in the same footprint. Using SMR, HDD manufacturers can put more bits (e.g. higher areal density) into the same physical space on a platter.


Traditional vs. SMR to increase storage areal density capacity

The generic idea with SMR is to increase areal density (how many bits can be safely stored per square inch) of data placed on spinning disk platter media. In the above image on the left is a representative example of how traditional magnetic disk media lays down tracks next to each other. With traditional magnetic recording approaches, the tracks are placed as close together as possible for the write heads to safely write data.

With newer recording formats such as SMR, along with improvements to read/write heads, the tracks can be more closely grouped together in an overlapping way. This overlapping (used in a generic sense) is like how the shingles on a roof overlap, hence Shingled Magnetic Recording. Other magnetic recording or storage enhancements in the works include Heat Assisted Magnetic Recording (HAMR) and Helium-filled drives. Thus, there is still plenty of room for bits and bytes growth in HDD’s well into the next decade to co-exist with and complement SSD’s.
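
As a rough and deliberately simplified sketch of how areal density translates into platter capacity, the densities, geometry and shingling gain below are hypothetical placeholders rather than any specific drive’s numbers:

import math

def platter_surface_gb(areal_density_gbit_per_sq_in, outer_diam_in=3.5, inner_diam_in=1.5):
    # Approximate recording band area of one platter surface times areal density, bits to bytes
    area_sq_in = math.pi * ((outer_diam_in / 2) ** 2 - (inner_diam_in / 2) ** 2)
    return areal_density_gbit_per_sq_in * area_sq_in / 8.0

conventional = platter_surface_gb(625)       # hypothetical conventional recording density (Gbit per sq. inch)
shingled = platter_surface_gb(625 * 1.25)    # hypothetical ~25% gain from overlapping (shingled) tracks
print(f"Conventional: ~{conventional:.0f} GB per surface, SMR: ~{shingled:.0f} GB per surface")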

DIF and AF (Advanced Format), or software defining the drives

Another evolving storage feature that ties into HDD’s is the Data Integrity Field (DIF), which has a couple of different types. Depending on which type of DIF (0, 1, 2 or 3) is used, there can be added data integrity checks from the application to the storage medium or drive beyond normal functionality. Here is something to keep in mind: as there are different types or levels of DIF, when somebody says they support or need DIF, ask them which type or level as well as why.

Are you familiar with Advanced Format (AF)? If not, you should be. Traditionally, outside of special formats for some operating systems or controllers, the standard open systems data storage block, page or sector has been 512 bytes. This served well in the past; however, with the advent of TByte and larger sized drives, a new mechanism is needed. The need is to support both larger average data allocation sizes from operating systems and storage systems, as well as to cut the overhead of managing all the small sectors. Operating systems and file systems have added new partitioning features such as the GUID Partition Table (GPT) to support 2TB and larger SSD, HDD and storage system LUN’s.

These enhancements are enabling larger devices to be used in place of traditional Master Boot Record (MBR) or other operating system partition and allocation schemes. The next step, however, is to teach operating systems, file systems and hypervisors, along with their associated tools and drivers, how to work with 4,096 byte (4 Kbyte) sectors. The advantage will be cutting the overhead of tracking all of those smaller sectors or file system extents and clusters. Today many HDD’s support AF, however they may have 512-byte emulation mode enabled by default due to lack of operating system or other support.
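
The addressing and overhead math behind AF and GPT is straightforward; this generic arithmetic shows the ceiling of a 32-bit sector count (the MBR limitation) and how many fewer sectors there are to track with 4K sectors:

def max_addressable_tib(sector_bytes, address_bits=32):
    # Largest capacity addressable with a given LBA count width (e.g. MBR's 32-bit sector count)
    return (2 ** address_bits) * sector_bytes / (1024 ** 4)

print(f"512-byte sectors: ~{max_addressable_tib(512):.0f} TiB ceiling")
print(f"4K (AF) sectors:  ~{max_addressable_tib(4096):.0f} TiB ceiling")

capacity_bytes = 4 * 10**12  # a 4TB drive
print(f"Sectors to track: {capacity_bytes // 512:,} at 512B vs {capacity_bytes // 4096:,} at 4K")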

Intelligent Power Management, moving beyond drive spin down

Intelligent Power Management (IPM) is a collection of techniques that can be applied to vary the amount of energy consumed by a drive, controller or processor to do its work. In the case of an HDD these include slowing the spin rate of the platters; however, keep in mind that mass in motion tends to stay in motion. This means that HDD’s, once up and spinning, do not need as much relative power as they function like a flywheel. Their power draw comes during reads and writes, due in part to the movement of the read/write heads, and also from running the processors and electronics that control the device. Another big power consumer is when drives spin up; thus if they can be kept moving, albeit at a lower rate, along with disabling energy used by the read/write heads and their electronics, you can see a drop in power consumption.

Btw, a current generation 3.5” 4TB 6Gbs SATA HDD consumes about 6-7 watts of power while in active use, or less when in idle mode. Likewise a current generation high-performance 2.5” 1.2TB HDD consumes about 4.8 watts, a far cry from the 12-16 plus watts of energy some use as HDD FUD.
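
Using the wattage figures above, here is a quick sketch of watts per TB and annual energy; the electricity rate is an assumed placeholder, so plug in your own facility’s numbers:

def annual_kwh(watts, hours_per_year=24 * 365):
    return watts * hours_per_year / 1000.0

rate_per_kwh = 0.10  # assumed electricity cost in $/kWh, adjust for your facility
for name, watts, tb in [("3.5in 4TB SATA", 7.0, 4.0), ("2.5in 1.2TB SAS", 4.8, 1.2)]:
    kwh = annual_kwh(watts)
    print(f"{name}: {watts / tb:.1f} W/TB, ~{kwh:.0f} kWh/year, ~${kwh * rate_per_kwh:.2f}/year")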

Hybrid Hard Disk Drives (HHDD) and Solid State Hybrid Drives (SSHD)

Hybrid HDD’s (HHDD’s), also known as Solid State Hybrid Drives (SSHD), have been around for a while, and if you have read my earlier posts you know that I have been a user and fan of them for several years. However, one of the drawbacks of HHDD’s has been the lack of write acceleration (e.g. they only optimize for reads) with some models. Current and emerging HHDD’s are appearing with a mix of nand flash SLC (used in earlier versions), MLC and eMLC along with DRAM while enabling write optimization. There are also more drive options available as HHDD’s from different manufacturers, both for desktop and enterprise class scenarios.

The challenge with HHDD’s is that many vendors either do not understand how they fit and complement their tiering or storage management software tools, or simply do not see the value proposition. I have had vendors and others tell me that HHDD’s don’t make sense as they are too simple; how can they be a fit without requiring tiering software, controllers, SSD and HDD’s to be viable?

Storage I/O trends

I also see a trend similar to when the desktop high-capacity SATA drives appeared for enterprise-class storage systems in the early 2000s. Some of the same people did not see where or how a desktop class product or technology could ever be used in an enterprise solution.

Hmm, hey wait a minute, I seem to recall similar thinking when SCSI drives appeared in the early 90s; funny how some things do not change. Deja vu anybody?

Does that mean HHDD’s will be used everywhere?

Not necessarily, however, there will be places where they make sense, others where either an HDD or SSD will be more practical.

Networking with your server and storage

Drive native interfaces near-term will remain 6Gbs SAS and SATA (with 12Gbs SAS on the way) along with some FC (you might still find a parallel SCSI drive out there). Likewise, with bridges or interface cards, those drives may appear as USB or something else.

What about SCSI over PCIe, will that catch on as a drive interface? Tough to say; however, I am sure we can find some people who will gladly try to convince you of it. FC-based drives operating at 4Gbs FC (4GFC) are still being used in some environments; however most activity is shifting over to SAS and SATA. SAS and SATA are switching over from 3Gbs to 6Gbs, with 12Gbs SAS on the roadmap.

So which drive is best for you?

That depends: do you need bandwidth or IOPS, low latency or high capacity, a small low-profile thin form factor or particular feature functions? Do you need a hybrid or all SSD, or a self-encrypting device (SED) also known as Instant Secure Erase (ISE)? These are among your various options.

Disk drives

Why the storage diversity?

Simple, some are legacy soon to be replaced and disposed of while others are newer. I also have a collection so to speak that get used for various testing, research, learning and trying things out. Click here and here to read about some of the ways I use various drives in my VMware environment including creating Raw Device Mapped (RDM) local SAS and SATA devices.

Other capabilities and functionality existing or being added to HDD’s include RAID and data copy assist, secure erase, self-encryption and vibration dampening, among other abilities for supporting dense data environments.

Where To Learn More

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

Do not judge a drive only by its interface, space capacity, cost or RPM alone. Look under the cover a bit to see what is inside in terms of functionality, performance, and reliability among other options to fit your needs. After all, in the data center or information factory not everything is the same.

From a marketing and fun to talk about new technology perspective, HDD’s might be dead for some. The reality is that they are very much alive in physical, virtual and cloud environments, granted their role is changing.

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

Can we get a side of context with them IOPS server storage metrics?

Can we get a side of context with them server storage metrics?

What’s the best server storage I/O network metric or benchmark? It depends, as there needs to be some context with them IOPS and other server storage I/O metrics that matter.

There is an old saying that the best I/O (Input/Output) is the one that you do not have to do.

In the meantime, let’s get a side of some context with them IOPS from vendors, marketers and their pundits who are tossing them around for server, storage and IO metrics that matter.

Expanding the conversation, the need for more context

The good news is that people are beginning to discuss storage beyond space capacity and cost per GByte, TByte or PByte for both DRAM or nand flash Solid State Devices (SSD), Hard Disk Drives (HDD) along with Hybrid HDD (HHDD) and Solid State Hybrid Drive (SSHD) based solutions. This applies to traditional enterprise or SMB IT data center with physical, virtual or cloud based infrastructures.

hdd and ssd iops

This is good because it expands the conversation beyond just cost for space capacity into other aspects including performance (IOPS, latency, bandwidth) for various workload scenarios, along with availability, energy effectiveness and management.

Adding a side of context

The catch is that IOPS while part of the equation are just one aspect of performance and by themselves without context, may have little meaning if not misleading in some situations.

Granted it can be entertaining, fun to talk about or simply make good press copy to tout a million IOPS. IOPS vary in size depending on the type of work being done, not to mention reads or writes, random or sequential, which also have a bearing on data throughput or bandwidth (MBytes per second) along with response time. Not to mention block, file, object or blob as well as table.

However, are those million IOP’s applicable to your environment or needs?

Likewise, what do those million or more IOPS represent about type of work being done? For example, are they small 64 byte or large 64 Kbyte sized, random or sequential, cached reads or lazy writes (deferred or buffered) on a SSD or HDD?
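
The arithmetic is simple: the same IOPS number means very different bandwidth depending on the IO size, which is exactly why the size needs to be stated:

def bandwidth_mbps(iops, io_size_bytes):
    return iops * io_size_bytes / 1_000_000.0

for size in (64, 4096, 65536):
    print(f"1,000,000 IOPS at {size:,} bytes = {bandwidth_mbps(1_000_000, size):,.0f} MBytes/sec")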

How about the response time or latency for achieving them IOPS?

In other words, what is the context of those metrics and why do they matter?

storage i/o iops
Click on image to view more metrics that matter including IOP’s for HDD and SSD’s

Metrics that matter give context for example IO sizes closer to what your real needs are, reads and writes, mixed workloads, random or sequential, sustained or bursty, in other words, real world reflective.

As with any benchmark, take them with a grain (or more) of salt; the key is to use them as an indicator and then align to your needs. The tool or technology should work for you, not the other way around.

Here are some examples of context that can be added to help make IOP’s and other metrics matter:

  • What is the IOP size, are they 512 byte (or smaller) vs. 4K bytes (or larger)?
  • Are they reads, writes, random, sequential or mixed and what percentage?
  • How was the storage configured including RAID, replication, erasure or dispersal codes?
  • Then there is the latency or response time and IO queue depths for the given number of IOPS.
  • Let us not forget if the storage systems (and servers) were busy with other work or not.
  • If there is a cost per IOP, is that list price or discount (hint, if discount start negotiations from there)
  • What was the number of threads or workers, along with how many servers?
  • What tool was used, its configuration, as well as raw or cooked (aka file system) IO?
  • Was the IOP’s number with one worker or multiple workers on a single or multiple servers?
  • Did the IOP’s number come from a single storage system or total of multiple systems?
  • Fast storage needs fast servers and networks, what was their configuration?
  • Was the performance a short burst, or long sustained period?
  • What was the size of the test data used; did it all fit into cache?
  • Were short stroking for IOPS or long stroking for bandwidth techniques used?
  • Were data footprint reduction (DFR) techniques (thin provisioning, compression or dedupe) used?
  • Was write data committed synchronously to storage, or deferred (aka lazy writes)?

The above are just a sampling and not all may be relevant to your particular needs; however they help to put IOPS into more context. Another consideration around IOPS is how the results were produced: from an actual running application using some measurement tool, or generated from a workload tool such as IOmeter, IOrate or VDbench among others?
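
One way to keep results honest is to record that context alongside the headline number; here is a minimal sketch of doing so (the field names and values are illustrative, not from any standard or benchmark tool):

from dataclasses import dataclass, asdict

@dataclass
class IOResult:
    iops: float
    avg_latency_ms: float
    io_size_bytes: int
    read_pct: int          # percentage of reads vs. writes
    access_pattern: str    # "random", "sequential" or "mixed"
    queue_depth: int
    workers: int
    servers: int
    tool: str              # e.g. IOmeter, IOrate, VDbench
    duration_sec: int
    storage_config: str    # RAID level, cache, DFR settings and so forth

result = IOResult(185000, 0.85, 8192, 70, "random", 32, 4, 2, "VDbench", 1800, "RAID 6 (14+2), write-back cache")
print(asdict(result))  # publish the context, not just the IOPS headline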

Sure, there are more contexts and information that would be interesting as well, however learning to walk before running will help prevent falling down.

Storage I/O trends

Does size or age of vendors make a difference when it comes to context?

Some vendors are doing a good job of going for out of this world record-setting marketing hero numbers.

Meanwhile other vendors are doing a good job of adding context to their IOPS, response time, bandwidth and other metrics that matter. There is a mix of startup and established vendors that give context with their IOPS and other metrics; likewise, size or age does not seem to matter for those who lack context.

Some vendors may not offer metrics or information publicly, so fine, go under NDA to learn more and see if the results are applicable to your environments.

Likewise, if they do not want to provide the context, then ask some tough yet fair questions to decide if their solution is applicable for your needs.

Storage I/O trends

Where To Learn More

View additional NAS, NVMe, SSD, NVM, SCM, Data Infrastructure and HDD related topics via the following links.

Additional learning experiences along with common questions (and answers), as well as tips can be found in Software Defined Data Infrastructure Essentials book.

Software Defined Data Infrastructure Essentials Book SDDC

What This All Means

What this means is that we should start providing, and asking for, metrics that matter such as IOPS with context.

If you have a great IOPS metric and you want it to matter, then include some context such as the IO size (e.g. 4K, 8K, 16K, 32K, etc.), percentage of reads vs. writes, latency or response time, and random or sequential access.

IMHO the most interesting or applicable metrics that matter are those relevant to your environment and application. For example, if your main application that needs SSD does about 75% reads (random) and 25% writes (sequential) with an average size of 32K, then while fun to hear about, how relevant is a million 64 byte read IOPS? Likewise when looking at IOPS, pay attention to the latency, particularly if SSD or performance is your main concern.
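
Translating a profile like that into the metrics that matter is quick math (the IOPS requirement and latency below are made-up numbers for illustration):

io_size = 32 * 1024      # 32K average IO size from the hypothetical profile above
iops_needed = 20000      # hypothetical application requirement
print(f"~{iops_needed * io_size / 1_000_000:.0f} MBytes/sec at {iops_needed:,} IOPS of 32K IOs")

avg_latency_sec = 0.0005  # 0.5 ms average response time
print(f"Implies ~{iops_needed * avg_latency_sec:.0f} IOs outstanding (Little's Law: concurrency = IOPS x latency)")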

Get in the habit of asking or telling vendors or their surrogates to provide some context with them metrics if you want them to matter.

So how about some context around them IOP’s (or latency and bandwidth or availability for that matter)?

Ok, nuff said, for now.

Gs

Greg Schulz – Microsoft MVP Cloud and Data Center Management, VMware vExpert 2010-2017 (vSAN and vCloud). Author of Software Defined Data Infrastructure Essentials (CRC Press), as well as Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press), Resilient Storage Networks (Elsevier) and twitter @storageio. Courteous comments are welcome for consideration. First published on https://storageioblog.com any reproduction in whole, in part, with changes to content, without source attribution under title or without permission is forbidden.

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO. All Rights Reserved. StorageIO is a registered Trade Mark (TM) of Server StorageIO.

June 2013 Server and StorageIO Update Newsletter

StorageIO News Letter Image
June 2013 News letter

Welcome to the June 2013 edition of the StorageIO Update. In this edition coverage includes data center infrastructure management (DCIM), metrics that matter, industry trends, and IBM buying Softlayer for cloud, IaaS and managed services. Other items include backup and data protection topics for SMBs, as well as big data storage topics. Also the EPA has announced a review session for Energy Star for Data Center storage where you can give your comments. Enjoy this edition of the StorageIO Update newsletter.

Click on the following links to view the June 2013 edition as an HTML (sent via email) version or as a PDF version.

Visit the news letter page to view previous editions of the StorageIO Update.

You can subscribe to the news letter by clicking here.

Enjoy this edition of the StorageIO Update news letter, let me know your comments and feedback.

Ok Nuff said, for now

Cheers
Gs

Greg Schulz – Author Cloud and Virtual Data Storage Networking (CRC Press), The Green and Virtual Data Center (CRC Press) and Resilient Storage Networks (Elsevier)
twitter @storageio

All Comments, (C) and (TM) belong to their owners/posters, Other content (C) Copyright 2006-2024 Server StorageIO and UnlimitedIO LLC All Rights Reserved